Research by Study.com has found that people are most comfortable with artificial intelligence when the stakes are low. For example, you might be fine with AI recommending which movie to see, but not so much when it comes to reporting the news. AI news anchors are already a reality, and, while they may be largely a novelty at this point, they are an example of how AI is increasingly infiltrating our business and personal experiences.
China’s state-run news agency Xinhua has unveiled AI anchors that can be “endlessly copied” and are able to read the news in multiple languages. More recently, the Russian state news channel Rossiya 24 introduced a robot presenter. Some of these possibilities, whether still far off or fast approaching, have serious implications for the trustworthiness of news and potentially for democracy itself. Beyond that, they can also affect enterprises in a variety of ways, from their ability to hire data scientists to their own susceptibility to false information and reports.
The Problems with AI Anchors
Anchors that operate via artificial intelligence allow for end-to-end automation of the news, said Alex Majer, the CEO of Good Robot.
“Of course, news can be manipulated without automation, but automation magnifies that risk,” Majer said. That risk is the widespread dissemination of news that is manipulated, misleading or outright false. In traditional news gathering, humans are involved at several points in the process, he said: they use news judgement, apply their own ethical frameworks to what is reported and published, and can correct or veto stories that don’t pass muster.
“With a digital AI anchor, the possibility exists to alter the sources of news, or cut out objective reporting altogether with no possibility of veto,” Majer said. “You could in fact have end-to-end automated news generation, influencing not only the topics that get covered, but also the point of view and tone in which they are covered.”
That impact goes beyond the news. Deep fakes could affect markets, consumer behaviour, even business decisions if they are not quickly detected. Many industries rely on social media to disseminate information to clients and the media; as fakes become more common online, user trust in those sites, and therefore their usefulness to legitimate organizations, could erode.
The proliferation of deep fakes could also make it harder for organizations to hire data scientists, as experts in the field turn to combating the problem. A salary report from Datafloq found that data science professionals want to work on detecting fakes and on using machine learning to fight them--an admirable goal, but one that could pull workers in a high-demand field away from enterprise data science roles.
Potential Upsides of AI Anchors
Some see potential upsides to AI-powered anchors. For example, such an anchor could conceivably expand the overall television news footprint, Majer said. The result could be a richer, more balanced view of the world, he added. For organizations in niche fields often left out of mainstream media coverage, this could provide a way to reach clients and other interested parties.
In addition, the work of combating deep fakes draws on artificial intelligence, machine learning, data analysis and IT security. Advances made in those fields in the name of stopping the spread of fake news online will benefit other industries as well, as those advances are put to use for their own aims and operations.
For example, a technique recently developed at Carnegie Mellon University uses ML and AI to generate fakes automatically. The system, presented last September at the European Conference on Computer Vision, pits two models against each other--one generating content, the other judging it--in a visual version of AI translation software, an approach that can potentially be put to use in a variety of other ways.
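The "two competing models" idea described above is the core of generative adversarial training. As a rough illustration only (not the CMU system, which works on video and is far more complex), the sketch below pits a one-parameter-pair linear generator against a logistic discriminator on 1-D data: the discriminator learns to score real samples high and fakes low, while the generator learns to produce fakes the discriminator scores high. All names and numbers here are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of adversarial training: a linear generator tries to
# mimic 1-D "real" data drawn from N(4, 0.5), while a logistic
# discriminator D(x) = sigmoid(a*x + c) tries to tell real from fake.
rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

w, b = 1.0, 0.0        # generator: fake = w*z + b, noise z ~ N(0, 1)
a, c = 0.0, 0.0        # discriminator parameters
lr, batch, real_mean = 0.05, 64, 4.0

for step in range(3000):
    real = rng.normal(real_mean, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator ascent step: push D(real) toward 1, D(fake) toward 0
    s_real = sigmoid(a * real + c)
    s_fake = sigmoid(a * fake + c)
    a += lr * (np.mean((1 - s_real) * real) - np.mean(s_fake * fake))
    c += lr * (np.mean(1 - s_real) - np.mean(s_fake))

    # Generator ascent step: push D(fake) toward 1 (non-saturating loss)
    s_fake = sigmoid(a * fake + c)
    d_fake = (1 - s_fake) * a          # d log D(fake) / d fake
    w += lr * np.mean(d_fake * z)
    b += lr * np.mean(d_fake)

samples = w * rng.normal(0.0, 1.0, 1000) + b
print(round(float(samples.mean()), 2))  # drifts toward real_mean
```

The adversarial dynamic is what makes both generated fakes and fake-detectors improve together, which is why advances on one side of the fake-news problem tend to feed the other.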