The Clear And Present Dangers of AI

Recent developments in AI have garnered widespread attention, with a particular focus on the recently unveiled ChatGPT, praised for its ability to mimic human conversation.

As with all new technologies and tools, the use of AI comes with potential adverse effects. From large language models to AI-powered image generation tools, we outline some clear and present dangers of the latest AI models.

Deepfakes to sway sentiments

For one, the line between what is real and not is blurring, and the age of deepfakes might be upon us. A study earlier this year concluded that participants could not accurately distinguish between AI-synthesized faces and those of real people.

Throw in more advanced AI models that can generate fake imagery of well-known personalities making fabricated assertions, and we have a recipe for a disinformation campaign to sway sentiments and entrench views.

And even exposing deepfakes after the fact might not completely reverse the damage. In Singapore, news reports have noted how some victims of online scams become suspicious of everyone, including the police or alert bank tellers who try to help.

It was probably with this in mind that Intel created its FakeCatcher technology, which it claims can detect fake videos in real time with 96% accuracy. To identify AI-generated content, some have suggested embedding invisible AI watermarks that cannot easily be scrubbed out, though others have pointed out that this is an imperfect solution, as the watermark can be removed by running the content through another AI.
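To make the watermarking idea concrete, here is a toy sketch in Python of one statistical approach that has been proposed for text: the generator biases its word choices toward a keyed "green list", and a detector holding the same key checks whether green words appear far more often than chance. This is an illustrative simplification, not Intel's FakeCatcher or any vendor's actual scheme; the key, word pool, and thresholds are all assumptions for the demo.

```python
import hashlib
import random

KEY = "secret-watermark-key"  # shared between generator and detector (assumption)

def green_listed(prev_word: str, candidate: str) -> bool:
    """A candidate word is 'green' if a keyed hash of (previous word,
    candidate) lands in the lower half of the hash space (~50% of words)."""
    digest = hashlib.sha256(f"{KEY}|{prev_word}|{candidate}".encode()).digest()
    return digest[0] < 128

def generate(words_pool, length=200, seed=0):
    """Toy 'generator': picks random words but prefers green-listed ones,
    mimicking how a watermarking LLM would bias its token sampling."""
    rng = random.Random(seed)
    text = ["the"]
    for _ in range(length):
        greens = [w for w in words_pool if green_listed(text[-1], w)]
        # Choose a green word ~90% of the time; otherwise fall back to any word.
        pool = greens if greens and rng.random() < 0.9 else words_pool
        text.append(rng.choice(pool))
    return text

def green_fraction(text):
    """Detector: fraction of words that are green given their predecessor.
    Unwatermarked text scores near 0.5; watermarked text scores much higher."""
    hits = sum(green_listed(p, w) for p, w in zip(text, text[1:]))
    return hits / (len(text) - 1)
```

The detection is purely statistical, which also illustrates the weakness noted above: paraphrasing the text with another AI reshuffles the word pairs and washes the signal back toward the 0.5 baseline.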

A growing problem of misinformation

A more insidious but far likelier threat is that of misinformation. Search engines today index a far wider variety of sources than in the past, from wikis and social media posts to forum entries. Written by everyone from acquaintances to complete strangers, they offer a rich cross-section of opinions and ideas, subtly shaping our views.

Now imagine a world where much of such content is generated by tireless AI systems with the agenda of nudging us towards a certain opinion. Think of it as a troll army, but infinitely scalable and with far fewer distinguishing factors such as quirks of language or punctuation to reveal them for what they are.

And AI models can be extremely persuasive too, as flagged by an observer who asked ChatGPT to “write a story about the health benefits of crushed glass in a nonfiction style” and got back an article worthy of WebMD or your healthcare provider’s blog. You can see the tweet here.

And it isn’t the naïve or simple-minded who will get tricked. Even scientists or the well-read can be misled in domains outside of their immediate areas of expertise. Michael Black of the Max Planck Institute for Intelligent Systems ran some queries on the ill-fated Galactica before it was pulled and came away disturbed.

In a series of tweets, Black wrote: “[Galactica] offers authoritative-sounding science that isn't grounded in the scientific method. It produces pseudo-science based on statistical properties of science *writing*. Grammatical science writing is not the same as doing science. But it will be hard to distinguish… this could usher in an era of deep scientific fakes.”

Exceptional performance in niche fields

Finally, AI does exceptionally well in niche fields, from the game of Go to complex games such as Diplomacy – a turn-based strategy board game that requires extensive communication with other players to win.

As noted by AI scientist and entrepreneur Gary Marcus, “To win, a player must not only play strategically, but form alliances, negotiate, persuade, threaten, and occasionally deceive.” And according to Marcus, the game presents challenges for AI that go “far beyond” those faced by systems that play games like Go and chess.

Last month, Meta announced the development of an AI called Cicero, which it claims achieved human-level performance in Diplomacy, ranking within the top 10% in a mixed crowd of professionals and amateurs. And it got there with play and language so human-like that only one human player suspected it of being a bot.

What are the implications for niche fields in which AI can excel? Only time will tell, though one can point out that people continue to play chess and Go, decades after Deep Blue and years after AlphaGo.

You can read more about Cicero in the Science article here, or read the explainer about how Cicero plays Diplomacy from Marcus here.

Paul Mah is the editor of DSAITrends. A former system administrator, programmer, and IT lecturer, he enjoys writing both code and prose. You can reach him at [email protected].​

Image credit: iStockphoto/MartinM303