AI: Am I ... Being Paranoid?

We asked Georgia Tech AI experts key questions about the technology, its use and misuse, and how it might shape our shared future. Here’s part three of three.


Portrayals of AI in movies and other media can fuel apprehension and paranoia. These fictional narratives often emphasize dystopian scenarios, where AI turns against humanity. “In Hollywood, you don't need to know how to make the AI to tell the story about AI,” said Mark Riedl.

“A lot of people are talking about this Terminator-like scenario where AI is going to kill us all,” said Srijan Kumar. “I think that’s a bit far-fetched.”

In many instances, the depiction of AI in popular culture has been exaggerated. “There's a lack of understanding of the technology and its limitations,” said Judy Hoffman. “So I think part of our job in characterizing those limitations is then communicating them and not necessarily overhyping what we have today.”

“I don't think in the near future we are going to see a situation like Blade Runner where humans are the second-class citizens and machines are all-powerful,” said Munmun De Choudhury. But that doesn't mean there are no legitimate concerns.

"There's a lack of understanding of the technology and its limitations. I think part of our job in characterizing those limitations is then communicating them and not necessarily overhyping what we have today." - Judy Hoffman

What Are the Realistic Apprehensions About AI?

“I believe there are a couple of things that are very realistic,” says Kumar. “Things like AI-powered misinformation, hate speech, fake reviews, and voice scams.”

AI-powered tools can produce large volumes of content at unprecedented speed, and that content can spread rapidly across online platforms, making it challenging to address or moderate.

Kumar says that as AI tools become more sophisticated, scams and misinformation can be personalized and difficult for even the most discerning users to identify. In one instance, a couple in Canada was tricked by a phone call that used a cloned voice of their grandson, who claimed to have been in a car accident.

There are tools available to train AI to replicate a voice using only a few seconds of clear audio. “These new tools are able to do some things that even the people creating them weren’t anticipating,” said De Choudhury. “Which does seem like the opening line of a Terminator movie.”

“What we are going to see is probably an existential threat, because it'll be so subtle,” De Choudhury said. “We still don't understand when things can go wrong or things can break with AI.”

“We've already seen some evidence that certain AI systems are getting worse over time,” explained David Joyner.

He points to a Microsoft chatbot built on human-generated social media posts that was taken down after a couple of days because its content grew sexist, racist, and riddled with conspiracy theories. “Which is maybe the most human-like thing AI has ever done.”

Kumar agrees and says that, despite millions of dollars invested in security, research hasn't kept people from sidestepping well-intentioned safety measures.

There is still time to take a closer look, because AI isn't as advanced as it is portrayed on screen. “I think it's like the self-driving car,” says Riedl. “Ten years ago, we were promised that it would be ready in two years. And, it's starting to happen, but it's taking a lot longer.”

 

Are We Too Trusting?

Users often accept AI-generated recommendations without critical evaluation, like entering an address into a vehicle's navigation system and following the directions no matter the circumstances. This can be problematic if the system provides incorrect or biased information.

As Kumar explains, some AI tools like ChatGPT and Bard have started issuing warning statements when asked to stand in for a lawyer or provide medical advice. “People are too trusting and, of course, there's legal liabilities for the company if they give an incorrect response and something bad happens,” he said.

“People treat it like a search engine,” said Brian Magerko, “or an oracle, trusting the first output a system generates.” He says it’s easy to fall into a trap. The response seems so realistic and lifelike that we forget there is nothing lifelike about it.

“We shouldn’t mistake it for more than just a calculator. Which it is. It’s just calculating numbers. That’s all.”

 

Is the Greatest AI Threat Existential?

Many of the most-used AI tools, such as ChatGPT, underwent extensive training on vast datasets, including millions of books, news articles, blog posts, and code segments. This training involved thousands of computers working tirelessly for months, but it is important to remember that the models were built on, and derived from, human knowledge and creativity.

“We're going to have super useful stuff. And we're going to have fun stuff. And there's going to be some productivity-enhancing stuff,” Riedl said. “But, I think that it’s not going to do everything we expect.”

"I believe there are a couple of things that are very realistic. Things like misinformation, hate speech, fake reviews, and voice scams." - Sriian Kumar

Joyner says these models were built to operate like a human brain, but they cannot replicate how humans have evolved to use their own minds. “Our brains are built to manage so many different concerns,” he said. “We are assuming AI will be like us. It will have motivations like us. It will have priorities like us.”

As AI is fundamentally rooted in the creations of humans and continually improves through our interactions, it raises a profound question: Is AI less a measure of technological advancement and more a reflection of humanity itself?

“The next generation of AI is likely going to be trained on output in large part from the previous generation of AI, so it’s going to reinforce its own biases, reinforce its own understanding,” said Riedl — biases and understandings that we created.

“AI will ultimately be what people decide to do with it,” Magerko said. “This is very much us asking questions about ourselves.”

 

AI Literacy

As researchers work to make sure AI is ready for everyday users, individuals can also do a lot to prepare themselves for the expansion of AI. According to Georgia Tech’s experts, even the most casual of AI users should strive to be AI literate.

An important aspect of AI literacy is learning how to engage with AI systems in safe and effective ways.

“Understanding a bit about how AI works, the mechanisms behind it, its limitations, and knowing what you should and shouldn't do with it will go a long way,” Magerko said. For example, a person should avoid putting personal information into an AI system if they don’t know how the data might be used.

It is also crucial to understand that AI is imperfect and prone to introducing bias. It's a well-documented risk that continues to manifest in subtle, and some not so subtle, ways.

“AI is taking the biases that exist in society and creating new forms of biases,” De Choudhury said. “We will continue to have AI that is biased, and we can try our best to mitigate it. But it's my personal belief that the bias in AI will never completely go away.”

According to her, the best path forward is for people to try to understand how AI works at a functional level. But most importantly, teaching AI literacy will be necessary to create an informed society.

“When I was young, learning to use a computer and type was the thing you needed to lead a life in modern society,” De Choudhury said. “I think knowledge about AI, and especially where its pitfalls could be, will be something that is very important for years to come.”


Meet the Experts

Media Inquiries: Ayana Isles, aisles3@gatech.edu.

 

Munmun De Choudhury
Associate Professor, Georgia Tech School of Interactive Computing

Judy Hoffman
Assistant Professor, Georgia Tech School of Interactive Computing

David Joyner
Executive Director of Online Education & OMSCS and Senior Research Associate, Georgia Tech College of Computing

Srijan Kumar
Assistant Professor, Georgia Tech School of Computational Science and Engineering

Brian Magerko
Professor, Georgia Tech School of Literature, Media, and Communication and Director of Graduate Studies in Digital Media

Mark Riedl
Taetle Chair and Professor, Georgia Tech School of Interactive Computing

Credits

Writers: Catherine Barzler, Steven Norris
Graphic Design: Julie Watson
Web Design: Rachel Pilvinsky
Photography: Allison Carter, Joya Chapman, Rob Felt
Project Lead: Brice Zimmerman