In this era of rapid technological advancement, Artificial Intelligence (AI) has emerged as a life-changing tool for many, with the undeniable capacity to revolutionize many aspects of human life. But have we taken this power for granted? According to a recent study published in Science Advances, people are 3% less likely to spot false tweets generated by AI than those written by humans. That credibility gap, while small, is concerning given that the problem of AI-generated disinformation seems poised to grow significantly, says Giovanni Spitale, the University of Zurich researcher who led the study. “The fact that AI-generated disinformation is not only cheaper and faster but also more effective gives me nightmares,” he says.
This phenomenon raises critical questions about AI’s influence on our belief systems and the challenges it poses for society. Humans have a natural tendency to trust information that comes from authoritative sources, and technology has evolved to the point where it can convincingly mimic human voices and generate highly realistic content. Generative AI makes it increasingly difficult for individuals to distinguish between what is real and trustworthy and what is not. As a result, people are more likely to unknowingly consume and disseminate disinformation, lending it unwarranted credibility because of its seemingly legitimate source.
Cognitive biases play an influential role in shaping our beliefs and decision-making processes, and AI-powered algorithms are capable of exploiting these biases, tailoring disinformation to resonate with individuals’ preexisting beliefs and preferences. The researchers noted that large language models such as GPT-3 can already produce text indistinguishable from organic, human-written text; the emergence of even more powerful models should therefore be monitored, as they could leave a permanent mark on society. Targeted manipulation can reinforce existing biases, polarize opinions, and further entrench societal divisions. Moreover, the proliferation of social media platforms and personalized news feeds facilitates the rapid spread of AI-generated disinformation, amplifying its impact and extending its reach.
Although AI may not be the “evil power” many consider it to be, it is people who moderate and use AI to create inauthentic content. We are ultimately responsible for how we use technology and whether we harness its advantages wisely. The problem is that the generative AI boom puts powerful, accessible AI tools in the hands of everyone, including unethical actors. Models such as GPT-3 can generate convincing but inaccurate text, letting conspiracy theorists and disinformation campaigns produce false narratives quickly and cheaply. The weapons to fight the problem, AI text-detection tools, are still in the early stages of development, and many are not entirely accurate.
While the prevalence of AI-generated disinformation is a concerning issue, it is crucial not to resort to fear or pessimism. Instead, we must actively address the ethical, regulatory, and educational challenges this phenomenon raises. By fostering critical thinking, implementing responsible AI practices, and promoting digital literacy, we can safeguard the integrity of information, reinforce democratic values, and protect the well-being of our society in the age of AI, perhaps even with the help of more capable AI itself.
Author: Arshiya Sharma
Welham Girls School