AI Will Not Destroy The World—AI Illiteracy And Misuse Could

“The real problem is not whether machines think but whether men do.” This quote by influential psychologist B.F. Skinner aptly captures two human tendencies that are becoming particularly dangerous in the age of generative AI: first, the tendency to ascribe intelligence and free will to machines; and second, the assumption that computer output is always more accurate than human output. These tendencies could have significant consequences as generative AI becomes more pervasive and sophisticated.

Powered by advanced deep learning architectures (e.g., transformers, variational autoencoders (VAEs), and generative adversarial networks (GANs)), generative AI can produce outputs that are, at face value and without scrutiny, indistinguishable from human-generated content. On closer inspection, however, certain patterns start to emerge that reveal their artificial origin. The problem is that, according to several studies, 45% to 50% of people lack the ability to detect such patterns, which may lead them to make important decisions based on entirely false premises. Combine this with the fact that generative AI is getting better by the day, and that large portions of the world population unconsciously seek reaffirming ‘evidence’ of personal beliefs online, and it is easy to see how AI illiteracy mixed with intentional misuse can spell disaster.

AI Lacks Free Will and Conscious Thought: Only Humans Can Do the Right Thing, or Not

When philosophizing about AI, it helps to consider the following distinctions. There is a difference between free will and autonomy; and there is also a difference between thinking and computing. Human beings have free will and the ability to think. Computers, including AI systems, have neither.

Having free will means that humans can stop executing a task midway, either on a whim or as a result of an intellectual realization, an existential epiphany, or a deep intuition. Conversely, computers cannot stop executing a task midway for any of these reasons because they lack the human traits that allow free will to emerge. Similarly, humans can realize when their train of thought is heading in the wrong direction and reassess a thought midway—this is due to our ability to think about our thoughts as they happen, often called meta-consciousness. A computer, by contrast, cannot realize mid-execution that the algorithm it is running is wrong, because it cannot think, only compute.

Since the general population does not make these distinctions, people become easy prey to the fear that computers, and AI in particular, could take over the world. What they need to understand is that those who build AI tools, and those who know how to use them properly, will take over the world if we do not make a concerted effort to raise the average level of AI literacy quickly. If the current trend continues, the gap between small AI-proficient groups and the AI-illiterate masses will grow so large that the world order could change irreversibly. Nothing good can come from such an asymmetry in the knowledge and skill needed to generate and assess information.

In the words of the late Carl Sagan, “We've arranged a global civilization in which most crucial elements profoundly depend on science and technology. We have also arranged things so that almost no one understands science and technology. This is a prescription for disaster. We might get away with it for a while, but sooner or later this combustible mixture of ignorance and power is going to blow up in our faces.”

The Growing Concern: Generative AI’s Potential for Manipulation

While Generative AI tools like GPT models (which generate text) and DALL·E (which creates images) represent tremendous advancements, they also introduce new challenges. One of the most pressing concerns is the misuse of generative AI to create misinformation. The illusion of human-like intelligence in these models can mislead people into believing that the content they generate is factual or human-made, when in reality it is synthesized based on patterns in training data and prompts devised to create false narratives.

Consider the exponential rise in deepfakes: AI-generated images or realistic videos of public figures saying things they never said. These fakes look and sound credible, even though they carry telltale signatures of AI-generated content, some of them glaring. Nevertheless, they are rapidly disseminated across the internet as factual content, manipulating perceptions, opinions, and even political decisions. This becomes a particular issue on social media platforms, where engagement algorithms boost sensational content, whether true or false.

The statistics are concerning:

  1. In a recent survey, 40% of participants could not distinguish between AI-generated and human-made images. This makes them vulnerable to believing—and spreading—misinformation.
  2. A recent study showed that 60% of participants fell victim to AI-automated and AI-generated phishing, a success rate comparable to that of regular phishing messages created by humans.
  3. A study comparing AI-generated propaganda with human-written propaganda found that AI-generated articles were highly persuasive, with 43.5% of participants agreeing with AI-generated propaganda, compared to 47.4% for human-written content. When human editors improved the AI outputs, the AI-generated articles became as persuasive as, or even more persuasive than, the real-world propaganda.

These findings highlight the inherent risk of AI-generated disinformation, especially as AI tools become cheaper and more efficient.

Real-World Examples of Generative AI Failures

Beyond the hype, the truth is that Generative AI, as applied to different fields, produces a large volume of incorrect technical assessments, fabricated information, and incoherent content: in short, unreliable “hallucinations.” Domain experts must always be present for assessment and validation. Even when AI is mostly correct, we cannot delegate ultimate responsibility to a system that can never be driven by the desire to do the right thing. Only entities subject to morality and ethics can be held responsible, and the only ones capable of such a thing are humans.

Here are three real-world examples where these systems have caused significant problems, thus emphasizing the need for ultimate human accountability:

1. Legal: In New York, a lawyer used ChatGPT to generate legal precedents for a case, only to find out that the AI had fabricated six non-existent cases. This led to a court order demanding an explanation, demonstrating the risks of using generative AI in legal contexts without human oversight.

2. Technical: CNET experimented with AI-generated articles on finance topics, but 41 out of 77 published articles were found to contain significant factual errors, and some also included plagiarized content. The company had to issue multiple corrections, illustrating the dangers of trusting AI to produce factually accurate content without careful human review.

3. Diversity: Levi's faced backlash when they announced plans to use AI-generated models to promote diversity in their advertising. Critics argued that using AI to simulate diverse body types and skin colors bypassed real inclusivity efforts, underscoring how generative AI can sometimes be misapplied in areas requiring authentic representation.

These examples make it clear that Generative AI is not without significant risks. AI systems depend entirely on the data and instructions provided by humans. When they generate incorrect or harmful content or make faulty decisions, the problem lies in how they are designed, deployed, or supervised—not in any inherent malevolence. There is no such thing as “AI going rogue.” The real concern is people going rogue, or becoming negligent, with AI.

Real-World Examples of Generative AI Successes

Generative AI can only reach its highest potential when paired with human supervision and verification. Deep learning systems, at least so far, are nowhere near achieving the sense of clarity, relevance, appropriateness, and justice humans provide. Unsupervised AI sounds like a utopia until it starts acting beyond its training: then, it becomes irrelevant at best, disastrous at worst.

Here are three examples where generative AI, duly supervised and verified, maximizes human potential for real benefit.

1. Drug Discovery: Insilico Medicine, a drug discovery company, used generative AI to design a new drug candidate to treat idiopathic pulmonary fibrosis. It applied generative AI throughout the preclinical drug discovery process to identify targets, generate drug candidates, and predict clinical trial outcomes. This approach cut costs to roughly one-tenth and timelines to roughly one-third of traditional levels, reaching the first clinical trial phase in just 2.5 years, compared to six years using conventional methods.

2. Design: Forma, a generative AI tool by Autodesk, enhances the early stages of the design and planning process by automating repetitive tasks and offering AI-powered insights. It enables architects to explore multiple design concepts quickly and evaluate environmental factors like sunlight and wind, helping to optimize building designs. Its contextual modeling feature allows users to set up 3D models of entire projects in minutes, with real-time analyses on various site conditions. This tool empowers architects to work iteratively, improve productivity, and create higher-quality deliverables without requiring deep technical expertise.

3. Customer Service: Verizon uses generative AI to streamline business customer interactions by offering agents quick insights into customer histories and suggesting solutions, which are then verified by human agents. The AI also automates tasks like call summaries, reducing workload and improving efficiency. Continuous feedback from agents helps refine the system for better performance.

The Importance of AI Literacy in the Age of Generative AI

As Generative AI continues to evolve, AI literacy is becoming an essential skill for individuals and businesses alike. Understanding how AI-generated content is created, recognizing its limitations, and learning how to critically assess the validity of AI outputs are necessary steps in navigating an increasingly AI-driven world.

AI literacy will define success or failure in various areas, from businesses that need to use AI responsibly to individuals navigating complex political landscapes. Without a basic understanding of how AI works, people are vulnerable to manipulation by AI-driven misinformation. This issue isn’t just about technology—it’s about society’s ability to discern truth from falsehood in a world where AI-generated content is ubiquitous.

Conclusion: Generative AI Won’t Destroy the World, But AI Illiteracy And Misuse Could

Generative AI itself is not a threat to humanity, but the potential for its misuse poses a significant risk. The combination of AI illiteracy and the intentional exploitation of AI tools for profit or political manipulation could lead to societal instability. Generative AI models are powerful tools, but they are still just that—tools. The real danger lies in how humans choose to use or abuse them.

To mitigate the risks associated with Generative AI, we must prioritize AI literacy and develop ethical frameworks for the responsible use of AI technologies. Without proper oversight and understanding, Generative AI could provide bad actors with unprecedented scale and reach, potentially leading to systemic societal failures.

Article originally published on Forbes.com.
