In the whirlwind of advancements in artificial intelligence (AI), fears about the future tend to vacillate between grounded concerns and far-fetched dystopian nightmares. On one end, we have valid questions about ethical training, environmental impact, and AI-related scams. On the other, the imagery of AI becoming sentient, as depicted in sci-fi classics like The Matrix or Terminator, dominates our cultural imagination. Yann LeCun, Meta’s Chief AI Scientist, is quick to dismiss such apocalyptic ideas. In his words, the notion of AI plotting humanity’s demise is “complete B.S.”
LeCun’s assertion that AI is far from being as intelligent as a cat is well taken. AI is not sentient, nor does it have any desires or schemes. However, his dismissal of potential risks overlooks a critical issue—what happens when we place too much trust in AI’s abilities? The problem may not be AI orchestrating the end of humanity, but rather humans giving AI far more control than it can responsibly handle.
AI Is Just a Tool—But Who Holds the Power?
LeCun is right: AI, at its core, is just another piece of technology. It isn’t inherently good or bad; its outcomes depend entirely on how humans design, implement, and use it. But there is a long history of technology going awry when it is trusted too much or deployed without careful oversight. From the chaos of high-frequency stock trading to near-misses with nuclear catastrophe caused by malfunctioning detection systems, technology has repeatedly shown that it can spiral out of control when trusted beyond its capacity.
Consider the example of high-frequency trading, where algorithms make decisions in fractions of a second, far faster than any human could. This has caused significant market disruptions and contributed to flash crashes, such as the one in May 2010, when the Dow Jones Industrial Average plunged nearly 1,000 points within minutes before largely rebounding. Markets usually recover from these incidents, but they highlight the dangers of relying too heavily on automated systems to manage complex, high-stakes situations.
Or take the infamous case of the Soviet missile detection system in 1983, when sunlight reflecting off clouds fooled the early-warning satellites into reporting that U.S. nuclear warheads were on their way to the USSR. It took a human at the controls, Stanislav Petrov, to recognize the false alarm and prevent what could have been global nuclear war. Now, imagine an AI system in charge of such decision-making. Would it have had the foresight or intuition to question the faulty data? Likely not. Yet the possibility of AI taking on similar responsibilities in the future is not as distant as we might hope.
The Real Danger: Humans Misusing AI
The true concern isn’t AI turning against us, but humans misapplying it. AI’s rapid adoption in fields like healthcare, finance, and customer service shows how much we already rely on it to make critical decisions. For instance, an AI customer service bot might decide if you get a refund based solely on pre-programmed rules, leaving no room for human judgment or intervention. The scary part? This is already happening today.
Even worse, we could see more critical decisions being handed over to AI in the future, such as who gets hired, which medical treatments get approved, or whether someone is granted parole. Each of these scenarios calls for subjective, nuanced judgment, something AI is currently ill-equipped to exercise without introducing potentially harmful biases.
Consider the rise of AI in hiring. AI tools can sift through resumes far faster than human recruiters, but they are prone to reflecting the biases present in the data they were trained on. Amazon famously scrapped an internal recruiting tool after discovering that it penalized resumes containing the word “women’s,” having learned from years of male-dominated hiring data. If companies aren’t careful, they can perpetuate discriminatory practices without even realizing it. Hiring decisions should involve human insight, not just algorithms extrapolating from historical data that carries its own embedded prejudices.
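To make the bias concern concrete, here is a minimal sketch in Python of the kind of audit a team might run on a resume-screening model: comparing selection rates across demographic groups, in the spirit of the “four-fifths rule” used in U.S. employment law. The candidate data, group labels, and threshold below are invented for illustration, not drawn from any real system.

```python
from collections import defaultdict

# Hypothetical screening outcomes: (group, passed_screen) pairs.
# In a real audit, these would be the model's actual decisions.
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Selection rate: the share of candidates in each group who passed the screen.
totals, passed = defaultdict(int), defaultdict(int)
for group, was_passed in outcomes:
    totals[group] += 1
    passed[group] += was_passed

rates = {g: passed[g] / totals[g] for g in totals}
best = max(rates.values())

# Four-fifths rule of thumb: flag any group whose selection rate falls
# below 80% of the most-favored group's rate.
for group, rate in sorted(rates.items()):
    status = "FLAG" if rate < 0.8 * best else "ok"
    print(f"{group}: selection rate {rate:.0%} ({status})")
# -> group_a: selection rate 75% (ok)
# -> group_b: selection rate 25% (FLAG)
```

A check like this only catches the crudest disparities. It says nothing about why the model favors one group over another, which is precisely where human insight has to re-enter the process.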
Copilots, Not Captains
One of the best metaphors for how we should be thinking about AI comes from Microsoft’s branding of its AI tools as “Copilots.” The idea here is that AI should assist humans, helping them achieve their goals without taking full control. Much like a plane’s copilot, AI should support human decision-making, but never entirely replace it.
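What does “copilot, not captain” look like in software terms? Here is a minimal sketch, reusing the refund scenario from earlier; the function names, thresholds, and routing rules are hypothetical, not Microsoft’s actual design. The key property is that the model only drafts a recommendation, and anything adverse or uncertain must pass through a person before it takes effect.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the AI suggests, e.g. "approve_refund"
    confidence: float  # the model's self-reported confidence, 0..1
    rationale: str     # explanation shown to the human reviewer

def ai_suggest(case: dict) -> Recommendation:
    """Stand-in for a model call: it suggests, it never executes."""
    if case["amount"] < 20 and case["first_complaint"]:
        return Recommendation("approve_refund", 0.9, "low value, first complaint")
    return Recommendation("deny_refund", 0.6, "pattern resembles prior abuse")

def decide(case: dict, human_approve) -> str:
    rec = ai_suggest(case)
    # The copilot pattern: the AI output is advisory. Any denial, and any
    # low-confidence suggestion, is routed to a person before it takes effect.
    if rec.action == "deny_refund" or rec.confidence < 0.8:
        return human_approve(case, rec)  # the human makes the final call
    return rec.action

# Example: a reviewer callback that a real system would replace with a review queue.
result = decide(
    {"amount": 250, "first_complaint": False},
    human_approve=lambda case, rec: "approve_refund",  # human overrides the AI
)
print(result)  # -> approve_refund
```

The asymmetry is the point: the system can automate easy, low-stakes approvals, while denials and low-confidence calls are escalated to a human who can actually be held accountable.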
This is a balanced approach. LeCun might downplay fears about AI taking over, but it’s crucial to remember that AI, while not smarter than a cat today, still has the potential to cause massive disruption if misused. A cat may not scheme, but it can certainly knock over something valuable. Similarly, AI could unintentionally push society in dangerous directions if left unchecked.
The Problem of Overreliance
The danger lies in overestimating AI’s current capabilities. Today’s AI systems are impressive, but they are far from flawless. They are known to “hallucinate,” confidently making up information or drawing incorrect conclusions, and so they still require careful oversight. Yet, in our rush to adopt new technologies, it’s easy to hand over more responsibility than AI is ready for. We risk putting too much faith in a system that lacks the ability to understand context or moral implications, and which, unlike humans, can’t be held accountable for its mistakes.
One particular area of concern is the environmental cost of large-scale AI operations. Training massive models like GPT-3, which require substantial computing power, comes with a hefty carbon footprint. As we integrate AI into more aspects of life, the environmental impact could grow, exacerbating one of the very problems we hope AI might help solve—sustainability.
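To give a sense of scale, a rough back-of-the-envelope calculation helps. Both inputs below are assumptions chosen for illustration: roughly 1,300 MWh of electricity for a GPT-3-scale training run, which is in the range of published third-party estimates, and a grid carbon intensity of about 0.4 kg of CO2 per kWh.

```python
# Back-of-the-envelope estimate; both inputs are assumptions, not measurements.
TRAINING_ENERGY_MWH = 1_300   # rough order of magnitude for a GPT-3-scale run
GRID_KG_CO2_PER_KWH = 0.4     # a middling grid carbon-intensity figure

kwh = TRAINING_ENERGY_MWH * 1_000               # 1 MWh = 1,000 kWh
tonnes_co2 = kwh * GRID_KG_CO2_PER_KWH / 1_000  # kg -> metric tonnes

# A typical passenger car emits on the order of 4-5 tonnes of CO2 per year.
cars_per_year = tonnes_co2 / 4.6
print(f"~{tonnes_co2:,.0f} t CO2, roughly {cars_per_year:,.0f} car-years of driving")
# -> ~520 t CO2, roughly 113 car-years of driving
```

The exact figures are debatable, but the order of magnitude is the message: a single training run carries a real, measurable cost, and that cost multiplies as training runs and deployments proliferate.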
The Future of AI: Smarter Boundaries
AI is, without a doubt, transforming the world. However, it’s essential to establish smarter boundaries. As AI continues to evolve, we must ensure it is treated as a tool, not a decision-maker. That means developing systems where AI assists, but never fully takes over critical decisions that have profound impacts on people’s lives. More importantly, we need robust frameworks to ensure that humans remain in control and accountable.
LeCun is right: AI isn’t going to “wake up” and overthrow us. But that doesn’t mean it can’t cause unintended consequences. The future of AI may not be as terrifying as the plot of The Matrix, but if we aren’t cautious, it could still push us off balance—both technologically and ethically.