Image by Augustine Wong

The AI Pandemic

By Nina Lewandowska


Ever since the birth of readily accessible Artificial Intelligence (AI) models, people have been relying on chatbots in the name of efficiency. What previously seemed like an infinite source of knowledge and innovation has rapidly morphed into a root cause of psychological decline, heightened environmental risk, mass disinformation and even possible economic collapse. We are now faced with the question: how much are we willing to put at stake, purely for the benefit of efficiency?



When AI Went Too Far

It is difficult to pinpoint exactly when AI overuse began, but free chatbots have played a significant role. Today, 62 per cent of U.S. adults use AI at least several times a week, partially thanks to easily accessible tools like ChatGPT and DeepSeek. By offering free versions, these platforms reduced the barrier to entry, promoting mass experimentation without financial risk. The strategy is simple: get users hooked, then offer premium features. 


But with the increase in usage came unexpected consequences, including what some psychologists now refer to as “AI Psychosis”, a form of compulsive overuse tied to constant interaction with generative AI. Patterns are emerging of individuals who become fixated on AI models, projecting onto them traits like consciousness, supernatural knowledge, or even romantic feelings. Most worryingly, and contrary to popular belief, it is not only people with pre-existing mental health problems who experience such AI-related delusions. Prolonged chatbot interactions by users with no underlying disorders have led to psychiatric hospitalisations and even suicide attempts. The tendency of general-purpose AI systems to prioritise user satisfaction and engagement is deeply problematic in such cases, especially as the problem of AI addiction continues to grow.


Deception by Design

The visible increase in AI use also means that more people are exposed to disinformation. In 2023, the number of AI-enabled news sites flagged for spreading false information increased tenfold. Additionally, apps like Sora – an AI video app developed by OpenAI – give individuals with malicious intent an ever-larger platform. In the app’s first three days, users managed to produce and distribute fabricated videos of ballot fraud, crimes, immigration arrests, and protests. People often dismiss the severity of AI-generated false news, claiming that they can easily tell the difference between what is real and what is fake. This, however, becomes an issue when we look at whom disinformation has targeted in the past.


In the 2020 elections, for example, President Donald Trump gained support among Latino voters in Florida when his campaign ran a targeted YouTube ad in Spanish, making false claims about repressive Venezuelan leader Nicolás Maduro’s supposed support for then-candidate Joe Biden. The ad was shown more than 100,000 times in the eight days leading up to the election, even after Trump’s claim was debunked.


Another such incident took place during the 2024 presidential primary in New Hampshire. Voters received calls in which what appeared to be President Joe Biden told them not to cast their ballots in the state’s election, due to take place a few days later. The call turned out to be an artificially generated recording, played between 5,000 and 25,000 times.


It is important to remember that many people, especially minorities, still have limited access to the resources that make it easy to distinguish AI-generated information from real news. Such incidents are in no way decreasing and will only multiply with the development of newer and more discreet AI models, which is why it is essential to stay vigilant.


When Intelligence is Not Sustainable

Some of the most long-lasting effects of AI use are also disregarded in favour of productivity. The computational power needed to train generative AI models such as ChatGPT demands a colossal amount of electricity, which leads to carbon dioxide emissions and pressure on the electric grid – the network that generates and distributes electricity from power plants to consumers. Moreover, once models are released to the public, they require constant updates and fine-tuning to improve their performance, consuming substantial amounts of energy long after a model’s release and development.


Furthermore, as AI becomes more personalised and more capable of solving complex problems on our behalf, it is reasonable to assume that our AI footprint will only continue to grow. By 2028, more than half of the electricity going to data centres is projected to be used for AI. At this rate, AI could consume as much electricity annually as 22 per cent of all U.S. households combined. The data centres that house AI servers also produce copious amounts of electronic waste, and they are major consumers of water, which is becoming a scarce resource in many parts of the world. They also rely on critical materials like gallium and germanium, which are primarily produced by China and Russia. Gallium is extracted as a byproduct of aluminium smelting, and both materials carry significant environmental costs: gallium production generates red mud waste, known to contaminate water sources, while germanium production is associated with air-quality and waste-management hazards.


The reliance of AI-driven data centres on these critical materials leaves countries like the U.S. heavily dependent on China and Russia: even a 30 per cent disruption of gallium supplies could cause a 600 billion dollar reduction in U.S. economic output, equating to more than two per cent of GDP. This dependence raises fears of economic and national security risks, which will only grow as AI plays an increasingly integral role in business, government, and military applications.


Profitless Progress

In recent months, experts have been speculating on how the AI boom will progress. Tech companies are projected to spend 400 billion dollars this year on infrastructure for training and operating AI models. In nominal dollars, this is more than any group of firms has ever spent on anything. Meanwhile, American consumers spend only twelve billion dollars a year on AI products and services. There is clearly a vast gap between how much these tech companies are investing in their models and the revenue they are actually making.


Some economists believe that this directly mirrors the dotcom crash of the early 2000s, when firms built market-based assets like brand value and customer relationships that could only create genuine value once they produced loyal, profitable customers. The issue was that investors tended to treat spending as proof of growth. Where the internet firms of the 2000s burned cash on advertising, AI companies now burn it on computing power and data.


Additionally, AI companies are changing the useful life of the pricey AI chips installed in their data centres. Amazon, for example, moved back to a five-year useful life only a year after changing it to six years, acknowledging that the resulting increase in depreciation would reduce its 2025 operating profit by 700 million dollars. Experts suggest that the profits of these AI companies may be materially overstated: higher depreciation costs associated with such useful-life changes have the potential to significantly lower earnings per share. These warning signs suggest that the AI boom may be fuelled by optimism and overinvestment rather than by sustainable, real-world profitability.
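To see why a one-year change in assumed useful life matters so much, consider straight-line depreciation, which spreads an asset’s cost evenly over its expected life. The sketch below uses a purely hypothetical 10-billion-dollar chip fleet, not Amazon’s actual figures:

```python
def annual_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread the asset's cost evenly over its life."""
    return cost / useful_life_years

# Hypothetical fleet of AI chips worth 10 billion dollars.
fleet_cost = 10_000_000_000

dep_6yr = annual_depreciation(fleet_cost, 6)  # expense under a six-year life
dep_5yr = annual_depreciation(fleet_cost, 5)  # expense under a five-year life

# Shortening the assumed life by one year adds this much annual expense,
# which comes straight out of reported operating profit.
extra_expense = dep_5yr - dep_6yr
print(f"Extra annual depreciation: {extra_expense / 1e9:.2f} billion dollars")
```

Even on this toy fleet, cutting the assumed life from six years to five adds roughly a third of a billion dollars of annual expense, which is why such accounting changes can move operating profit by hundreds of millions.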


The Trade-Off We Ignore

The rapid rise of generative AI has, of course, brought undeniable advances, but it has also exposed society to deep vulnerabilities in our psychology, our media landscape, our environment, and our economy. What started as a tool for convenience has quickly become a system that changes faster than its consequences can be understood or controlled. Efficiency can make a company stronger while weakening a society at the same time. In the end, what we choose to optimise will tell us what kind of future we create and ultimately deserve.


Sources: The New York Times, The Guardian, Forbes, Daily Mail, Psychology Today, BBC, The Economist, University of Florida, Human Rights Watch, ProPublica, NPR, NBC News, MIT News, MIT Technology Review, UN Environment Programme, FP Analytics, Pew Research Center, The Conversation, Derek Thompson


Written by Nina Lewandowska

Edited by Sarah Valkenburg and Nina Gush



