Editorial Board

The Matrix

May 10, 2026
The Matrix (Photo: Wikimedia Commons)

The dystopian film The Matrix, released in 1999, tells the story of a future in which humanity is unknowingly trapped in the Matrix, a simulated reality created by artificial intelligence to pacify and subdue the human population. Inside the Matrix, the AI runs a simulation so immersive and accurate that people never realize they are in it. We are not trapped in a simulated world today, but it is becoming increasingly difficult to distinguish what is real from what was made by AI. The technology is shaping how people, especially youth, consume culture, make decisions, socialize, communicate, study, and dress. As we struggle to adapt to life with a rapidly changing technology, we are forced to make difficult decisions with real and poorly understood impacts on human lives. Can a machine make kill decisions?


Education

One of the first and most prominent areas of concern with generative artificial intelligence is its use in education, and what that means for the education of the next generation. Whether through Gemini-generated AI overviews that summarize internet pages or generative AI programs like ChatGPT, the work students do in school today, even baseline research, is vastly different from the work their parents did poring over textbooks or encyclopedias. The worries of parents, teachers, and officials about the development of critical thinking in youth are not unfounded: research links the overuse of AI tools to reduced brain activity in functions that underpin decision making, memory retention, and learning as a whole. If students cannot synthesize information on their own or handle large workloads independently, the result may be a less capable generation, and slower societal innovation in the years ahead. In March, Education Week published a report that analyzed 1.2 million interactions between students and artificial intelligence on school technology; one in five of those interactions involved cheating, self-harm, bullying, or other problematic behavior.
Rebecca Winthrop, one of the report's authors and a senior fellow at the Brookings Institution’s Center for Universal Education, warns, “When kids use generative AI that tells them what the answer is … they are not thinking for themselves. They're not learning to parse truth from fiction. They're not learning to understand what makes a good argument. They're not learning about different perspectives in the world because they're actually not engaging in the material.”
Some in Silicon Valley say they want their tools used responsibly and in ways that maintain integrity in education; OpenAI claims it is building “tools for educators” to help teachers facilitate their students’ learning (even as some in the industry refuse to let their own children use the technology). Yet a year ago, OpenAI developed a technology that was 99.9% accurate at detecting work generated by ChatGPT. After the project spent roughly two years stuck in internal debate, the company decided against making it public, knowing that people would stop using ChatGPT and switch to a competing product that lacked a verified AI detection method.

On the other hand, using AI in education comes with advantages. The technology can give students feedback on their work, helping them understand their strengths and weaknesses; it can handle administrative tasks such as scheduling and managing student records; it can broaden access to resources; and it can provide personalized learning by tailoring content to individual students' needs and learning styles. All of this optimizes the time students spend studying, with AI also creating flashcards, podcasts, mock tests, and presentations to help students understand the content.
Social Media
On social media, platforms that were supposed to let users create content, share it, and interact within virtual communities are increasingly dominated by artificial intelligence. AI-generated images have transformed social platforms into showcases of synthetic creativity: an estimated 71% of images shared globally on social media were generated by AI. In his 2026 look-ahead blog post, YouTube CEO Neal Mohan stated that in December alone, more than 1 million YouTube channels used the platform’s AI tool to make content. Additionally, more than 20% of the videos the platform's algorithm shows to new users are “AI slop”: low-quality content made by AI to farm views and subscriptions or to sway political opinions. The problem with AI slop is that it has been normalized and can come across as harmless fun, as in short videos of fruits getting pregnant or cartoons titled "Mum cat saves kitten from deadly belly parasites.” A study by Nanyang Technological University shows how the “illusory truth effect” makes people more likely to believe claims or images the more they encounter them, even when viewers have been explicitly told that a video is fake. Philosopher Harry Frankfurt's definition of bullshit fits AI slop well: a form of linguistic communication characterized by “a lack of connection to a concern with truth,” an “indifference to how things really are.”
In October, Meta CEO Mark Zuckerberg happily declared that social media had entered a third phase, now centered on AI. "First was when all content was from friends, family, and accounts that you followed directly," he said. He added, "The second was when we added all of the creator content. Now, as AI makes it easier to create and remix content, we're going to add yet another huge corpus of content." Meta, which runs Facebook, Instagram, WhatsApp, Messenger, and Threads, not only allows people to post AI-generated content but also provides tools that let users chat with an AI chatbot and create videos and images with the technology.
Jobs
As tech companies invest in AI, workers are losing their jobs. Last year, Microsoft laid off 15,000 workers; Amazon cut 30,000 in the last six months. According to a Reuters report, Meta may shed 20% of its employees in the near future. “At no point in my career have I ever been this pessimistic about the future of careers in tech,” one tech worker told The Guardian. “And that’s really sad because I love tech.”
However, some experts are skeptical. AI still has significant limitations, such as inconsistent reliability, limited continuous learning, and reliance on high-quality training data. In fact, researchers and AI experts say that some companies may be “AI-washing” layoffs, using the technology as cover for a slowing labor market, lagging consumer demand, or rising costs. Ryan Nunn, director of research at Yale University’s Budget Lab, which researches AI’s impact on jobs, said, “It’s easy to confuse the effects of something like generative AI with a weakening of the labor market,” adding, “We really don’t see anything differentially happening with the AI-exposed labor market.”
What’s certain is that workers are paying the price, whether AI is the cause or a convenient excuse. As with many waves of layoffs driven by mechanization, it all happens in the name of progress, yet that progress remains abstract, because no one knows what the future it promises actually looks like.
The Bubble
The human cost is real, but so is the financial risk, and some warn that the AI boom may be built on shaky ground. Between July and September of last year alone, OpenAI posted a loss of $11.5 billion USD. At its current pace, it will need to make nearly $50 billion USD a year to keep operating at current capacity, mainly because the vast majority of its 800 million users use the free version of ChatGPT, and serving them requires enormous amounts of processing power and electricity.
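A back-of-the-envelope check of that figure (a sketch that assumes, for simplicity, the July–September loss simply repeats each quarter):

\[
4 \times \$11.5\ \text{billion} \approx \$46\ \text{billion per year}
\]

which lines up with the “nearly $50 billion” cited above, presumably as revenue needed on top of what the company already brings in.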

The company's current revenue, about $13 billion USD per year, is far from covering its costs. OpenAI, however, seems undeterred and instead wants to feed the AI bubble with further, likely unstable, speculative bets. In November of last year, OpenAI CEO Sam Altman, who has privately expressed doubts about the growing AI bubble even as company interests have clearly taken precedence, announced plans to invest $1.4 trillion USD in new data centers over the next eight years. “How can a company with 13 billion in revenue commit to spending $1.4 trillion?” investor Brad Gerstner, who owns a stake in OpenAI, asked Altman during a podcast. Irritated, the CEO answered, “If you want to sell your stake, I will find a buyer. Enough.” In the same month, OpenAI’s chief financial officer said the U.S. government should financially support the company, which further reinforced concerns about its economic sustainability.
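Gerstner's incredulity is easy to make concrete. A rough calculation, assuming (hypothetically) that the commitment is spread evenly over the eight years:

\[
\frac{\$1.4\ \text{trillion}}{8\ \text{years}} \approx \$175\ \text{billion per year} \approx 13 \times \text{the company's}\ \$13\ \text{billion in annual revenue}
\]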

Moving such large amounts of money is dangerous both for the development of AI and for the rest of the international economy. AI companies may be expanding rapidly, but they are not bringing in the profits such extravagant spending would typically reflect. In some economists' eyes, these widening spending gaps foreshadow the end of the AI rise: once the influx of funds being poured into the market stops, the sheer monetary scale involved could turn the downturn into a collapse with little room for reprieve.

But it’s difficult to tell whether or not this supposed ‘bubble’ is going to pop. Opinions differ, and some observers note that much of the sentiment now directed at AI mirrors the public reaction to the telephone and the internet in earlier eras. Those technologies, though, were relatively well understood by their developers, whereas AI programmers themselves admit they have no real idea how AI truly works, or how it will behave once its feed of fresh information stops. So whether AI will be the harbinger of great societal change or great collapse is still unknown.

The Future
As artificial intelligence expands into education, entertainment, labor, and even human interactions, society faces a dilemma: do we develop it safely, or do we go full speed ahead so it can start addressing humanity’s still-unsolved problems? Rejecting AI completely is neither realistic nor necessarily beneficial, especially because not knowing the full extent of what AI can do becomes dangerous as ever more powerful tools are developed. Artificial intelligence has already accelerated scientific research, expanded access to education, and optimized countless tasks. The question, therefore, is not whether AI should exist, but whether governments, companies, schools, and users are capable of establishing ethical limits before the technology develops faster than society’s ability to regulate it.
