
Is the Generative AI Hype Bubble About to Burst?

24 Jun 2025, 3:01 pm GMT+1

Is the AI bubble about to burst?

DEARBORN, Mich. — Two years after the launch of OpenAI’s ChatGPT, the generative artificial intelligence (AI) sector stands at a critical juncture. While early excitement spurred massive investment and ambitious expectations, a growing number of technologists and academics are beginning to question whether the technology’s actual impact is keeping pace with the hype.

Among those offering a measured, expert view is Professor Paul Watta, a faculty member in Electrical and Computer Engineering at the University of Michigan–Dearborn, who believes that although generative AI holds significant promise, it currently faces serious limitations that could temper future growth if not addressed.

From Predictive Tools to Reasoning Models

Professor Watta provides a clear distinction between the early generation of large language models (LLMs) and the more advanced systems emerging today. He describes early models—such as initial versions of ChatGPT—as “chat tools” that operate much like predictive-text engines. These systems could replicate sentence structures but struggled to engage meaningfully with users or provide factually accurate answers. This led to disappointing results in professional applications such as customer service, where nuanced understanding is essential.
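
To make the "predictive-text engine" idea concrete, the toy Python sketch below builds the simplest possible next-word predictor from word-pair counts. It illustrates only the underlying task; real LLMs use large neural networks over subword tokens, but the objective is the same: pick a statistically likely continuation, with no built-in check that the continuation is true.

from collections import Counter, defaultdict

# Toy "predictive text" model: count which word tends to follow which.
# Real LLMs are neural networks over subword tokens, but the core task
# is the same -- predict the next token from what came before.
corpus = "the bank raised rates . the bank approved the loan .".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    # The most frequent follower wins; nothing checks whether the
    # continuation is factually correct, only that it is likely.
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else "."

# Generate a short continuation starting from "the".
word, out = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    out.append(word)
print(" ".join(out))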

However, the field is evolving. Newer models are moving beyond mere text prediction toward what Watta calls “reasoning models.” These systems are designed not only to generate responses but to explore multiple logical pathways, evaluate information, and apply context more effectively.

The result is the ability to perform more advanced tasks such as summarising long documents, assisting in code development, and even producing structured legal or financial analyses. These reasoning models can also trace their decision-making process, a feature that increases transparency and reliability, two qualities earlier versions lacked.
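
One concrete pattern that fits this description of exploring multiple pathways is "self-consistency" sampling: generate several independent reasoning paths and keep the answer most of them agree on. In the minimal Python sketch below, the solver is a hypothetical stand-in for a real model call, so the numbers are illustrative only.

import random
from collections import Counter

# Sketch of "exploring multiple logical pathways": sample several
# candidate reasoning paths and keep the majority answer. The solver
# below is a hypothetical stand-in for an actual model call.
def sample_reasoning_path(question):
    # A real system would prompt an LLM here; we simulate a noisy
    # solver that is right most of the time.
    return "42" if random.random() < 0.7 else random.choice(["41", "43"])

def self_consistent_answer(question, n_paths=9):
    answers = Counter(sample_reasoning_path(question) for _ in range(n_paths))
    answer, votes = answers.most_common(1)[0]
    return answer, votes / n_paths

answer, agreement = self_consistent_answer("What is 6 * 7?")
print(f"Answer: {answer} (agreement {agreement:.0%})")

The agreement score doubles as a rough confidence signal, which is one way such systems can expose their decision-making rather than returning a bare answer.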

Rapid Development Meets Rising Competition

Despite criticisms about overhyped expectations, progress in generative AI has been undeniable. According to Watta, OpenAI CEO Sam Altman claims that each successive generation of models is “ten times better” than its predecessor. The claim is bold, but benchmark results lend it some support: the latest models reportedly rank among the top 50 competitors in global programming contests, an area where early versions underperformed. Future iterations are expected to outperform the average human coder, suggesting significant disruptive potential in industries that rely heavily on software development.

At the same time, global competition in the AI space is intensifying. In a noteworthy development, the Chinese startup DeepSeek has reportedly released a reasoning model that rivals the best current Western systems, despite not having access to the latest NVIDIA hardware due to U.S. export restrictions. The news contributed to a 17% drop in NVIDIA’s stock value, signalling growing investor concern that dominance in AI infrastructure may be shifting—and that new players could challenge the current market leaders.

Structural Constraints and Ethical Headwinds

While the technology races forward, several structural and ethical challenges threaten to slow progress or diminish returns on investment.

One major bottleneck is the availability of training data. Current LLMs rely on vast amounts of human-created content—books, articles, code repositories, and websites. Experts estimate that within a few years, the supply of useful and publicly accessible training data could be exhausted, limiting further model improvements unless new strategies or synthetic data sources are developed.
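
A rough back-of-envelope calculation shows why the data ceiling worries researchers. All three figures in the Python sketch below are illustrative assumptions chosen for the arithmetic, not measured values.

import math

# Back-of-envelope sketch of the data bottleneck. Every number here is
# an illustrative assumption, not a measurement.
public_text_stock = 300e12   # assumed stock of usable public text, in tokens
tokens_this_gen   = 15e12    # assumed tokens used to train a current frontier model
growth_per_gen    = 4.0      # assumed growth in training data per model generation

generations = math.log(public_text_stock / tokens_this_gen, growth_per_gen)
print(f"Roughly {generations:.1f} more generations before the public stock is consumed.")

Under these assumptions the answer comes out at barely two model generations, which is what "within a few years" amounts to in practice.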

Another concern is energy consumption. AI operations, particularly training and running large-scale models, demand substantial computing power. Watta notes that generative AI systems already account for around 3% of global electricity usage, and that a single ChatGPT query consumes roughly 10 times as much energy as a standard Google search. As adoption scales, this could strain power grids, increase carbon emissions, and raise the overall cost of AI integration.
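
The "10 times a Google search" comparison can likewise be turned into simple arithmetic. In the Python sketch below, the per-search figure and the daily query volume are assumed values chosen for illustration.

# Rough energy arithmetic for the "10x a Google search" claim. The
# per-query figure and query volume are assumptions, not measurements.
google_search_wh = 0.3                      # assumed Wh per standard Google search
chatgpt_query_wh = 10 * google_search_wh    # the article's 10x ratio -> 3 Wh

queries_per_day = 1e9                       # assumed daily query volume, for scale
daily_kwh = chatgpt_query_wh * queries_per_day / 1000
print(f"One query: {chatgpt_query_wh} Wh; at {queries_per_day:.0e} queries/day "
      f"that is {daily_kwh:,.0f} kWh per day.")

At these assumed figures, the fleet draws about three gigawatt-hours per day, which makes clear why grid strain features in the discussion.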

The regulatory environment also poses significant barriers. In high-risk sectors like healthcare, laws such as HIPAA in the United States demand strict privacy protections and near-zero error rates—two areas where generative AI still underdelivers. Similar concerns exist in finance, law, and education. Moreover, public sentiment remains cautious. There is widespread discomfort about delegating high-stakes decisions to algorithms, and the potential for misuse—such as AI-assisted creation of weapons or spread of disinformation—could invite stronger government scrutiny or trigger a backlash from users and advocacy groups.

Outlook: From Disruption to Disillusionment?

Professor Watta cautions that while the technical evolution of generative AI is impressive, the expectations surrounding it have often been exaggerated. He suggests the sector may be entering what analysts refer to as a “trough of disillusionment”—a phase in the innovation cycle where lofty promises confront real-world complexity, and investor enthusiasm gives way to scepticism.

This does not mean, however, that the technology lacks merit. In fact, Watta argues that AI models, especially those now gaining reasoning capabilities, are beginning to demonstrate genuine value in practical, narrowly focused tasks. Tools for drafting contracts, summarising lengthy research papers, assisting with computer programming, or managing customer queries are already improving productivity in measurable ways. These incremental but meaningful gains suggest that the long-term outlook remains promising.

However, for generative AI to move from dazzling prototype to dependable infrastructure, the field must confront several pressing challenges head-on. These include managing energy costs, creating sustainable data pipelines, navigating regulatory frameworks, and addressing ethical concerns. The sector must also foster a more realistic understanding among users and investors about what these systems can—and cannot—do.

In this context, Watta advises a shift in narrative: “We need to move from hype-driven expectations to evidence-based integration.” Only then, he argues, can generative AI achieve the kind of lasting impact its most vocal proponents envision.
 


João Guarda

João Guarda is an up-and-coming writer for Sportsabc and the Ztudium team. Primarily focused on sports, he has been contributing to the team since February 2025. Despite specialising in sports, João has a wide range of knowledge, from literature, art, and history to politics and economics.

Born in Leiria, Portugal, João lived in Paris, France, for much of his life, becoming fluent in English, French, and Portuguese.
He is currently studying Communications at Lisbon University and aspires to become an accomplished professional in the field.