The HYPE Games
Hype culture threatens to dilute the credibility of legitimate scientific advancements while posing significant harm to society.
Happy 2025!
We’ve experienced an astonishing period over the last couple of years, with announcement after announcement of “groundbreaking” developments on various technological fronts. It is as if science has suddenly gone on steroids, and keeping pace with the developments feels almost impossible. Indeed, tracking the developments is turning into a new industry altogether, encompassing curation services, newsletters with the latest weekly or even daily developments, non-stop news coverage, industry publications, consulting position papers, investor bulletins, and more. There are certainly various developments to talk about, but AI has been at the forefront.
Rarely does a day go by without claims of paradigm-shifting developments in AI. And with those claimed developments come announcements, bets, and predictions almost untethered from reality. It feels like we are living in an incessant torrent of groundbreaking AI news.
Here’s a sampling of such claims:
I recently received a newsletter email covering the announcement of OpenAI’s o3, with the title “o3: Near AGI!”. I won’t name the newsletter, but WHAT? It’s hogwash! And such claims aren’t isolated; there seems to be a barrage of this nonsense on an almost daily basis. For those disseminating it, one of the prime motivations seems to be the pursuit, or continuance, of “influencer” status on social media, in technical circles, leadership circles, the press, or any concerned community. But it is not just the search for fame. There are deep-seated vested interests and monetization goals too.
By the way, here’s a post that adds some color to o3 and the tasks at hand, ARC-AGI, and brings context to the discussion (although much more context and many more careful analyses are still needed):
But such analyses and commentaries are few and far between. Sometimes they are merely sparse; at other times they are drowned out by kool-aid-laden, self-serving headlines such as these:
Some direct quotes from the above VentureBeat article:
Leading figures in AI, including Anthropic’s Dario Amodei and OpenAI’s Sam Altman, suggest that “powerful AI” or even superintelligence could appear within the next two to 10 years, potentially reshaping our world.
In his recent essay Machines of Loving Grace, Amodei provides a thoughtful exploration of AI’s potential, suggesting that powerful AI — what others have termed artificial general intelligence (AGI) — could be achieved as early as 2026. Meanwhile, in The Intelligence Age, Altman writes that “it is possible that we will have superintelligence in a few thousand days,” (or by 2034).
Now, AI has indeed made quite impressive progress, and the advancements on the generative AI front are significant. That is not up for debate. But there is no robust basis for leaping to predictions like those above, other than some non-rigorous projections on scaling. If only science were that straightforward. Or, if only we were still doing science: most serious science seems to have taken a backseat in favor of scaling. Not that scaling hasn’t proved useful or that it isn’t worth pursuing (well, there are arguments against it as well, but that isn’t the point here). Scaling has proved very useful over the years, and we have seen unexpected solutions to seemingly very difficult tasks brought within reach. But can it be extrapolated to outcomes such as AGI just yet?
Then, of course, there is also the AGI definition issue. OpenAI seems to have taken an entirely arbitrary, non-scientific approach to defining AGI, the lesson apparently being that we can achieve anything as long as we define it to our convenience. A self-serving, intellectually vacuous, and scientifically dishonest trajectory. There is no consensus on the definition of AGI across the scientific community or industry, let alone the public (and before someone suggests “human-level intelligence” at tasks, pray tell, what are the stringent criteria of success, along what metrics, on what necessary and sufficient conditions?). But that has not stopped us from already jumping ahead to hype up “imminent superintelligence,” or ASI. And why stop there? How about AI singularity? Why not? Apparently the singularity is around the corner too.
And just recently, we see this ridiculous claim:
I won’t delve into the details of reckless projections, debates about AGI or ASI, nonsensical imminent-singularity claims, or even the current state of AI. Those debates are happening, albeit sparsely, and unfortunately again on social media, lacking the gravity and rigor they should command. Without digressing from the theme of hyperbolic claims, a bunch of them originated from this announcement on quantum computing from Google:
Now, I should note that the developments in quantum computing demonstrated by Willow are undeniably significant and meaningful, and do move the needle. They have pushed our efforts on quantum computing tangibly forward. However, these advancements need proper context and responsible scientific analysis by the experts (and I am not one by any means). Resorting to clickbait techniques and spurious claims does a serious disservice. The above announcement is good PR, but can we not have a measured take on the significance of the advancement? It seems we don’t even want to stop there, though. While we’re at it, why not claim access to parallel universes?
The rhetoric is getting insane. Yes, we might have an axe to grind. We may crave popularity. We may have vested interests. Or we may have completely benign intentions to share the developments but be swept up in very high (potentially misplaced and/or premature) enthusiasm or wishful thinking. None of that justifies propagating unvetted, unverifiable, fantastical claims that misrepresent the state of science.
There has to be an upper limit to hyping up (often misleading) scientific information, especially when the hype comes at a very high cost: profound damage to society. And it is not just a lone voice like mine offering this caution. The National Academies of Sciences, Engineering, and Medicine concur:
And beyond the ecosystem and the various sectors that help propagate science misinformation, it pains me to say that a part of the AI scientific community unfortunately shares a significant portion of the blame as well. It is disappointing to see how a section of the research community publicizes mostly minor, incremental developments in increasingly hyperbolic terms.
Every new “advance” in AI is hailed as if the atom had been split. Every paper at NeurIPS (or any other flagship AI/ML conference) is publicized as if it will change the course of humanity. Every new “model” is seemingly poised to have a dramatic effect on society and an immediate impact on people’s lives (more ambitious and irresponsible claims stretch the effects to all of humankind), whether it’s drug discovery, preventing diseases, protecting communities, you name it. Every meaningful phrase and characterization is being trivialized.
That’s not how science works. And that is certainly not how a responsible scientific community should act. Scientific progress is typically both incremental and cumulative (and that is precisely why we celebrate the exceptions!).
Moreover, just because some scientific “advance” is demonstrated in a limited context doesn’t mean it is ready for practical application, nor, even if it is, that the problems and challenges of productization are solved. There is a big step from accumulated scientific progress to real-life, meaningful, positive impact on society (and no, ads and surveillance capability do not count). Take, for instance, the super-sized claims around AI drug discovery. We have certainly made significant progress, inarguably so. But there remains much to be done. The challenge isn’t just to discover new molecules but to arrive at new, applicable drugs. What’s the cost-benefit of these? TBD. Do AI-discovered molecules have a higher chance of success in trials? We don’t know yet. And yet we seem to have earned the right to make unsubstantiated claims about the (future?) impact of AI on the field. This is akin to booking unverified projected future revenues as realized gains today, a typically unethical corporate trick to buy or buoy market support for valuations. We cannot afford to be so careless with science.
Even when we speak of the challenges and risks of AI, the discourse is mistakenly happening at the two extremes of the possibility spectrum. On one end, there is a significant push from the likes of the e/acc movement and other similar echoes of support, or even bizarre projections when contextualized against the timeline:
and on the other end, there are alarms of extinction-level risk:
And intriguingly, there are players taking both sides of these bets. Why not have a more grounded, principled, intellectually honest discussion about the actual state of things, the real risks (especially present and imminent ones), and the directions worth devoting effort to for benevolent and beneficial societal outcomes? It’s not as if AI is a runaway technology that we have no control over, at least not yet, and not in the near future. And if the possibility of imminent AGI, singularity, and ASI is indeed real, based on some secret, undisclosed data that doesn’t seem to find evidentiary support in the published research so far, should we not be even more responsible, measured, and extremely darn serious in discussing it? Alas, I suspect that is not the case.
In the past, when hype took over (from the tulip bubble to the dotcom bubble), it would be reflected in the financial betting world (the markets). The bubbles would burst at some point, and the bubble-inducing entities would witness the consequences (though not necessarily face them). But in the new hyperconnected world, where every recommendation engine looks to monetize every type of hype, where vested interests run extremely high, and where data propagates far faster across every medium, we risk much more serious and high-stakes consequences: not just financial, but systemic and societal. We need to stop this irresponsible behavior and pull ourselves back.
It increasingly seems as if there are no rules, or even an agreed-upon code of conduct, to govern responsible public scientific discourse. Whatever threadbare guardrails we seemed to have are weakening: journal review standards are diminishing; academic conferences are becoming marketing venues; the media, in its quest to break news as soon as possible, leaves little room for due diligence and does a serious disservice to responsible journalism; sections of the scientific community are abdicating their responsibilities and ethos of objectivity (either as participants in the rat race or under pressure from, and on, funding sources); unqualified non-experts are taking over the narratives; and intellectually honest debate and pushback on the hype are disregarded.
We have moved far from an earlier world in which the journey from scientific development to breaking news typically took years, even decades, with quite a few disappointments along the way. Now we start with the breaking news, often based on unsubstantiated (even fantasized) claims and projections, with the actual details on the topic in question to follow, if we are lucky.
And even when the details do follow, it’s too late. The misleading narrative has already taken hold, and the damage is already done. We are seeing this phenomenon run amok, with AGI/ASI claims serving as Exhibit 1.
The phenomenon of hype and hyperbole has taken over the entire spectrum of discourse: individual contributors, corporate leadership, startup founders, investors, the media, government, and even the general public. This is not meant to over-generalize, or to malign the efforts of the scores of participants who continue to work hard and tirelessly to inject intellectual and scientific honesty into their work. But our societal reward structure seems to ensure that most of those efforts are either drowned out or ignored, and even when they receive attention, they are rendered impossible to act upon.
Science is not a subject to be settled by public opinion or “media trials.” Nor is the legitimacy of scientific claims a function of likes, shares, or thumbs-ups on social media, or of support and endorsement from “celebrities” created by those same criteria. It is, and has to be, based on robust empirical and theoretical evidence, vetted against rigorous tests of the benefits and implications for society. At a minimum, we need both responsible, dispassionate scientific debate (which exists but needs to be widely heard) and media that cover these topics objectively, responsibly, and with the requisite underlying context on the veracity of claims. More importantly, the public dissemination of progress warrants informed and measured scientific discussion rather than the hyperbole, bombast, and nonsensical claims of imminence that accompany the news.
With the current hype culture, we do not just risk diluting the credibility of legitimate scientific advancements; we risk a fundamentally deleterious impact on the practice of science itself, not to mention the damage to society and the information ecosystem.