The Dangerous Fallacy of Artificial Intelligence
Artificial Intelligence (AI) has become the hottest thing since sliced bread, with companies like Nvidia moving chips faster than Frito‑Lay moves snack bags. The recent “woke” fiasco from Google’s Gemini is just the tip of the iceberg: a flashy, unhinged, fashion‑of‑the‑day moment that perfectly mirrors the corporate world’s obsession with DEI (Diversity, Equity, Inclusion). It’s a misguided quest that looks clever in press releases but is destined to backfire in monumental fashion.
To show just how far Google has gone in rewriting history, look no further than the images Gemini generated when asked for a pope and a Viking. The results are so blatantly wrong they don’t even need explanation—standing instead as proof of how misguided, tone‑deaf, and downright unintelligent the corporate gatekeepers of these tools have become.
At the core, it’s about those spinning the lies—hungry to be worshipped as gods and saviors of the “less competent,” all while secretly aware of their own inferiority. That’s the real driver, and it makes them toxic to society.
Fixing algorithms won’t erase intent or the baked‑in stupidity of the people behind them. The only cure is dismissal—Twitter‑style—and keeping them far away from positions of power or influence.
Let’s be clear: AI is nothing more than automation on steroids, not intelligence. A black Pope or Viking is the least of our worries because those errors are obvious today. The real danger is when distortions persist, quietly shaping future generations who won’t know they’re being fed fraud.
For proof of AI’s fragility, look no further than its clumsy attempt to “correct” a simple transposition of two letters in a common word while this post was typed. And still, no “they” option among the suggestions.
The question posed to ChatGPT cut straight to the heart of the matter, and its answer summed up Artificial Intelligence with surprising honesty: it’s powered by human input, and “true human‑like intelligence is unlikely to be achieved anytime soon.” In other words, the hype machine runs faster than the reality.
Let’s give credit where it’s due: today’s AI tools are mesmerizing, capable of producing creative and impressive output. But let’s be clear—it’s not intelligence.
Beyond the Silicon Valley “God complex” that infects corporations with grandiose visions of control, the problems with AI run deeper than most realize. Yes, outputs can be manipulated with ease, but the real danger lies in the training itself. Large Language Models (LLMs) are built on the information they consume, and that’s where the cracks widen.
Enter academia, a sprawling factory of questionable scholarship that churns out mountains of scientific garbage. Retractions are piling up at record pace; Nature reported more than 10,000 in 2023 alone, a new record, and that’s just the visible tip of the iceberg. Imagine how many more erroneous, peer‑reviewed papers are still floating around, quietly feeding into AI systems. The result? Models trained on fraud, error, and noise, dressed up as knowledge.
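To make the garbage‑in, garbage‑out point concrete, here is a deliberately tiny sketch: a toy bigram text generator, a made‑up example that is nothing remotely like a production LLM, trained on a three‑sentence corpus in which one “finding” is false. The corpus, the generate helper, and the false claim are all hypothetical, invented purely for illustration.

```python
# Toy sketch of "garbage in, garbage out": a tiny bigram model trained on
# text containing one false claim. It has no notion of truth; it simply
# reproduces the statistics of whatever it was fed.
import random
from collections import defaultdict

corpus = (
    "water boils at 100 degrees . "
    "water boils at 100 degrees . "
    "water boils at 50 degrees ."   # a retracted/false "finding" left in the training data
)

# Count word-to-next-word transitions observed in the corpus.
transitions = defaultdict(list)
words = corpus.split()
for a, b in zip(words, words[1:]):
    transitions[a].append(b)

def generate(start, length=6, seed=None):
    """Sample a continuation purely from observed transitions."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = transitions.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# The model happily repeats the false claim in proportion to how often
# it appeared in training: no judgment, no retraction, just statistics.
for s in range(5):
    print(generate("water", seed=s))
```

Run it and roughly one completion in three repeats the false claim. The model has no concept of truth or retraction; it echoes whatever statistics its training text contains, which is the whole point.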
Another recent retraction tied to a Harvard‑affiliated organization managed to make headlines, but countless others slip quietly under the radar, never reported and never acknowledged.
The Dana-Farber Cancer Institute initiated retractions or corrections to 37 papers authored by four senior researchers following allegations of data falsification, a DFCI research integrity officer said on Sunday.
Here’s yet another case of “incorrect” information pumped out on a massive scale, unnoticed by most of society—highlighted by Richard Horton of The Lancet. Meanwhile, the culprits continue to shield themselves behind PhDs and other inflated titles that, in practice, prove to be little more than decorative armor.
Lastly, a New York lawyer was sanctioned in federal court after filing a brief stuffed with fake cases invented by ChatGPT.
AI may find its natural stage in Hollywood, where creativity, fiction, lies, and outright fraud are welcome guests. Beyond that domain, however, its existence is far more troubling than a mere nuisance.
The dangerous fallacy of Artificial Intelligence can be summed up like this: we are asked to believe in and rely upon a system that devours mountains of information churned out by fraudulent academia and a compromised scientific community—where truth and lies blur together—and then delivers outputs society accepts without hesitation.
Picture a system that prescribes medicine Y for disease X, only for the patient to die after taking the jab. We’ve already seen versions of this play out, and AI wasn’t the culprit, at least not yet. The reality is that human biology remains barely understood, as the track record of the pharmaceutical and medical industries makes plain, let alone the complexities of human intelligence, artificial or otherwise. Worse still, people will be seduced into believing in AI’s supposed wisdom, when in fact it is nothing more than the man behind the curtain, pulling levers to control populations for the benefit of a few vile operators, at the speed of light.
The truth is stark: humanity will likely vanish long before genuine Artificial Intelligence ever arrives. The billions poured into expanding LLMs, data centers, and super‑charged chips only highlight the astonishing power of the humble human brain—a fragile three‑pound mass of noodles wrapped in brittle bone. And that, ironically, shows just how little we truly know.