Scared of the Wrong AI
Forget AGI: Narrow AI systems are doing harm now
The Lab Report
I’m Tyler Elliot Bettilyon (Teb) and this is the Lab Report: Our goal is to deepen your understanding of software and technology by explaining the concepts behind the news.
If you’re new to the Lab Report you can subscribe here.
If you like what you’re reading you’ll love one of our classes. Sign up for an upcoming class, browse our course catalog for corporate trainings, or request a custom class consultation.
I belong to two worlds, software and education, where there is profound anxiety about Artificial [General (Super)] Intelligence and its potential role in our futures. Doomers and accelerationists alike tell me that AI tutors, AI teachers, AI curricula designers, and AI engineers will soon replace me. I hear similar anxieties from writers, artists, professional drivers, and all manner of office workers.
I’m also a citizen of the world, watching as AI and its enthusiasts infiltrate the halls of power. Elon Musk whispers in the president’s ears. Sam Altman and Marc Andreessen preach their gospel and deploy their capital from Silicon Valley. Jeff Bezos joins the ensemble from his perch atop the Washington Post. A chorus of billionaires all singing in harmony: “It’s an arms race! We’ll lose to China if we slow down! We must be the first to build AGI!” And, a bit quieter, “But — and don’t freak out when I say this — but AGI might also kill us all... And that’s a risk you’ll just have to accept.”
They want you to believe they can build a god and then enslave it.
Frankly, that is a circle I cannot square. An entire “country of geniuses in a data center” dutifully performing our drudgery? Doubtful. Either they are lying to us about the power of these models, or their hubris is truly astonishing. Although I suppose it could be both, I lean towards the former.
Dario Amodei claims AI will replace 90% of programmers within 6 months, but as of today his company is hiring dozens of software engineers. Sam Altman wrote, “We are now confident we know how to build AGI as we have traditionally understood it.” But a month later, his customers were not feeling the AGI after the release of GPT-4.5. Elon Musk has consistently promised fully self-driving Teslas are right around the corner for over a decade now, while in reality Tesla is stuck at Level 2 autonomy.
I am not convinced that the current slate of technologies will sublimate into AGI through scale alone. This is also the emerging consensus among experts. Neither more training nor test time compute will take deep neural networks to the promised land of generality without some additional breakthrough(s). But I want to put a pin in the question of general intelligence for now because, even if I’m wrong, I don’t think it matters much in approaching AI safety.
Why? Because, unlike artificial general intelligence, artificial super intelligence has clearly arrived by most definitions. It’s just restricted to certain narrow domains.
Games are the first to mind. Checkers has long been solved. Top chess AIs have been undefeated by humans since November 2005. Go seems headed in a similar direction, though current top systems still have some interesting and silly failure modes. From IBM’s Jeopardy!-playing Watson to DeepMind’s Atari agents, AIs can play all kinds of games at superhuman levels.
The list of super intelligent AI systems gets quite long, depending on what you call AI. The joke in Computer Science circles has long been, “AI is anything computers can’t do yet, we just call everything else software.”
For example, an early milestone in deep learning – the AI paradigm du jour — was optical character recognition (OCR): looking at an image of text and extracting the text itself. Some of Yann LeCun’s early work was bringing neural networks to bear on OCR. Through this work, AI became superhuman at sorting mail and transcribing physical documents into digital formats. Google Maps is in the same category: no human can navigate as quickly and reliably in so many different locales.
OCR is “just software” even though the models powering modern OCR have enormous overlap with state-of-the-art LLMs, and belong to the same family of machine learning models (deep neural networks).
Google Maps is “just software” even though its key algorithm, A* Graph Search, features prominently in the world’s most popular AI textbook.
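For readers curious what that textbook staple actually looks like: below is a minimal sketch of A* search over a toy grid. This is an illustration of the algorithm itself, not of how Google Maps implements routing; the grid, heuristic, and function names are my own assumptions for the example.

```python
import heapq

def a_star(grid, start, goal):
    """Find a shortest path on a 2D grid where 0 is open and 1 is a wall,
    using A* search with a Manhattan-distance heuristic."""
    rows, cols = len(grid), len(grid[0])

    def h(cell):
        # Admissible heuristic: Manhattan distance to the goal
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    # Priority queue of (f = g + h, g = cost so far, cell, path taken)
    frontier = [(h(start), 0, start, [start])]
    best_g = {start: 0}  # cheapest known cost to reach each cell

    while frontier:
        f, g, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            in_bounds = 0 <= nxt[0] < rows and 0 <= nxt[1] < cols
            if in_bounds and grid[nxt[0]][nxt[1]] == 0:
                ng = g + 1  # every step costs 1 on this grid
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(frontier, (ng + h(nxt), ng, nxt, path + [nxt]))
    return None  # no route exists
```

Real routing engines work over road graphs with travel-time edge weights rather than grids, but the core idea is the same: the heuristic steers the search toward the goal, so far fewer nodes are explored than plain breadth-first search would visit.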
Regardless, AI tools are becoming more adept in other domains, too. They will surely achieve “super intelligence” in more tasks as time goes on. Hacking, engineering drugs, and better/faster scientific simulations in domains from the three-body problem to weather prediction are all on the “promising” list for eventual AI supremacy.
So here is my big question: How much difference is there really between AGI and humans wielding a suite of enormously powerful, albeit narrow, AI systems?
All the evils we imagine an AGI might perpetrate could be equally perpetrated by a human using narrow AI tools. Hacking operations that launch the nukes or shut down the grid, the invention of super-ebola, massive job loss, and so on. For every existential scenario of note, we can easily replace the G in AGI with a human and arrive at roughly the same apocalypse.
Meanwhile, the handwringing about AGI hides the fact that these narrow AIs, controlled by people, have already infiltrated key aspects of our lives.
AI models already power your information ecosystem. Podcast, movie, and book recommendations – interspersed with relentless “personalized” advertisements — are all driven by AI. Increasingly, the content itself is AI-generated either in part or wholesale. Job and loan applications increasingly rely on AI along with health insurance authorizations, medical diagnoses, and college admissions. Even our justice system has incorporated AI in areas from setting bail to sentencing.
These AIs display every kind of -ism known to humanity, not in the future, but today, and yesterday, and constantly now for several years running. They are feeding you propaganda, parroting old racist medical advice, and replicating the sexist hiring practices of the firms that built them. AI is also powering fake-voice scams, non-consensually undressing people (usually women and girls), and flooding the internet with inane brain-rotting drivel.
Indeed, even the first several cases of AI-powered homicide have already occurred. Not because a rogue AGI wanted to turn us all into paper clips, but because human-directed narrow AIs were deployed negligently and maliciously. I don’t know whether the first death was an inappropriate denial-of-coverage on a lifesaving medication, a false facial recognition match to an innocent man, a wayward “fully self-driving” Tesla, a drone targeting system behaving exactly as intended, or something else. But it’s not a future hypothetical, it’s our current reality.
Crucially, AI doesn’t have to be remotely super nor general to destroy lives. In fact, terrible AI is quite prone to do so. An okay radiologist can miss a tumor. A mediocre hiring manager can discriminate. A bad driver can crash. One “AGI” might be capable of all these things, but does it really matter if the self-driving car can also play chess and trade stocks? When it crashes, people die.
And that’s to say nothing of the dangerous systems upstream of modern AI, many of which are also causing ongoing harm and have enormous potential to do even more.
Massive surveillance by corporations and states involves data from every aspect of our lives. Pictures and videos of us. Our voices. Our whereabouts. Our emails and texts. Detailed lists of everything we buy. What we watch, read, and listen to. All of it aggregated and associated with our identity in real time. This data is a treasure trove for hackers, scammers, and law enforcement who frequently gain access to this information without any warrants by purchasing it from a corporate provider.
AI firms are performing intellectual property theft on an unfathomable scale. AI models have been trained on nearly every copy-protected work ever published on a public webpage, and many works that never were. Books, podcasts, illustrations, videos, poems: you name it, they used it. Creative works are being used, en masse, as fuel for the engine of their creators’ destruction. AI firms want you to call this “fair use” (it’s not).
AI contributes significantly to software’s extraordinary energy, land, and resource use. The International Energy Agency estimates that by 2026, AI use alone will account for 1,000+ terawatt hours of electricity – equivalent to all the energy Japan uses in a year. Most data centers use water to cool their servers. In the American West, these data centers are sucking water from ever-dwindling supplies.
We must build safety systems and protocols that solve these current problems. If we can’t even protect people from other humans using narrow AI, then we have no hope against AGI. Furthermore, work that ensures we are safe from malicious people using powerful software will also shore up our defenses against a potential rogue AGI.
I coach high school debate and there’s a trope among the students: Everything leads to extinction. Climate change? Biosphere collapses, sea levels rise, and farmlands all spoil, which eventually causes extinction. Support NATO? Pisses off Russia, they invade Poland, that escalates to a nuclear war, which yields extinction. End support for NATO? Emboldens Russia, they invade Poland, same same but different extinction. Economic decline? Causes political instability, exacerbates international tensions, causes minor resource-based conflicts that spill over and (you guessed it) escalate into another nuclear-war-fueled extinction.
In debate, we all know it’s a little ridiculous. If all the things debaters claimed would cause extinction actually did, we would all be dead many times over. But the calculus of an infinite risk makes it oddly compelling in a competitive setting. Debaters say, “you must be sure beyond all doubt of something to risk all human life. Therefore, any small risk of extinction outweighs all other priorities.”
This is also how Sam Altman, Elon Musk, Mark Zuckerberg, Curtis Yarvin, Peter Thiel, and the other kingpins of Silicon Valley want you to frame the problem.
Oh, you want us to pass a new intellectual property law? That stifles innovation and gives China the lead in AI. China’s evil AI will probably go rogue and launch all the nukes or create and unleash SARS-COV-3-Ultra. That is, unless our AI can stop it. Are you really willing to risk extinction just to stop our monopolist looting and rapturous negligence?
I am not opposed to the ongoing work being done on alignment and other kinds of preparation for an eventual, hypothetical AGI. Alignment is also crucial for the systems we’ve already created. But we have over-prioritized the unlikeliest threats, and allowed powerful corporate interests to run roughshod over the discourse, regulatory effort, and deployment of AI systems.
Powerful people always acquire powerful tools. It’s time to refocus our regulatory efforts on liability and consequences for those who would use and build these powerful tools maliciously and negligently.
Remember…
The Lab Report is free and doesn’t even advertise. Our curriculum is open source and published under a public domain license for anyone to use for any purpose. We’re also a very small team with no investors.
Help us keep providing these free services by scheduling one of our world class trainings, requesting a custom class for your team, or taking one of our open enrollment classes.