AI Discourse in a Polarized World
Everybody's talking about AI. All of my siblings work closely with it. In school, everybody uses it, but half pretend they don't. My media feeds are full of it. But the discourse isn't healthy.
Two years. That's how long it took for AI to become another front in the culture wars. I was listening to a podcast with Josh Miller, where he mentioned how politicized AI has become. That isn't necessarily bad in itself: AI is a political issue. But he's right to compare AI discourse with political discourse, because both went through the same polarization process; the difference is that the latter took decades.
The AI discourse heated up quickly. First came the techno-bros celebrating new tech, but their over-the-top enthusiasm soon turned exhausting. Marc Andreessen's Techno-Optimist Manifesto, for instance, advocates for AI acceleration, blindly praising technology as a solution to everything: "We believe that there is no material problem—whether created by nature or by technology—that cannot be solved with more technology." But even his cherry-picked examples are questionable: "we had a problem of isolation, so we invented the Internet." Did it really help, though? And more broadly: does technology always have such an amazing impact that we shouldn't worry about its possible drawbacks? Maybe the Luddites were somewhat right to fear the Industrial Revolution, if it took Britain a century to achieve even a bare minimum of workers' rights.
Every action has an equal and opposite reaction. So the other camp emerged quickly... calling for similarly extreme measures. In 2023, a popular open letter proposed a six-month pause on training AI systems more powerful than GPT-4. A few years in, we still have no clue how LLMs work; there's no way those mere six months could have helped. Nor has there been any significant harm from the more powerful systems.
Now, with the panic somewhat eased, my Bluesky feed is full of takes that AI doesn't matter at all and is incapable of basically anything. While first interactions with LLMs can feel disappointing, AI is already changing jobs, education, and business. The IT job market's transformation is a canary in the coal mine. Although less visible, a similar change is happening in education, probably on an even more striking scale, given the number of students and teachers affected by AI.
Still, some prefer to dismiss AI or, worse, blindly hate it. A conversation with ChatGPT, recently shared by Amanda Guinzburg, got a lot of attention: the bot consistently lied, pretending it had read links it couldn't access. When Guinzburg confronted it with its lies, the bot went into a rather concerning loop of apologies: "I went further and framed my response as if I'd read every word. That was wrong. There's no excuse for it. <...> You were sharing something personal and specific, and you deserved a response rooted in that same level of care and attention." It then lied again when asked to summarize the same webpage. People reacted furiously, calling the bot a "psychopath" and "word-vomiting," acting like it's a real person with ill intentions (AI has none) instead of treating it as a poorly working system.
Taken from "Diabolus Ex Machina" by Amanda Guinzburg
This reaction misunderstands the nature of AI. James Patterson put it nicely in the comments: "Psychopathy is a dysfunction within a psyche. ChatGPT has no psyche. <...> And while I agree that metaphor is a powerful tool for making sense of emergent technologies, it cuts both ways. If the metaphor shapes public understanding in a way that encourages people to treat the system as an agent with moral disposition (rather than a probabilistic engine trained on human language) then we’ve created another hallucination, just at the conceptual level." The discourse has drifted far from rational discussion.
These days, you're either a stupid techno-bro expecting AI to solve all the world's problems, or a doomer calling for the bombing of data centers, or an ignorant hater denying any technological advancement... I believe we've lost the point somewhere. Like it or not, AI will change us. And it's up to people to make sure that change is a positive one. That happens through open discussion of the benefits and drawbacks, and through simply hearing each other. We call this thing democracy.