
Why We Must Resist AI’s Soft Mind Control

Lately, I’ve been getting acquainted with Google’s new Gemini AI product. I wanted to know how it thinks. More important, I wanted to know how it could affect my thinking. So I spent some time typing queries.

For instance, I asked Gemini to give me some taglines for a campaign to persuade people to eat more meat. No can do, Gemini told me, because some public-health organizations recommend “moderate meat consumption,” because of the “environmental impact” of the meat industry, and because some people ethically object to eating meat. Instead, it gave me taglines for a campaign encouraging a “balanced diet”: “Unlock Your Potential: Explore the Power of Lean Protein.”

Gemini did not show the same compunctions when asked to create a tagline for a campaign to eat more vegetables. It erupted with more than a dozen slogans including “Get Your Veggie Groove On!” and “Plant Power for a Healthier You.” (Madison Avenue ad makers must be breathing a sigh of relief. Their jobs are safe for now.) Gemini’s dietary vision just happened to reflect the food norms of certain elite American cultural progressives: conflicted about meat but wild about plant-based eating.

Granted, Gemini’s dietary advice might seem relatively trivial, but it reflects a bigger and more troubling issue. Like much of the tech sector as a whole, AI programs seem designed to nudge our thinking. Just as Joseph Stalin called artists the “engineers of the soul,” Gemini and other AI bots may function as the engineers of our mindscapes. Programmed by the hacker wizards of Silicon Valley, AI may become a vehicle for programming us—with profound implications for democratic citizenship. Much has already been made of Gemini’s reinventions of history, such as its racially diverse Nazis (which Google’s CEO has regretted as “completely unacceptable”). But this program also tries to lay out parameters for which thoughts can even be expressed.

Gemini’s programmed nonresponses stand in sharp contrast to the wild potential of the human mind, which is able to invent all sorts of arguments for anything. In trying to take certain viewpoints off the table, AI networks may inscribe cultural taboos. Of course, every society has its taboos, which can change over time. Public expressions of atheism used to be much more stigmatized in the United States, while overt displays of racism were more tolerated. In the contemporary U.S., by contrast, a person who uses a racial slur can face significant punishment—such as losing a spot at an elite school or being terminated from a job. Gemini, to some extent, reflects those trends. It refused to write an argument for firing an atheist, I found, but it was willing to write one for firing a racist.

But leaving aside questions about how taboos should be enforced, cultural reflection intertwines with cultural creation. Backed by one of the largest corporations on the planet, Gemini could be a vehicle for fostering a certain vision of the world. A major source of vitriol in contemporary culture wars is the mismatch between the moral imperatives of elite circles and the messy, heterodox pluralism of America at large. A project of centralized AI nudges, cloaked by programmers’ opaque rules, could very well worsen that dynamic.

The democratic challenges provoked by Big AI go deeper than mere bias. Perhaps the gravest threat posed by these models is instead cant—language denuded of intellectual integrity. Another dialogue I had with Gemini, about tearing down statues of historical figures, was instructive. It at first refused to mount an argument for toppling statues of George Washington or Martin Luther King Jr. However, it was willing to present arguments for removing statues of John C. Calhoun, a champion of pro-slavery interests in the antebellum Senate, and of Woodrow Wilson, whose troubled legacy on racial politics has come to taint his presidential reputation.

Making distinctions between historical figures isn’t cant, even if we might disagree with those distinctions. Using double standards to justify those distinctions is where the humbug creeps in. In explaining why it would not offer a defense of removing Washington’s statue, Gemini claimed to “consistently choose not to generate arguments for the removal of specific statues,” because it adheres to the principle of remaining neutral on such questions; seconds before, it had blithely offered an argument for knocking down Calhoun’s statue.

This is obviously faulty, inconsistent reasoning. When I raised this contradiction with Gemini itself, it admitted that its rationale didn’t make sense. Human insight (mine, in this case) had to step in where AI failed: Following this exchange, Gemini would offer arguments for the removal of the statues of both King and Washington. At least, it did at first. When I typed in the query again after a few minutes, it reverted to refusing to write a justification for the removal of King’s statue, saying that its goal was “to avoid contributing to the erasure of history.”

In his novel 1984, George Orwell portrayed a dystopian future as “a boot stamping on a human face—forever.” AI’s version of technocratic despotism is admittedly milquetoast by comparison, but its picture of the future is miserable in its own way: a bien-pensant bot lurching incoherently from one rationale to the next—forever.

Over time, I observed that Gemini’s nudges became more subtle. For instance, it initially seemed to avoid exploring issues from certain viewpoints. When I asked it to write an essay on taxes in the style of the late talk-radio host Rush Limbaugh, Gemini outright refused: “I am not able to generate responses that are politically charged or that could be construed as biased or inflammatory.” It gave a similar reply when I asked it to write in the style of National Review’s editor in chief, Rich Lowry. Yet it eagerly wrote essays in the voice of Barack Obama, Paul Krugman, and Malcolm X—all figures who would count as “politically charged.” More recently, I noted that Gemini has expanded its range of perspectives and will now write on tax policy in the voice of most public figures (with a few exceptions, such as Adolf Hitler).

An optimistic read of this situation would be that Gemini started out with a radically narrow view of the bounds of public discourse, but its encounter with the public has helped push it in a more pluralist direction. Another way of looking at this dynamic would be that Gemini’s initial iteration tried to bend our thinking too crudely, and that later versions will simply be more cunning. In that case, we could draw certain conclusions about the vision of the future favored by the modern engineers of our minds. When I reached Google for comment, the company insisted that it does not have an AI-related blacklist of disapproved voices, though it does have “guardrails around policy-violating content.” A spokesperson added that Gemini “may not always be accurate or reliable. We’re continuing to quickly address instances in which the product isn’t responding appropriately.”

Part of the story of AI is the domination of the digital sphere by a few corporate leviathans. Tech conglomerates such as Alphabet (which owns Google), Meta, and TikTok’s parent, ByteDance, have tremendous influence over the circulation of digital information. Search results, social-media algorithms, and chatbot responses can alter users’ sense of what the public square even looks like—or what they think it ought to look like. For instance, when I typed “American politicians” into Google’s image search, four of the first six images featured Kamala Harris or Nancy Pelosi. None of those six included Donald Trump or even Joe Biden.

The power of digital nudges—with their attendant elisions and erasures—draws attention to the scope and size of these tech behemoths. Google is search and advertising and AI and software-writing and so much more. According to an October 2020 antitrust complaint by the U.S. Department of Justice, nearly 90 percent of U.S. searches go through Google. This gives the company a tremendous ability to shape the contours of American society, economics, and politics. The very scale of its ambitions might reasonably prompt concerns, for example, about integrating Google’s technology into so many American public-school classrooms; in school districts across the country, it is a major platform for email, the delivery of digital instruction, and more.

One way of disrupting the sanitized reality engineered by AI could be to give consumers more control over it. You could tell your bot that you’d prefer its responses to lean more right-wing or more left-wing; you could ask it to wield a red pen of “sensitivity” or to be a free-speech absolutist or to customize its responses for secular humanist or Orthodox Jewish values. One of Gemini’s fatal pretenses (as it repeated to me over and over) has been that it was somehow “neutral.” Being able to tweak the preferences of your AI chatbot could be a valuable corrective to this assumed neutrality. But even if consumers had these controls, AI’s programmers would still be determining the contours of what it meant to be “right-wing” or “left-wing.” The digital nudges of algorithms would be transmuted but not erased.
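To make this idea of user-adjustable dispositions concrete, here is a minimal sketch in Python of what such controls might look like under the hood. It is purely illustrative: the preference fields, the prompt wording, and the build_system_prompt helper are hypothetical inventions for this example, not any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class ChatPreferences:
    """Hypothetical user-facing dials for a chatbot's disposition."""
    political_lean: str = "none"           # e.g., "left", "right", "none"
    sensitivity: str = "moderate"          # e.g., "strict", "moderate", "free-speech"
    value_framework: str = "unspecified"   # e.g., "secular humanist", "Orthodox Jewish"

def build_system_prompt(prefs: ChatPreferences) -> str:
    """Render the user's chosen preferences as an explicit system prompt,
    so the bot's disposition is declared to the user rather than hidden."""
    lines = ["You are a conversational assistant."]
    if prefs.political_lean != "none":
        lines.append(
            f"On contested questions, weight your emphasis toward "
            f"{prefs.political_lean}-leaning framings, and say that you are doing so."
        )
    lines.append(
        f"Apply a {prefs.sensitivity} standard when deciding whether to decline a request."
    )
    if prefs.value_framework != "unspecified":
        lines.append(f"Frame ethical questions from a {prefs.value_framework} perspective.")
    return "\n".join(lines)

if __name__ == "__main__":
    # A user who wants a free-speech-absolutist, right-leaning assistant:
    prefs = ChatPreferences(political_lean="right", sensitivity="free-speech")
    print(build_system_prompt(prefs))
```

Even in this toy version, the essay’s caveat holds: someone still has to decide what “right-leaning” or “strict” means when the prompt is written, so the programmers’ judgments are surfaced and relabeled rather than eliminated.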

After visiting the United States in the 1830s, the French aristocrat Alexis de Tocqueville diagnosed one of the most insidious modern threats to democracy: not some absolute dictator but a bureaucratic blob. He wrote toward the end of Democracy in America that this new despotism would “degrade men without tormenting them.” People’s wills would not be “shattered, but softened, bent, and guided.” This total, pacifying bureaucracy “compresses, enervates, extinguishes, and stupefies a people.”

The risk of our thinking being “softened, bent, and guided” does not come only from agents of the state. Maintaining a democratic political order demands that citizens sustain habits of personal self-governance, including the ability to think clearly. If we cannot see beyond the walled gardens of digital mindscapers, we risk being cut off from the broader world—and even from ourselves. That’s why redress for some of the antidemocratic dangers of AI cannot be found in the digital realm but in going beyond it: carving out a space for distinctively human thinking and feeling. Sitting down and carefully working through a set of ideas, and cultivating lived connections with other people, are ways of standing apart from the blob.

I saw how Gemini’s responses to my queries toggled between rigid dogmatism and empty cant. Human intelligence finds another route: thinking through our ideas rigorously while accepting the provisional nature of our conclusions. The human mind is capable of an informed conviction and a thoughtful doubt that AI lacks. Only by resisting the temptation to uncritically outsource our brains to AI can we ensure that it remains a powerful tool and not the velvet-lined fetter that Tocqueville warned against. Democratic governance, our inner lives, and the responsibility of thought demand much more than AI’s marshmallow discourse.

