Subtle Propaganda
Google returns results for over eight billion searches per day, a truly unfathomable number. Millions of people turn to Google to find information on current events, the recent Hurricane Beryl being no exception. I was one of those people, searching Google for "hurricane" to read about the strongest storm to develop before July in Atlantic hurricane history.
As I scrolled through the usual top results, I was greeted with Google's new AI-generated Perspectives section, which promises to add "new ways to find and explore diverse perspectives on Search". The top result in this section was an opinion piece published in the Duluth News Tribune titled "Pro/Con: There are big bucks in scary predictions, climate alarmism", written by the Heritage Foundation-funded writer Chris Talgo. Just to get it out of the way first: this is a terrible article that fundamentally misunderstands the proven science of climate change, written by someone with obvious financial and career incentives to oppose climate legislation.
It should also be noted that the presence of such an article as the number one Google result for Perspectives can have serious, negative consequences. The article questions the reliability of some of our most important government agencies in times of crisis. It is not much of a leap to imagine a person dismissing an evacuation recommendation as just some "alarmism" from a government agency. This can get people killed.
That said, I don't think the existence of such an article is so interesting on its own. After all, there will always be people willing to sell their souls to corporations and think tanks, and overextended newspapers will continue to print their "opinions". Instead, I think it is much more interesting to examine how such an article interacts with the modern media landscape, including generative artificial intelligence (AI).
First, let's set the stage. Google, a company that claims it will reach net-zero by 2030, looks to be moving in the opposite direction as it embraces generative AI products with massive climate impacts. Despite its claims to be a forward-looking company that does good, this is no surprise: Google has long profited explicitly from running climate-change-denying ads.
How exactly did such a harmful article find its way to the top of the search results? Unfortunately, the answer is tricky in the age of generative AI. There was no big-wig in Google's C-suite pushing a big red button that forced climate-change-denying stories to the top of the feed. Instead, this was the result of a generative AI model, trained on an unknown dataset and tuned by an army of engineers, putting out its version of "diverse" perspectives.
Indeed, this is really a story about one small aspect of the larger "enshittification" of the web in the age of AI. We are using algorithms that are increasingly difficult to understand to push news stories during times of crisis. These algorithms carry the biases inherent in their human creators, with or without those creators' knowledge. This subtle propaganda can create a sort of positive feedback loop: the algorithm reaffirms our biases, which then strengthens the biases inherent to the AI, and so on.
We should ask ourselves then: is this even necessary? Was search so broken that it needed AI to come save the day? Or is this just the latest expansion for the sake of expansion that satisfies the shareholders, at least for this quarter? While the original PageRank algorithm will be remembered as revolutionizing internet search (and was developed from publicly funded research), I suspect that today's AI search tools will not be remembered so fondly. Instead, they will be remembered as the tools that broke the web.
For a company that serves more daily search results than there are people on the planet, such power demands massive responsibility. Unfortunately, such responsibility seems to elude even the best-intentioned multinational corporations. The only solution is to break Google into many pieces, none of which is large enough to abuse generative AI. The Department of Justice is already taking the lead, pursuing several historic antitrust lawsuits against Google (among other big-tech companies). This is a moment to seize as a possible turning point in the evolution of the internet.
We stand on the brink, with two choices in our future. One option paves the way for giant corporations to increasingly control the web, pushing subtle propaganda through opaque algorithms. The other pursues the ideal web as its founders envisioned it: a web that allows for collaboration, fosters compassion, and generates creativity. While we may have strayed off course along the way, the next step toward the ideal web is clear: break up big tech.