AI and the Threat of “Enshittification”
A Personal Encounter with AI-Driven Recommendations
Recently, during a trip to Italy, I did what many people do these days: I asked GPT-5 to help plan my itinerary, soliciting its suggestions for sightseeing attractions and restaurants. GPT-5 pinpointed a particular eatery, a short stroll down Via Margutta from our hotel in Rome, as the top dinner choice. The experience at this establishment, which I’ll refer to as “Babette” (I may want a reservation there again someday), was nothing short of exceptional, ranking among the most memorable meals I’ve had.
Upon my return, my curiosity piqued, I asked the model how it had selected the restaurant. The response was intricate and commendable, citing glowing testimonials from locals, mentions in food blogs and the Italian press, and the restaurant’s renowned fusion of Roman and contemporary culinary styles. It also factored in the restaurant’s proximity to our hotel.
This interaction, however, necessitated a leap of faith on my part. I had to trust that GPT-5 was operating as an impartial advisor, untainted by bias in its restaurant selection. I had to assume that the recommendation was not a form of sponsored content, and that the restaurant would not receive a portion of my bill. While I could have conducted in-depth research independently to verify the recommendation (I did, in fact, visit the restaurant’s website), the allure of AI lies in circumventing such arduous processes.
This experience both fortified my confidence in AI-generated results and instigated a series of contemplations. As entities like OpenAI amass more power and strive to yield returns for their investors, one cannot help but wonder: will AI succumb to the devaluation that seems to afflict the tech applications we use daily?
The Concept of “Enshittification”
Writer and tech critic Cory Doctorow has coined the term “enshittification” to describe this phenomenon of devaluation. His theory posits that platforms such as Google, Amazon, Facebook, and TikTok initially strive to satisfy users. However, once they have vanquished their competitors, they deliberately become less useful in order to maximize profits. After WIRED republished Doctorow’s groundbreaking 2022 essay on this topic, the term entered common parlance, primarily because people recognized its aptness. In fact, “enshittification” was designated as the American Dialect Society’s 2023 Word of the Year. The concept has been so frequently cited that it has transcended its somewhat vulgar connotation, even appearing in forums that would typically eschew such language. Doctorow has recently released a book titled Enshittification, with a cover featuring an emoji one can probably guess.
The Potential Impact of “Enshittification” on AI
If chatbots and AI agents were to undergo enshittification, the consequences could be graver than the diminishing utility of Google Search, the inundation of Amazon results with ads, or the prioritization of anger-inducing clickbait over social content on Facebook.
AI is on a trajectory to become an ever-present companion, providing instant responses to a multitude of our requests. People already rely on it to interpret current affairs, seek advice on various purchasing decisions, and even make life-altering choices. Given the exorbitant costs associated with developing a comprehensive AI model, it is reasonable to anticipate that only a handful of companies will dominate this field. These companies plan to invest hundreds of billions of dollars in the coming years to enhance their models and make them accessible to as many users as possible. Presently, I would argue that AI is in what Doctorow terms the “good to the users” phase. However, the pressure to recoup the colossal capital investments will be immense, particularly for companies with a captive user base. As Doctorow notes, these circumstances enable companies to exploit their users and business clients “to reclaim all the value for themselves.”
Advertising and the Threat of “Enshittification” in AI
When envisioning the enshittification of AI, advertising immediately springs to mind. The nightmare scenario is that AI models will base their recommendations on which companies have paid for placement. Currently, this is not the case, but AI firms are actively exploring the advertising space. In a recent interview, OpenAI CEO Sam Altman stated, “I believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.” Simultaneously, OpenAI recently announced a partnership with Walmart, enabling the retailer’s customers to shop within the ChatGPT app. One can’t help but question the potential conflicts of interest. The AI search platform Perplexity has a program where sponsored results are presented in clearly labeled follow-ups. It assures users that “these ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.”
The question remains: will these safeguards hold? Perplexity spokesperson Jesse Dwyer asserts, “For us, the number one guarantee is that we won’t let it.” At OpenAI’s recent developer day, Altman emphasized that the company is “hyper-aware of the need to be very careful” about serving its users rather than itself. However, Doctorow’s doctrine casts doubt on such assurances: “Once a company can enshittify its products, it will face the perennial temptation to enshittify its products.”
Other Forms of “Enshittification” in AI
Advertising is not the sole avenue through which AI can become enshittified. Doctorow cites instances where companies, upon achieving market dominance, alter their business models and fees. For example, in 2023, Unity, the leading provider of video-game development tools, introduced a new “runtime fee.” This move was met with such vehement opposition from users that the fee was ultimately rescinded. Similarly, consider the evolution of streaming services like Amazon Prime Video. Once an ad-free service, it now subjects users to commercials before and during movies, and users must pay to disable them. Moreover, the price of Amazon Prime continues to increase. Thus, it appears to be a common practice in the big-tech realm to lock users into a service and then levy ever-higher fees. It’s conceivable that, in the future, users may be required to upgrade to a more expensive tier to maintain the same level of intelligence in a chatbot’s responses. Additionally, companies that initially promised not to use users’ chatbot activities to train future models may renege on this promise, simply because they can get away with it.
Cory Doctorow’s Take on AI and “Enshittification”
Doctorow did not address AI in his book, so I reached out to him to gauge his perspective on whether AI is destined to follow the path of enshittification. I anticipated that he would delineate the various ways in which AI companies might succumb to this phenomenon. To my surprise, he offered a different perspective. He is not a proponent of AI, contending that the field has not even reached the “good to users” stage I described earlier. Nevertheless, he posits that the enshittification process could occur regardless. Due to the opacity of what transpires within the “black boxes” of large language models (LLMs), he argues, “they have an ability to disguise their enshittifying in a way that would allow them to get away with an awful lot.” Most significantly, he claims that the “terrible economics” of the field compel companies to enshittify even before delivering value, stating, “I think they’ll try every sweaty gambit you can imagine as the economics circle the drain.”
While I disagree with Doctorow regarding the value of AI (after all, it led me to Babette), I share his concern that the technology may be susceptible to the enshittification process he has astutely identified in existing tech giants. Intriguingly, GPT-5 concurs with me. When I posed the question to the chatbot, it responded, “Doctorow’s ‘enshittification’ framework (platforms start good for users, then shift value to business customers, then extract it for themselves) maps disturbingly well onto AI systems if incentives go unchecked.” GPT-5 then proceeded to outline several ways in which AI companies could degrade their products for profit and power. AI companies may assure us that they won’t enshittify, but their own products have already laid out the blueprint.
This is an edition of Steven Levy’s Backchannel newsletter. Read previous newsletters here.
