How Food Delivery Workers Shaped China’s Algorithm Regulations
In 2021, China issued a series of policy documents aimed at governing the algorithms that underpin much of the internet today. The policies included a regulation on recommendation algorithms and a draft regulation on synthetically generated media, commonly known as deepfakes. Domestically, Chinese media touted the recommendation engine regulations for the options they gave Chinese internet users, such as the choice to “turn off the algorithm” on major platforms. Outside China, these regulations have largely been seen through the prism of global geopolitics, framed as questions over whether China is “ahead” in algorithm regulations or whether it will export a “Chinese model” of artificial intelligence (AI) governance to the rest of the world.
Matt Sheehan is a fellow at the Carnegie Endowment for International Peace, where his research focuses on global technology issues, with a specialization in China’s artificial intelligence ecosystem.
These are valid questions with complex answers, but they overlook the core driver of China’s algorithm regulations: they are designed primarily to address China’s domestic social, economic, and political problems. The Chinese Communist Party (CCP) is the ultimate arbiter here, deciding both what counts as a problem and how it should be solved. But the CCP doesn’t operate in a vacuum. Like any governing party, it is constantly creating new policies to try to put out fires, head off problems, and respond to public desires.
Through a short case study, we can see how Chinese food delivery drivers, investigative journalists, and academics helped shape one part of the world’s first regulations on recommendation algorithms. From that process, we can learn how international actors might better predict and indirectly influence Chinese algorithm policy.
Sharon Du is a James C. Gaither Junior Fellow in the Carnegie Asia Program.
Over the past decade, food delivery apps have exploded in popularity in China. The country’s food delivery industry is three times larger than the United States’, employing 6 million drivers who crisscross cities on electric bikes and scooters. Two companies—Meituan and Ele.me—control almost 98 percent of the market, and they have been locked in a cutthroat competition to win over customers with low fees and fast deliveries.
The delivery drivers have borne the brunt of this contest. Meituan and Ele.me use algorithms to both assign deliveries and tell drivers how quickly they must make them. As competition between the companies ramped up, drivers frequently found themselves unable to meet the ever-increasing demands of the algorithms. These drivers are typically rural migrants who lack insurance or proper legal protections, and they have become a symbol of labor exploitation over the past two years.
Public outcry exploded in September 2020 when Chinese magazine Renwu published a longform investigation titled “Delivery Workers Trapped in the System” (originally in Chinese). Renwu is run by the state-owned People’s Publishing House, but it has previously published politically sensitive content, including a March 2020 interview with a coronavirus whistleblower, which was later censored. The delivery workers piece was based on dozens of interviews with drivers, company employees, and academics, and it detailed the plight of drivers whose schedules and incomes are determined by algorithms that don’t account for the realities of Chinese cities.
Renwu reported that Meituan and Ele.me’s unrealistic time windows forced drivers to speed and run red lights to avoid hefty fines for “late” deliveries, and routes recommended by the platform often instructed drivers to ride their scooters in the opposite direction of traffic or directly through walls. One driver reported seeing a colleague dismembered by oncoming traffic, while another found their account deleted when they sought compensation following injuries sustained at work.
In addition to accounts from delivery workers, the article also built on academic research by Chinese sociologists whose work has charted the impact of algorithmic logic on delivery workers. “The Chinese government is still in the initial stage of regulating platform development,” Sun Ping of the Chinese Academy of Social Sciences argued in one research paper cited in the article. “Hence, the logic of algorithms in food delivery platforms mainly reflects the logic of capitalism.” Both Sun and Renwu reference American scholars, including anthropologist Nick Seaver’s work on algorithms as culture and James Carse’s Finite and Infinite Games. Tracing these intellectual roots illustrates one of several ways that international technology governance ideas can enter into the Chinese discourse—and even shape it.
The Renwu article quickly went viral, provoking intense public outrage; within a month, it garnered more than 3 million views on Weibo alone. Chinese state media quickly piled on the criticism, reprinting and responding to the article. Leading television anchor Bai Yansong called for the platforms to treat their employees as “people, rather than machines.” CCTV Finance stated that “in past years, the government adopted a relatively lax attitude toward these platforms’ development . . . however, since it’s developed to a certain point, there should be some level of oversight.”
Meituan and Ele.me responded the day after publication. While Meituan added eight minutes of “flexible time” for drivers, Ele.me gave customers the option to “wait five minutes more” for their delivery. These responses did little to quiet the criticism, with many noting that Ele.me was simply shifting the burden of reform from the platform to its users. In the following months, Ele.me drew further backlash when it paid only 2,000 yuan ($310) to a driver’s family following his death. In January 2021, an Ele.me driver in Zhejiang province self-immolated in protest after the company withheld 5,000 yuan ($770) in wages.
Renwu’s bombshell benefited from a bit of good timing. The article’s grim accounts of worker exploitation may have been deemed too inflammatory in earlier years, when the government was championing Chinese platform technology companies. But by fall 2020, the political zeitgeist was shifting.
Tech platform companies such as Alibaba and Tencent had gained control over huge swaths of information flows and economic activities in China, giving them a level of influence and independence that the party-state did not like. Beginning in late 2020, the Chinese government rolled out a blistering series of attacks on its largest tech companies, canceling the public listing of Alibaba affiliate Ant Financial, slapping Meituan with a $530 million antitrust fine, and imposing a $1.28 billion penalty on Didi following a year-long investigation. Backlash against big tech created an environment in which criticism of food delivery platforms was permitted—and even welcome. The Renwu article likely wasn’t written at the behest of any government officials, but it highlighted deep sociotechnical problems in an industry that the government also had in its crosshairs.
But that leeway wasn’t extended to all in China. When delivery workers themselves attempted to organize strikes independently, they were detained by the Chinese police. The government’s response to problems revealed by the Renwu article followed a long-standing playbook for responding to social issues: arrest those who attempt to organize collective action and simultaneously create policy that addresses the underlying cause of public ire.
That policy response came during the summer of 2021, when the Chinese government rolled out two new regulations impacting the algorithms underlying food delivery platforms. The first move came in July of that year, when the State Administration for Market Regulation and the Cyberspace Administration of China (CAC) teamed up with five other government bodies to issue a document requiring platforms to “safeguard the rights and interests of food delivery workers.” Directly addressing algorithms in its first provision, the document demanded that platforms not use the “strictest algorithm” when assigning and assessing deliveries. Instead, it called for them to use a “moderate algorithm” (“算法取中”) that balances several factors and loosens up time limits. The policy document also addressed several other issues raised in the Renwu article, such as ensuring that delivery driver incomes do not fall below minimum wage and encouraging participation in social and injury insurance programs.
One month later, the CAC released the draft of its broader regulation on internet platforms’ recommendation algorithms. Recommendation algorithms are at the core of most consumer-facing digital products, determining which news articles, social media posts, and products users see. The majority of the CAC’s draft regulation focused on these use cases, requiring that algorithms “disseminate positive energy” and avoid excessive price discrimination.
But tucked into the regulation’s twenty-nine provisions was one that directly addressed the role of recommendation algorithms in dispatch systems for laborers, such as food delivery workers. The language in the provision remains vague, requiring platforms to make sure that algorithms provide workers with adequate compensation and rest and that they “ensure workers’ rights and interests.”
Despite that vagueness, the inclusion of dispatching algorithms forced Meituan and Ele.me to submit information to the CAC’s newly established algorithm registry (or “filing system”). The public versions of these filings include a high-level description of each algorithm’s fundamentals, functioning, and use cases. In their filings, both Meituan’s and Ele.me’s entries emphasize the steps they have taken to lengthen delivery times. Meituan’s entry explicitly states that its algorithm chooses the longest among four possible estimated delivery times to display on the order page. Ele.me’s emphasizes that the “minimum delivery time will not be adopted” and describes mechanisms for drivers to request more time in difficult conditions. Earlier this year, a discussion with an employee at one of these companies confirmed that making adjustments to delivery algorithms became a top engineering priority during this time.
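The display-time rule Meituan describes in its filing can be pictured with a short sketch. This is purely illustrative: the function name, the inputs, and the idea of where the four estimates come from are our assumptions, since the real system is proprietary and the public filing gives only a high-level description.

```python
def displayed_delivery_time(estimates_minutes: list[float]) -> float:
    """Return the delivery time shown on the order page.

    Per Meituan's public algorithm filing, the system produces four
    estimated delivery times and displays the longest one, which gives
    the driver the most slack. What the four estimates are based on
    (e.g., distance, time of day, historical averages) is our guess.
    """
    assert len(estimates_minutes) == 4, "filing describes four estimates"
    return max(estimates_minutes)


# Example: four hypothetical model estimates, in minutes.
print(displayed_delivery_time([28.0, 31.5, 26.0, 34.0]))  # the longest estimate is shown
```

The point of the rule, as framed in the filing, is that a conservative displayed time relaxes the deadline pressure on drivers rather than tightening it.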
This case study examines just two regulations within Chinese AI policymaking. But through it, U.S. officials and experts can get a glimpse into forces that will continue to shape how China governs algorithms.
As algorithms become increasingly ingrained in China’s economy and society, algorithmic regulations are no longer a strictly technocratic matter but are influenced by a broad range of actors across Chinese society: journalists, academics, and public sentiment. These actors all exist within the political constraints of President Xi Jinping’s China, and many problems and solutions are simply not up for discussion. But when these actors’ goals align—or at least don’t conflict—with CCP priorities, they can materially shape government regulations on technology.
This also presents international actors with opportunities to better predict Chinese technology policy. Foreign analysts justifiably focus their attention on the official documents that emerge from the Chinese party-state. This is where ultimate power resides and where the most reliable signals of future policy appear. But widening the lens also brings benefits. By tracking influential media outlets, academics, and public conversation, U.S. and international analysts can get earlier and more granular insight into the problems that the Chinese government may seek to tackle.
Finally, this more diffuse set of inputs into government policy also illustrates the ways that international ideas on algorithms and AI governance find their way into Chinese policy. The original Renwu article relied heavily on the work of Chinese labor sociologists, who in turn engage with the work of sociologists and anthropologists in the United States. Similarly, bureaucrats shaping China’s AI governance frameworks are closely following international developments, examining the structure of the European Union’s proposed AI Act and the implementation of the Federal Trade Commission’s initiatives targeting algorithmic bias.
None of these influences constitute a direct mechanism for altering Chinese policy. And the Chinese government will never go against its core interests merely because an American academic advised it. But when international scholars, technologists, and policymakers debate AI policy, there are people in China paying attention. And when the sociopolitical stars align, those debates can nudge Chinese regulation in new directions.