China’s Social Credit System
We examine China’s Social Credit System (SCS) and argue that the new governance mode underlying the SCS relies on the fuzzy notion of “trust,” thus derogating from mandates enshrined in China’s Constitution.
- Dr Yu-Jie Chen (Institutum Iurisprudentiae, Academia Sinica) (see also U.S.-Asia Law Institute)
- Professor Ching-Fu Lin (National Tsing Hua University)
- Dr Han-Wei Liu (Department of Business Law and Taxation, Monash University)
Project Background and Aims
This project, “‘Rule of Trust’: The Power and Perils of China’s Social Credit Megaproject,” is a collaborative effort that brings together three scholars: Dr Yu-Jie Chen of the Institutum Iurisprudentiae, Academia Sinica (Taiwan) (from August, an Assistant Professor at New York University Shanghai) and the U.S.-Asia Law Institute at NYU Law School; Professor Ching-Fu Lin of National Tsing Hua University (Taiwan); and Dr Han-Wei Liu of Monash University. In addition to research support from the collaborators’ home institutions, the project was enabled by the BLT Research Grant. Its aims are described below.
With rapid developments in data analytics, computational capacity, and machine learning techniques, artificial intelligence (AI) systems have been increasingly employed by government institutions—social welfare agencies, law enforcement authorities, and courts—to make real-life decisions that impinge upon individuals’ rights and obligations. While society has benefited from AI-enabled applications in many ways, potential problems tied to their underlying designs should not be overlooked. Depending on the data sets and algorithms involved, controversies have emerged over biases against and harm to underrepresented groups, the invisible powers and rules of scores and bets, and separate and unequal economies. These broader implications have engendered scholarly debates over the ethical, social, legal, and political dimensions of AI. Yet while the existing literature spans various aspects of AI—such as the legal personhood of robots, regulatory frameworks for self-driving cars, autonomous weapons, and the use of AI in policing and court proceedings—much of the discussion has focused on American or European contexts. The interaction between the Chinese government and AI has thus far received scant attention.
As a formidable power in innovation and advanced technologies, China has been a pioneer in integrating AI into its national policy and legal infrastructure. One of China’s most ambitious AI-enabled projects is the so-called “social credit” system. As part of its strategic plan for a comprehensive framework of social control and stability by 2020, China has rolled out pilot projects in Shanghai, among other places. The idea of monitoring and rating the behavior of every individual has attracted much media attention, yet questions about the actual operation of the system and its legal, political, and ethical ramifications merit in-depth scholarly examination. How does the Chinese government gather information about each person? What behavioral patterns are monitored, and why? What are the rewards and punishments in the social credit system? Are such rewards and punishments limited to realms of public authority exercised by the government, or can they extend to other spheres dominated by giant corporations? On what basis does the government justify the use of certain factors for rating and scoring purposes? To what extent is the system transparent and accountable? Does Chinese law adequately address the relevant human rights concerns, including privacy, non-discrimination, and the right to an effective remedy? What does the system tell us about China’s ideology of technology and control? This paper investigates such questions and, through them, examines the AI-enabled powers of an authoritarian regime to extend its rule over wide-ranging aspects of its citizens’ lives.
Based on traditional legal analysis, the paper thoroughly examines laws, regulations, and other normative documents at both local and national levels. In particular, we consider the trajectory of the Social Credit System’s development by tracing how regulatory tools have evolved since the establishment of the People’s Republic of China in 1949. We also engage in comparative analysis, pointing out how the Chinese government’s new approach differs from those of other jurisdictions in leveraging disruptive technologies.
We identify four key mechanisms of the SCS—information gathering, information sharing, labeling, and joint sanctions—and highlight their unique characteristics as well as their normative implications. In our view, the new governance mode underlying the SCS—what we call the “rule of trust”—relies on the fuzzy notion of “trust” and on wide-ranging, arbitrary, and disproportionate punishments. It derogates from the principle of “governing the country in accordance with the law” enshrined in China’s Constitution.
This Article contributes to legal scholarship by offering a distinctive critique of the perils of China’s SCS in terms of the party-state’s tightening social control and human rights violations. Further, we critically assess how the Chinese government uses information and communication technologies to facilitate data gathering and data sharing in the SCS with few meaningful legal constraints. The unbounded and uncertain notion of “trust” and the unrestrained deployment of technology are a dangerous combination in the context of governance. We conclude with a caution: with considerable sophistication, the Chinese government is preparing a far more sweeping version of the SCS, reinforced by artificial intelligence tools such as facial recognition and predictive policing. These developments will further empower the government to enhance surveillance and perpetuate authoritarianism.
- Yu-Jie Chen, Ching-Fu Lin and Han-Wei Liu, “‘Rule of Trust’: The Power and Perils of China’s Social Credit Megaproject,” Columbia Journal of Asian Law, Vol. 32, No. 1, 2018
- Professor Frank Pasquale (University of Maryland School of Law), a leading expert in the field of artificial intelligence and law, has described this paper as an “[i]mportant paper on the shift from ‘rule by law’ to ‘rule of trust’ at the advent of digital totalitarianism.” Since it was posted on SSRN on December 20, 2018, the paper has been widely downloaded and discussed on Twitter.