As artificial intelligence becomes increasingly integrated into governance and society in China, its sociocultural and political implications present a compelling area of study. This panel will explore the topic from multiple perspectives, examining the biases and authoritarian tendencies that AI applications can embed and reinforce.
The papers by Xing and by Huang and Tsai highlight the importance of local context in the use of AI surveillance for political control. Xing's research challenges the notion of AI systems as fixed black boxes by examining Didi, a popular ride-hailing platform in China, and demonstrates that Didi's algorithms are not static but fragmented and contingent, shaped significantly by interactions with various levels of government. Drawing on participant observation and interviews, the study reveals the complex relationship between the state and platforms in shaping algorithmic surveillance systems. Huang and Tsai explore how a security solution developed in established democracies was adapted by Chinese police and digital companies to fit the Chinese party-state system; they then examine the spread of China's electronic surveillance technology across the Global South and its role in advancing digital authoritarianism. The authors argue that the specific mechanisms of authoritarian capitalism need to be reconsidered in light of the local contexts in which surveillance technology is adopted.
The other two papers in this panel address how AI applications may gradually bias beliefs and understandings. Kang and Zhu's research uses a survey experiment to investigate why citizens in China exhibit limited resistance to state surveillance. Their findings reveal that public acceptance of surveillance during a crisis, specifically the COVID-19 pandemic, can diminish citizens' sense of agency and increase compliance with the same surveillance strategies even after the crisis has passed. Wang, Ling, and Zhang's paper examines AI's pedagogical implications through content analysis, interviews, and a digital ethnography in which students interact with chatbots trained on popular discourses about ancient thinkers. It reveals how Chinese narratives about these thinkers shape the AI's output and, in turn, students' learning behaviours and outcomes.
Together, these diverse perspectives will foster a rich discussion on the implications of AI in authoritarian contexts, urging a critical examination of its impact on political dynamics.