This blog is inspired by Joanne Kuai’s webinar “Media innovation in China and its implication on contemporary China research”, which was arranged by the China in Europe Research Network (CHERN); the Nordic Institute for Asian Studies (NIAS), University of Copenhagen, Denmark; and the Centre for East Asian Studies (CEAS), University of Turku, Finland. For further information about the webinar, you can contact her by sending an email to email@example.com
Artificial intelligence (AI), as a disruptive technology, is changing society and our personal lives at an unprecedented pace, in both good and bad ways. From what we can buy at the convenience store, to how large a loan we can get at the bank, to nearly all the content we see on social media platforms, AI is always there, playing the role of an invisible yet ubiquitous hand. This is exactly the case in today's China, where, since the release of the New Generation AI Development Plan (2017), AI has become something of a government-endorsed panacea for efficiency, productivity, and economic vitality. Meanwhile, we see many unethical uses of AI hitting the headlines, including privacy intrusion, algorithmic discrimination, fake news dissemination, and so forth. Another more subtle yet possibly more dangerous outcome of the long-term use of algorithm-enabled applications is the filter bubble, sometimes also called the information cocoon or echo chamber, in which we only encounter information and opinions that resemble and reinforce our own. Precisely “if you like this, you will like that”, fueling the polarization of public opinion. This is one of the key phenomena studied by scholars and researchers in the field of media and communication, like Joanne, the speaker of the webinar that inspired this blog. Drawing on the content she shared in the talk, I will take this opportunity to bring “AI + media + China” into focus and probe the intertwinement of technology, ethics, and politics.
As Joanne pointed out, in China AI has been implemented in nearly all stages of media production, including information collection, content production, content recommendation, and distribution. In the more traditional media industry, AI news anchors are no longer new and chatbots are already basics. CCTV (China Central Television, the state broadcaster) has, for instance, hired several generations of AI news anchors since 2018, such as Kang Xiaohui (康晓辉, 2018), Xiao Hao (小浩, 2018), Xiao Meng (小萌, 2019), and Xin Xiaowei (新小薇, 2020). (Xiao/小, in case you are wondering, means “little” in Mandarin.)
On social media platforms, on the other hand, AI can be used to conduct data mining and construct knowledge graphs through real-time information acquisition, and thereby build a smart system for information collection. One typical example is Weibo’s “Hawk Eye” system. Feeding on the platform’s enormous content while combining sophisticated algorithms and real-time computation, “Hawk Eye” can help the editorial team model the brewing, outbreak, and spreading process of a trending hot spot. Based on the evaluation and analysis of the views, likes, and reposts of certain issues, Weibo can identify potential trending hot spots and recommend them to users almost instantly. Furthermore, based on our search history, viewing history, the content we like, the time we spend on certain hashtags, and whom we follow (basically all the traces we leave on the platform), plus some, if not all, of our personal information, such as gender, location, or education level, the system can recommend highly personalized content to each of us, in order to make us spend more time there. More users and more time mean, for the platform, more revenue and power. Therefore, assuming we allocate limited time to social media per day, the platforms try to offer the content we might like the most to compete for our time. Developing the most accurate algorithm accordingly becomes the key to success. Yet how accurate are they now? Some find those algorithms closer to artificial stupidity than intelligence, while many deem them creepy mind-readers that know us all too well.
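The engagement-weighted trend detection described above can be sketched in a few lines. This is a toy illustration only: the signal weights and the recency decay are invented for the example and bear no relation to Weibo’s actual “Hawk Eye” implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    views: int
    likes: int
    reposts: int
    age_hours: float  # time since posting

def trend_score(p: Post) -> float:
    # Hypothetical weights: reposts signal spread more strongly than
    # likes, likes more than views; an exponential decay favors
    # fresh content over older posts.
    engagement = 1.0 * p.views + 5.0 * p.likes + 20.0 * p.reposts
    return engagement * math.exp(-p.age_hours / 12.0)

posts = [
    Post("pizza", views=10_000, likes=300, reposts=50, age_hours=30.0),
    Post("breaking-news", views=4_000, likes=400, reposts=600, age_hours=1.0),
]
trending = max(posts, key=trend_score)
print(trending.topic)  # the fresher, heavily reposted post wins
```

Real systems combine far more signals and learn the weights from data, but even this toy ranking shows how reposts and recency can dominate raw view counts when surfacing a “hot spot”.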
Then, what is the harm? Wouldn’t it be nice if we could always get what we like? From a personal perspective, to begin with, there is the risk of privacy intrusion. Many platforms, for the sake of their own benefit, collect personal information that is clearly protected as private. The Facebook-Cambridge Analytica data scandal is a classic example in this respect: the personal data of millions of Facebook users was collected without consent and used for political advertising, such as Donald Trump’s presidential campaign. In recent years, more and more laws have come out regulating the collection of personal information. Yet even if companies abide by those laws, there is always the risk of outlaws hacking, leaking, or selling our data. Indeed, nowhere to run. Apart from concerns over privacy, algorithms may also discriminate against us. COMPAS, used in US court systems, the hiring algorithm developed by Amazon, and the discriminatory pricing algorithms pervasively embedded in platforms like Trip.com (the largest online travel agency in China), Taobao.com (the largest online shopping site in China), and Didi (the largest vehicle-for-hire company in China) are clear examples of algorithmic discrimination on the basis of ethnicity, gender, and social class respectively. Privacy intrusion and algorithmic discrimination are probably the two problems that have received the most attention from the general public and the research community. In fact, we individual users are also exposed to many other risks, including but not limited to addiction, deception, and manipulation.
From a social perspective, the aforementioned problem of the filter bubble constitutes one of the major concerns. It is perhaps not a big deal if bubble A and bubble B merely champion different trivial opinions, like whether pizza goes well with pineapple. Yet what if they concern religious beliefs, perceptions of certain ethnicities, or political orientations? More personalized algorithmic recommendation will, without intervention, result in a more polarized society. In addition, algorithms are employed not only in personal recommendations but also in other scenarios that pose significant challenges to our society. For instance, precisely as Joanne pointed out, algorithmic censorship is a lucrative business. The suppression of society could be even heavier if it works hand in hand with algorithm-facilitated propaganda. Nonetheless, one could also argue that algorithms play important roles in protecting public safety when they serve as content moderators, constantly screening the platforms and filtering out pornography, violence, hate speech, or other malicious content. So, some could argue that censorship and content moderation are basically the same thing, given that they rely on identical techniques. Yet if we take a closer look, the power dynamics of the two almost contrast with each other. Removing child pornography aims to protect minors (the powerless), while blocking citizens’ complaints about the government consolidates the powerful. What content is considered malicious, and who has the power to decide, become the core questions that should be asked before judging the removal of some content as censorship or content moderation.
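The feedback loop behind the filter bubble can be illustrated with a deliberately simplified simulation. Everything here is hypothetical: content items and a user’s interest are points on a one-dimensional “opinion axis”, each round the system recommends the items nearest to the user’s interest, and the interest drifts toward what was just consumed.

```python
import random

random.seed(0)
# 200 hypothetical content items, each a position on an opinion axis [-1, 1].
items = [random.uniform(-1.0, 1.0) for _ in range(200)]

def recommend(interest: float, k: int = 5) -> list[float]:
    # "If you like this, you will like that": pick the k items
    # closest to the user's current interest.
    return sorted(items, key=lambda x: abs(x - interest))[:k]

interest = 0.1  # a mildly opinionated user
for _ in range(20):
    consumed = recommend(interest)
    # The user's interest drifts toward the content just consumed,
    # which in turn shapes the next round of recommendations.
    interest = 0.7 * interest + 0.3 * sum(consumed) / len(consumed)

# The catalog spans almost the whole opinion axis, yet the user
# ends up being shown only a narrow sliver of it.
catalog_spread = max(items) - min(items)
bubble_spread = max(recommend(interest)) - min(recommend(interest))
print(f"catalog spread: {catalog_spread:.2f}, bubble spread: {bubble_spread:.3f}")
```

The toy loop never pushes the user anywhere extreme; it simply keeps narrowing what they see, which is precisely why the bubble is hard to notice from the inside.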
And…the way out
We have talked about some of the most substantial risks that AI can invite, from both personal and social perspectives. Notably, I, like Joanne in her talk, adopt the narrow definition of AI, namely algorithms constructed via machine learning technologies that aim to replace humans in single, repetitive tasks. The ethical concerns over artificial general or super intelligence, although important and intriguing, are beyond the scope of this blog. The recommendations discussed here therefore focus on addressing the problems of narrow AI; to put it more simply, they are recommendations for algorithm governance.
To tackle the issues of privacy intrusion and algorithmic discrimination, many institutions have issued ethical guidelines for AI development. Although privacy and fairness are always included as core principles, those guidelines, given their weakly binding nature, are accused of being mere “ethics-washing”, namely a company’s fabricated or exaggerated interest in fair and ethical AI systems. Hence, we need our governments to issue more strongly binding regulations. However, it is believed that excessive regulation will kill technological innovation and hence hamper economic development. When to regulate and how tight regulation should be are therefore significant decisions for policymakers. Here we will not go down the rabbit hole and deliver a harangue on what makes a perfect government. Instead, let’s take a look at how AI, or rather the algorithm, is regulated in China.
In the Chinese context, before 2020 the whole Internet industry enjoyed rapid and wild development. Since regulations were relatively loose, many social scandals occurred while the tech giants were busy fighting for market share and revenue with their technological innovations, including AI systems. For instance, data leaks kept hitting the headlines, and discriminatory pricing algorithms became almost common practice across the e-commerce industry. Yet after entering the new decade (perhaps since the IPO of Ant Group), it seems that the government has determined to tighten regulation. Apart from the comprehensive Personal Information Protection Law, the government issued the first-ever regulation specifically targeting algorithmic recommendation systems, the Provisions on Internet Information Service Algorithmic Recommendation Management (互联网信息服务算法推荐管理规定). It came into effect in March 2022; its Article 17 states that “algorithmic recommendation service providers should provide users with service options that are not based on their personal characteristics; or provide users with convenient options to turn off algorithmic recommendation services.” Further, Article 21 stipulates that “algorithmic recommendation service providers who sell commodities or provide services to consumers shall protect consumers’ rights of fair transaction and shall not, based on consumers’ preferences, trading habits and other characteristics, use algorithms to carry out illegal acts such as unreasonable differential treatment on transaction prices and other transaction conditions.” In tandem with other regulatory tools such as anti-trust investigations and administrative punishments, these new laws are expected to reduce the unethical uses of algorithms in China, although it will take more time to see how effective they are. After all, will you turn off the algorithmic recommendation services?
Perhaps more will turn them off on e-commerce sites considering the risk of discriminatory pricing, yet how many will turn them off on social media and video platforms? If not so many, how can we really get out of the filter bubble?
Either way, Chinese netizens may welcome these regulatory initiatives, since they now have more options in terms of living with algorithmic recommendations or not, although finding that turn-off option might feel like a game of hide-and-seek. This is perhaps the advantage of having a government that is more powerful than the tech giants. Yet meanwhile, that power, exactly like AI, can also be used for censorship, suppression, and propaganda. So, is AI in worse or better hands in China?
Junhua Zhu is a doctoral researcher working at the Centre for East Asian Studies, University of Turku. Previously he received his master’s degree from Lund University and currently his research focuses on AI ethics, particularly in the Chinese context.