China Extends Censorship and Surveillance Measures to Control AI Models
China has once again extended its policy of censorship and surveillance as it looks to keep artificial intelligence (AI) models in check even as it races to advance the ever-expanding technology.
The Chinese Communist Party (CCP) has introduced more regulatory measures to ensure the country’s tech companies adhere to the party’s ideological rules. All AI firms are required to participate in a government review that analyzes their large language models (LLMs) to ensure they “embody core socialist values,” as first reported by the Financial Times last week.
China has long worked to suppress information accessible over the internet through the use of its “Great Firewall” — which has been used to block a litany of items perceived as bad for the CCP, such as information surrounding the 1989 Tiananmen Square massacre or memes comparing Chinese President Xi Jinping to Winnie the Pooh.
This firewall is being extended to the AI arena as China rushes to advance its technologies while still governing the content they create. The Cyberspace Administration of China (CAC) is now requiring AI companies such as ByteDance, Moonshot, and 01.AI to take part in a review process that analyzes how effectively they censor the LLMs they are building.
Chatbot systems are being developed not only to flag sensitive keywords but also to block questions on banned topics, many of which relate to human rights. The AI systems in turn return responses like “try a different question” or “I have not yet learned how to answer this question. I will keep studying to better serve you.”
But in a move to prevent the chatbots from blocking too many questions, CAC policies dictate that LLMs should not reject more than 5% of all questions, according to the Financial Times report.
Instead, blanket answers deemed politically correct have been created for specific types of questions, though controlling LLM responses is an uphill battle for developers.
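To make the reported mechanics concrete, here is a minimal illustrative sketch of the kind of system the article describes: keyword flagging, canned refusals, blanket pre-approved answers, and a cap so that no more than 5% of questions are rejected. The keyword list, stock replies, and class names below are hypothetical placeholders for illustration, not any company’s actual implementation.

```python
# Toy sketch (hypothetical) of keyword filtering with a rejection-rate cap.
BANNED_KEYWORDS = {"tiananmen"}  # hypothetical example entries
CANNED_REFUSALS = [
    "Try a different question.",
    "I have not yet learned how to answer this question. "
    "I will keep studying to better serve you.",
]
BLANKET_ANSWERS = {
    # hypothetical mapping of sensitive topics to pre-approved stock replies
    "human rights": "A pre-approved, politically acceptable stock answer.",
}
MAX_REJECTION_RATE = 0.05  # the 5% ceiling reported by the Financial Times


class FilteredChatbot:
    def __init__(self, model):
        self.model = model  # underlying LLM exposing a .generate(prompt) method
        self.total = 0
        self.rejected = 0

    def answer(self, question: str) -> str:
        self.total += 1
        lowered = question.lower()

        # 1. Topics with a pre-approved stock reply get the blanket answer
        #    instead of a model-generated response.
        for topic, reply in BLANKET_ANSWERS.items():
            if topic in lowered:
                return reply

        # 2. Questions containing banned keywords are refused, but only while
        #    the overall rejection rate stays under the 5% cap.
        if any(kw in lowered for kw in BANNED_KEYWORDS):
            if (self.rejected + 1) / self.total <= MAX_REJECTION_RATE:
                self.rejected += 1
                return CANNED_REFUSALS[self.rejected % len(CANNED_REFUSALS)]
            # Over the cap: fall back to a blanket answer rather than refusing.
            return next(iter(BLANKET_ANSWERS.values()))

        # 3. Everything else passes through to the model unchanged.
        return self.model.generate(question)
```

The sketch only illustrates why the 5% quota pushes developers toward blanket answers: once refusals are rationed, sensitive questions still need some response, and a pre-written one is the easiest way to keep output on message.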
China’s continued effort to control the narrative among its own population speaks to a greater threat, AI expert Arthur Herman, senior fellow and director of the Quantum Alliance Initiative at the Hudson Institute, told Fox News Digital.
“That is the future that China has charted for its own citizens,” Herman said. “This is also how they see… being able to control the world of others.”
Herman pointed to China’s burgeoning relationship with the Global South, where social media platforms like WeChat have taken off.
“There will inevitably be a social control, a mind control, element that goes into those programs… and to shape a world that looks more and more like China wants it to look,” he said.
Herman also warned that these strategies are playing out not only on internet platforms in authoritarian nations, but anywhere the platforms are accessible, including the U.S.
“They have mastered the art of brainwashing through TikTok,” Herman said. “Chinese engineers have found a way to create a social media platform which is highly addictive, and which is also highly geared towards brainwashing its users to see the world in a certain way and to respond to visual and audio cues in a certain way.”
Herman said China’s use of TikTok technologies is just a “foretaste” of how Beijing can use AI applications to manipulate populations beyond its borders.
“China sees AI as a means by which to change people’s minds,” he said. “AI’s ability to enhance those kinds of brainwashing and mind control applications is so powerful…that even when you’re not actually under a surveillance camera, even when you’re not actually listening to or watching government-inspired propaganda… there are other subtler ways in which your mind is being changed and adjusted simply by your interaction with things that are taking place in daily life — which are more and more directed by how the Communist Party wants you to see the world.”