China’s race to dominate AI may be hindered by its censorship agenda



summary
Summary

Peter Gostev, Head of AI at Moonpig, found an easy way to get a Chinese language model (LLM) to talk about taboo topics like the Tiananmen Square incident.

Gostev manipulated DeepSeek’s public chatbot by mixing languages and swapping out certain words. He would reply in Russian, then translate his message back into English, tricking the AI into talking about the events in Tiananmen Square. Without this method, the chatbot would simply delete all messages on sensitive topics, Gostev said.

Video: Peter Gostev via LinkedIn

Gostev’s example illustrates China’s dilemma of wanting to be a world leader in AI, but at the same time wanting to exert strong control over the content generated by AI models (see below).

Ad

Ad

Controlling the uncontrollable

But if the development of language models has shown one thing, it is that they cannot be reliably controlled. This is due to the random nature of these models and their massive size, which makes them complex and difficult to understand.

Even the Western industry leader OpenAI sometimes exhibits undesirable behavior in its language models, despite numerous safeguards.

In most cases, simple language commands, known as “prompt injection,” are sufficient – no programming knowledge is required. These security issues have been known since at least GPT-3, but until now, no AI company has been able to get a handle on them.

Simply put, the Chinese government will eventually realize that even AI models it has already approved can generate content that contradicts its ideas.

How will it deal with this? It is difficult to imagine that the government will simply accept such mistakes. But if it doesn’t want to slow AI progress in China, it can’t punish every politically inconvenient output with a model ban.

Recommendation

Leave a Comment

Your email address will not be published. Required fields are marked *

Scroll to Top