Despite reportedly tracking and ranking its own citizens, China now says AI research should respect people’s privacy

A new document on ethics in AI, written by several top Chinese organisations, has called for researchers to respect people’s privacy, autonomy, and freedom.

Despite having rolled out several incredibly invasive technologies that track citizens’ every action, China has just published a document calling for artificial intelligence (AI) research to protect people’s privacy and freedom.

Read also: China reportedly made an app to show people if they’re standing near someone in debt — a new part of its intrusive ‘social credit’ policy

The Beijing AI Principles, unveiled on Saturday (May 25), listed 15 ethical standards for AI research. These include using AI to promote the sustainable development of nature and society, reducing discrimination and bias in AI systems, and being “cautious” about promoting AI applications that could have a major impact on employment.

According to a statement by the Beijing Academy of Artificial Intelligence (BAAI), the document was jointly authored by BAAI, top Chinese universities Peking and Tsinghua, the Chinese Academy of Sciences, and an AI industry alliance that includes tech giants Alibaba and Tencent.

The document said AI research should “sufficiently respect” values such as privacy, dignity, freedom, autonomy, and rights, and should not be used to harm users.

It added that users must be informed about – and consent to – how an AI system affects their rights and interests, and that people should be trained to adapt to the “psychological, emotional and technical” impacts of AI.

It also called on researchers and developers to consider the potential ethical, legal, and social impacts of their products.

BAAI director Zeng Yi told state news agency Xinhua that the document reflected China’s willingness to engage in dialogue with the rest of the world.

BAAI’s Principles document also called for international cooperation to establish a “global and comprehensive AI governance ecosystem” that would help humanity avoid creating a “malicious” AI race.
