Rise of AI presents dual challenges on security front
While the frontier technology is improving police work, it is also being abused by criminals, fraudsters
Regulatory controls
The large amount of polluted information on AI platforms and the abuse of AI-generated content have made more people realize that what they see is not necessarily real, prompting demands for stronger oversight and more comprehensive management of the technology.
In August 2023, China issued an interim regulation to manage AI-generated services and products, aiming to safeguard national security and protect people's legitimate rights and interests, while promoting development of the technology.
The regulation, which was jointly formulated by seven authorities, including the Cyberspace Administration of China, the Ministry of Public Security and the Ministry of Science and Technology, highlights the protection of personal data and intellectual property.
It requires AI-generated service providers to improve the accuracy and reliability of generated information and to label such content. The interim regulation also requires regular security assessments as well as measures to prevent juveniles from becoming addicted to AI services.
In August this year, the European Union's Artificial Intelligence Act took effect. The law adopts a "risk-based approach" to products and services that use AI, meaning the riskier an AI application is, the more scrutiny it faces.
The law stipulates that AI-generated deepfake pictures, video and audio of existing people, places and events must be labeled as artificially manipulated.
Earlier, Brussels also suggested broader AI rules, while some US states are working on their own AI legislation.
Similar legislative guardrails are being considered in countries around the world, as well as by global groups such as the United Nations and the Group of Seven industrialized nations.
Striking a balance
Compared with the rules and guidelines made by other nations and organizations, China's AI management is more like a toolkit, said Zhu Wei, deputy head of the Communication Law Research Center at the China University of Political Science and Law.
"Instead of putting all AI-related content into an individual law, our AI management can be seen in many laws and rules, such as the Civil Code and the interim regulation," he said.
"We're building a legal system to develop, manage and supervise AI."
Zheng, from the Communication University of China, said that China's management of AI is more flexible: it not only clarifies the boundaries for operators and users, but also leaves them more space for innovation and development.
"The bottom line is to neither damage national security and data security, nor bring damage to others," she said.
The development and application of AI technology cannot violate the National Security Law, the Data Security Law, the Cybersecurity Law and the Law on Personal Information Protection, according to Zheng.
"Under such a legal framework, it is crucial to refine rules for AI development in some major areas, such as finance, education, transportation and medical care to meet more needs of the people and industries," she said.
"It's not easy to achieve a balance between the security and development of the emerging technology, so it's urgent and necessary for more walks of life, including government agencies, judicial authorities, internet platforms and the public, to jointly participate in the management," she added.
Wang Sixin, a professor of internet law at the Communication University of China, said that seeking this balance is not achievable overnight. It requires long-term effort and refinement, as there are many uncertainties in technological development.
"Technological developments allow us to discover new problems and also urge us to find ways to solve them," he said.
"The management and supervision of AI and its generative content is always on the move. It's a process that constantly needs to be improved," he said. "Therefore, relevant rules or provisions should be made flexibly and not be too detailed."
Wang compared AI to a knife that can cut both vegetables and meat. "The key is what the person using the knife wants to do," he said, adding that this is why management and oversight in China focus more on enhancing the legal and security awareness of AI developers, operators and users.