Rise of AI presents dual challenges on security front
While the frontier technology is improving police work, it is also being abused by criminals, fraudsters
While police use of artificial intelligence is achieving impressive results in analyzing data and solving cases, the technology is also being used to commit new crimes, presenting fresh challenges for law enforcement.
In October, the Kunshan public security bureau in Jiangsu province said its AI team had played a crucial role in the detection and prevention of crimes, as the cutting-edge technology is able to promptly recognize suspicious activities and issue warnings.
The bureau highlighted the remarkable role of AI in combating telecom fraud, revealing one case earlier this year in which its AI team helped a local resident who had been cheated out of 980,000 yuan ($135,300).
The victim's statement was transmitted to the AI team's analysis center. Within 10 minutes, they had traced the flow of funds, successfully halting the transfer of 500,000 yuan and leading to the capture of nine suspects.
In Kunming, Yunnan province, however, police officers recently thwarted an attempted scam in which the suspect used AI face-swapping software to impersonate a close friend of a local woman surnamed Wang. The suspect planned to lure her to Guangdong province to deliver gold bars worth more than 300,000 yuan under the pretext of an emergency, according to a Guangming Daily report.
"The cases tell us that the safety of AI is as important as its development and application, which needs greater attention from all walks of life and stronger oversight to prevent the misuse and abuse of the technology," said Zheng Ning, head of the Law Department at the Communication University of China's Cultural Industries Management School.
She praised the wide application of AI in saving time when searching for and analyzing information, but also emphasized the importance of properly supervising the technology.
Public concerns about security, privacy and authenticity related to AI have been growing rapidly, "prompting many countries, including China, to keep pace with its development and draw the boundaries of what can be done and what must not be done," she added.
New risks
In October, a series of videos that used AI to imitate the voice of Lei Jun, founder and CEO of Chinese tech giant Xiaomi, went viral online, with the fake Lei seen commenting on hot social issues.
The real Lei said he was troubled by the videos, adding "I don't think using AI in this way is a good thing".
It was not the first time that someone has felt aggrieved after his or her voice was imitated by AI without permission.
In April, the Beijing Internet Court heard a lawsuit in which a voice-over artist surnamed Yin claimed that her voice had been used without her consent in audiobooks circulating online. The voice was processed by AI, the lawsuit said.
Yin sued five companies, including a cultural media corporation that had provided recordings of her voice without authorization, an AI software developer, and a voice-dubbing app operator.
After an investigation, the court found that the cultural media company sent Yin's recordings to the software developer without her permission. The software company then used AI to mimic Yin's voice and offered the AI-generated products for sale.
Zhao Ruigang, vice-president of the court and also the presiding judge in the case, said that the AI-powered voice mimicked Yin's vocal characteristics, intonation and pronunciation style to a high degree, adding "this level of similarity allowed for the identification of Yin's voice."
Citing the Civil Code, he ruled that the conduct of the cultural media company and the AI software developer constituted an infringement of Yin's rights to her voice.
The two defendants were ordered to pay her 250,000 yuan in compensation. The other companies, however, were not held liable for the infringement as they unknowingly used the AI-generated voice products, he said.
After announcing the verdict, Zhao said that the growing use of AI technology across various fields had raised new risks regarding personal rights, and called for tighter supervision of technology service providers and platforms under specific provisions of current laws.
Fraudsters have also taken advantage of AI-generated rumors about disasters and diseases, which have disturbed order in cyberspace and caused public panic.
The number of economic and enterprise rumors generated by AI increased by 99 percent in the past year, according to a report released by a Tsinghua University research center in April.