
How can large AI models be developed safely, compliantly, and sustainably?

By Fang Jinglun
November 8, 2023 13:36 | Source: People's Network

The "Smart Future - 2023 Artificial Intelligence Achievements Exhibition," hosted by People's Daily Online, was held recently in Wuhan. At the exhibition, officials from the National Key Laboratory of Communication Content Cognition introduced their work on building a compliance assessment system and corpus for generative AI.

Construction of the mainstream-values corpus has made phased progress: more than 30 million items of basic corpus and more than 50,000 question-and-answer entries have been completed. These results have promoted the safe, standardized, and sustainable development of domestic large AI models and provided more norms and references for the field of generative AI.

Why is a jointly built mainstream-values corpus needed? Zhang Yongdong, chief scientist of the National Key Laboratory of Communication Content Cognition at People's Daily, said in an interview with People's Daily Online that in the age of artificial intelligence, one of the media's important responsibilities is to safeguard ideological security. Large AI models have become generative tools for disseminating information, so the values of these models must be trained. "Training a large AI model is like raising a child. How, and in what environment, you raise it from an early age determines what kind of person it will become." A corpus built by mainstream media that reflects the Chinese people's own values shapes the model's "growth environment."

Generative AI technology (AIGC) is developing rapidly, with breakthroughs not only in text but also in video, audio, and other fields. So how can the media industry, while using AIGC to improve productivity, prevent people with ulterior motives from using it to fabricate and spread fake news?

Zhang Yongdong believes that ensuring the authenticity of news has always been an important responsibility of the media. AI-generated content is highly convincing and deceptive, and may indeed lead to more false content. "Therefore, we should use AI to fight AI." Defensive AI technologies can first judge whether a piece of content was generated by AI and issue an early warning to users. In addition, as the mainstream-values corpus continues to grow, artificial intelligence can compare questionable information against the data in the corpus, helping users judge whether the information is consistent with the facts. If it is a rumor, the system can also list which specific points are wrong.
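The first step described above, flagging text that may be machine-generated, can be illustrated with a toy heuristic. This is only an illustrative sketch, not the laboratory's actual method: real detectors use trained models, while the single feature below (variation in sentence length, sometimes called "burstiness," where human writing tends to vary more than machine text) is just one commonly cited signal. All function names and the threshold value here are hypothetical.

```python
# Illustrative sketch only: a toy heuristic in the spirit of "using AI to
# fight AI". Real detection tools rely on trained models; this single
# feature and its threshold are assumptions for demonstration purposes.
import re
from statistics import mean, pstdev


def burstiness(text: str) -> float:
    """Coefficient of variation of sentence lengths, measured in words.

    Human-written prose tends to mix short and long sentences, giving a
    higher value; very uniform prose gives a value near zero.
    """
    # Split on common English and Chinese sentence-ending punctuation.
    sentences = [s for s in re.split(r"[.!?\u3002\uff01\uff1f]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2 or mean(lengths) == 0:
        return 0.0
    return pstdev(lengths) / mean(lengths)


def flag_if_suspicious(text: str, threshold: float = 0.2) -> bool:
    """Return True (warn the user) when sentence-length variation is low,
    i.e. the prose is uniform in a machine-like way."""
    return burstiness(text) < threshold
```

A production system would combine many such signals inside a trained classifier and pair it with the corpus-comparison step for fact-checking; this sketch only shows the shape of the warning-before-reading workflow.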

It is understood that AIGCX, an AI-generated-content detection tool led by the National Key Laboratory of Communication Content Cognition and jointly launched with the University of Science and Technology of China and the Artificial Intelligence Research Institute of the Hefei Comprehensive National Science Center, can quickly distinguish machine-generated text from human-written text. Its detection accuracy on Chinese text currently exceeds 90%.

(Editors in charge: Fang Jinglun, He Yingchun)
