OpenAI's disbanding of its safety team draws questions! Musk: it shows safety is not their priority

2024-05-20 07:22 Source: Global Network

[Global Times special correspondent Ren Chong] The internal turmoil at American artificial intelligence research company OpenAI continues. According to a CNBC report on the 17th, a source said that OpenAI has disbanded "Superalignment," its team focused on the long-term risks of AI, only a year after announcing its creation, with some team members reassigned to other teams within the company. When OpenAI announced the team last July, it said the team would focus on "scientific and technical breakthroughs to steer and control AI systems much smarter than us," and that it would dedicate 20% of its computing power to the effort over four years. Commenting on the news of the Superalignment team's dissolution, Tesla CEO Elon Musk said: "This shows that safety is not a priority for OpenAI."

Just days before the news broke, two senior OpenAI figures, co-founder and chief scientist Ilya Sutskever and Jan Leike, announced their departures from the company. The two were co-leads of the Superalignment team. On the 17th, Leike said in a social media post: "I joined OpenAI because I thought it would be the best place to do this research. However, I had been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point." Leike wrote: "Over the past few months my team has been sailing against the wind. At times we were struggling for computing resources, and it was getting harder and harder to get this crucial research done." Leike warned that OpenAI must become a "safety-first AI company": "Building machines smarter than humans is an inherently dangerous endeavor. OpenAI shoulders an enormous responsibility. But over the past years, safety culture and processes have taken a backseat to 'shiny products.'"

"Why was the OpenAI team responsible for human security disbanded?" The US Vaux.com said on the 17th that in November last year, the board of directors of OpenAI tried to dismiss the CEO Altman, but Altman soon regained power. Since then, at least five of the company's most security conscious employees have resigned or been dismissed. The Wall Street Journal of the United States said that Sutzkwell focused on ensuring that artificial intelligence would not harm humans, while others, including Altman, were more eager to promote the development of new technologies. According to Wired magazine, Sutzkwell was one of the four board members who fired Altman last November.

A company source told Vox that safety-conscious employees had lost confidence in Altman: "It's a process of trust collapsing bit by bit, like dominoes falling one by one." They believe Altman claims to put safety first, but his behavior contradicts it.

Technology blog TechCrunch said on the 18th that OpenAI set aside safety research in favor of launching new products such as GPT-4o, which ultimately led to the resignation of the Superalignment team's two leads. It remains unclear when, or whether, the technology industry will achieve the breakthroughs needed to create artificial intelligence capable of any task a human can perform. But the dissolution of the Superalignment team seems to confirm one thing: OpenAI's leadership, especially Altman, chose to prioritize products over safety measures. Over the past year, OpenAI has let its chatbot store fill up with spam and has reportedly scraped data from YouTube in violation of that platform's terms of service... Safety appears to occupy a secondary position at the company, a conclusion that a growing number of safety researchers have reached, choosing to take their work elsewhere.

Leike also wrote on the 17th: "We should be devoting far more resources to getting ready for the next generations of models, on security, monitoring, alignment, confidentiality, societal impact, and related topics. These problems are hard to get right, and I am concerned we aren't on a trajectory to get there." As Vox put it, when one of the world's leading AI safety experts says the world's leading AI company is not on the right track, we all have reason to be worried.

According to Canada's Global News on the 18th, an international report on artificial intelligence safety released on the 17th said experts disagree about the risks posed by the technology. The report, chaired by Yoshua Bengio, scientific director of Mila, the Quebec Artificial Intelligence Institute, concluded that the future trajectory of general-purpose AI is highly uncertain. In the near term, the report says, AI could follow a wide range of trajectories, with outcomes ranging from very positive to very negative. It outlines several risks, including the possibility that AI could cause harm through disinformation, fraud, and cyberattacks. Bias in AI systems could also create risks, particularly in "high-stakes areas such as healthcare, recruitment, and financial lending." Another scenario is that humans could lose control of artificial intelligence.

(Editor in charge: Ma Changyan)