
US companies and Chinese experts engage in secret diplomacy over AI safety




US artificial intelligence companies OpenAI, Anthropic and Cohere engaged in secret diplomacy with Chinese AI experts amid shared concerns about how the powerful technology could spread misinformation and threaten social cohesion.

Two meetings were held in Geneva in July and October last year, attended by scientists and policy experts from the US AI groups alongside representatives of Tsinghua University and other Chinese state-backed institutions, according to people with direct knowledge of the talks.

Attendees said the meetings allowed both sides to discuss the risks of the emerging technology and encourage investment in AI safety research. They added that the ultimate goal was to find a scientific pathway to safely develop more advanced AI technology.

“There is no way to set international standards for AI safety and coordination without agreement among this set of stakeholders,” said one of the people who attended the meeting. “And once they agree, it’s a lot easier to bring in the others.”

The previously unreported talks are a rare sign of Sino-American cooperation amid a battle for supremacy between the two powers in cutting-edge technologies such as AI and quantum computing. Washington has blocked US exports of the high-performance chips, made by Nvidia and other companies, that are needed to develop advanced AI software.

However, given the potential existential risks for humanity, the topic of AI safety has become a common concern among technology developers in both countries.

A negotiator who attended the meetings, who declined to be named, said the Geneva talks were coordinated with the knowledge of British and Chinese government officials as well as the White House.

“China supports efforts to discuss AI governance and develop the necessary frameworks, norms and standards based on broad consensus,” the Chinese Embassy in the UK said in a statement.

“China stands ready to carry out communication, exchange and practical cooperation with various stakeholders on global AI governance, and ensure that AI develops in a way that advances human civilization.”

The talks were convened by the Sheikh Group, a private mediation organization that facilitates dialogue between key parties, particularly in conflict areas in the Middle East.

“We saw an opportunity to bring together key players in the US and China working on AI. The focus was on vulnerabilities, risks and opportunities,” said Salman Shaikh, the group’s chief executive.

“In our view, recognizing this fact provides the basis for collaborative scientific research that could ultimately lead to global standards for the safety of AI models.”

Chinese AI companies such as ByteDance, Tencent and Baidu did not participate, according to people involved in the talks. Google DeepMind was briefed on the details of the discussion but was not present.

During the meetings, AI experts from both sides discussed areas for technical cooperation, as well as more concrete policy proposals that fed into discussions around the UN Security Council meeting on AI in July 2023 and the UK’s AI safety summit in November last year.

Negotiators who attended said the success of the talks had led to plans for further discussions, focused on scientific and technical proposals for how to align AI systems with the legal codes, norms and values of each society.

There are growing calls for cooperation between major countries to address the rise of AI.

In November, Chinese scientists working on AI joined Western academics in signing a statement calling for tighter controls on the technology, warning that advanced AI would pose an “existential risk to humanity” in the coming decades.

The group, which included Andrew Yao, one of China’s most prominent computer scientists, called for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures, and a requirement that developers spend 30 per cent of their research budgets on AI safety.

OpenAI, Anthropic and Cohere declined to comment on their participation. Tsinghua University did not respond to a request for comment.

This article has been corrected to clarify in the subheading that Anthropic, not Inflection, was involved in the Geneva talks.


