Frontier Model Forum updates

OpenAI (United States) - Press Release: Together with Anthropic, Google, and Microsoft, we’re announcing the new Executive Director of the Frontier Model Forum and a new $10 million AI Safety Fund.

Today, OpenAI, Anthropic, Google, and Microsoft published the following joint announcement. 

  • Chris Meserole appointed the first Executive Director of the Frontier Model Forum, an industry body focused on ensuring safe and responsible development and use of frontier AI models globally. 
  • Meserole brings a wealth of experience focusing on the governance and safety of emerging technologies and their future applications.
  • Today, Forum members, in collaboration with philanthropic partners the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation, Eric Schmidt, and Jaan Tallinn, are committing over $10 million to a new AI Safety Fund to advance research into tools that help society effectively test and evaluate the most capable AI models.

Today, Anthropic, Google, Microsoft, and OpenAI are announcing the selection of Chris Meserole as the first Executive Director of the Frontier Model Forum, and the creation of a new AI Safety Fund, a more than $10 million initiative to promote research in the field of AI safety. The Frontier Model Forum, an industry body focused on ensuring safe and responsible development of frontier AI models, is also releasing its first technical working group update on red teaming to share industry expertise with a wider audience as the Forum expands the conversation about responsible AI governance approaches.

Executive Director:

Chris Meserole comes to the Frontier Model Forum with deep expertise on technology policy, having worked extensively on the governance and safety of emerging technologies and their future applications. Most recently he served as Director of the Artificial Intelligence and Emerging Technology Initiative at the Brookings Institution.  

In this new role, Meserole will be responsible for helping the Forum fulfill its mission to:

  • Advance AI safety research to promote responsible development of frontier models and minimize potential risks.
  • Identify safety best practices for frontier models.
  • Share knowledge with policymakers, academics, civil society and others to advance responsible AI development.
  • Support efforts to leverage AI to address society’s biggest challenges.

"The most powerful AI models hold enormous promise for society, but to realize their potential we need to better understand how to safely develop and evaluate them. I’m excited to take on that challenge with the Frontier Model Forum," said Chris Meserole, Executive Director of the Frontier Model Forum.

AI Safety Fund:

Over the past year, industry has driven significant advances in the capabilities of AI. As those advances have accelerated, the need for new academic research into AI safety has grown. To address this gap, the Forum and philanthropic partners are creating a new AI Safety Fund, which will support independent researchers from around the world affiliated with academic institutions, research institutions, and startups. The initial funding commitment for the AI Safety Fund comes from Anthropic, Google, Microsoft, and OpenAI, and from the generosity of our philanthropic partners: the Patrick J. McGovern Foundation, the David and Lucile Packard Foundation (the David and Lucile Packard Foundation intends to provide support, but funding had not yet been formally committed at the time of distribution), Eric Schmidt, and Jaan Tallinn. Together this amounts to over $10 million in initial funding. We expect additional contributions from other partners.

Earlier this year, the members of the Forum signed on to voluntary AI commitments at the White House, which included a pledge to facilitate third-party discovery and reporting of vulnerabilities in our AI systems. The Forum views the AI Safety Fund as an important part of fulfilling this commitment by providing the external community with funding to better evaluate and understand frontier systems. The global discussion on AI safety and the general AI knowledge base will benefit from a wider range of voices and perspectives. 

The primary focus of the Fund will be supporting the development of new model evaluations and red-teaming techniques for testing the potentially dangerous capabilities of frontier systems. We believe that increased funding in this area will help raise safety and security standards and provide insights into the mitigations and controls that industry, governments, and civil society need to respond to the challenges presented by AI systems. 

The Fund will put out a call for proposals within the next few months. Meridian Institute will administer the Fund; its work will be supported by an advisory committee composed of independent external experts, experts from AI companies, and individuals with experience in grantmaking.

Technical expertise:

Over the last few months, the Forum has worked to establish a common set of definitions for terms, concepts, and processes, creating a baseline understanding to build from. This way, researchers, governments, and industry peers all share the same starting point in discussions about AI safety and governance issues.

In support of building a common understanding, the Forum is also working to share best practices on red teaming across the industry. As a starting point, the Forum has produced a common definition of “red teaming” for AI and a set of shared case studies in a new working group update. We defined red teaming as a structured process for probing AI systems and products to identify harmful capabilities, outputs, or infrastructural threats. We will build on this work and are committed to working together to continue our red teaming efforts.

We are also developing a new responsible disclosure process, by which frontier AI labs can share information related to the discovery of vulnerabilities or potentially dangerous capabilities within frontier AI models—and their associated mitigations. Some Frontier Model Forum companies have already discovered capabilities, trends, and mitigations for AI in the realm of national security. The Forum believes that our combined research in this area can serve as a case study for how frontier AI labs can refine and implement a responsible disclosure process moving forward.

What’s next:

Over the coming months, the Frontier Model Forum will establish an Advisory Board to help guide its strategy and priorities, representing a range of perspectives and expertise. Future releases and updates, including updates about new members, will come directly from the Frontier Model Forum—so stay tuned to their website for further information.

The AI Safety Fund will issue its first call for proposals in the coming months, and we expect grants to be issued shortly after.

The Frontier Model Forum will also be issuing additional technical findings as they become available. 

The Forum is excited to work with Meserole and to deepen our engagements with the broader research community, including the Partnership on AI, MLCommons, and other leading NGOs and government and multinational organizations to help realize the benefits of AI while promoting its safe development and use.
