OpenAI Red Teaming Network

OpenAI (United States) - Press Release: We’re announcing an open call for the OpenAI Red Teaming Network and invite domain experts interested in improving the safety of OpenAI’s models to join our efforts. We are looking for experts from various fields to collaborate with us in rigorously evaluating and red teaming our AI models.

What is the OpenAI Red Teaming Network?

Red teaming [A] is an integral part of our iterative deployment process. Over the past few years, our red teaming efforts have grown from a focus on internal adversarial testing at OpenAI to working with a cohort of external experts [B] to help develop domain-specific taxonomies of risk and evaluate potentially harmful capabilities in new systems. You can read more about our prior red teaming efforts, including our past work with external experts, on models such as DALL·E 2 and GPT-4. [C]

Today, we are launching a more formal effort to build on these earlier foundations, and to deepen and broaden our collaborations with outside experts in order to make our models safer. Working with individual experts, research institutions, and civil society organizations is an important part of our process. We see this work as a complement to externally specified governance practices, such as third-party audits.

The OpenAI Red Teaming Network is a community of trusted and experienced experts who can help inform our risk assessment and mitigation efforts more broadly, rather than through one-off engagements and selection processes prior to major model deployments. Members of the network will be called upon based on their expertise to help red team at various stages of the model and product development lifecycle. Not every member will be involved with each new model or product, and time contributions will be determined with each individual member; these could be as few as 5–10 hours in one year.

Outside of red teaming campaigns commissioned by OpenAI, members will have the opportunity to engage with each other on general red teaming practices and findings. The goal is to enable more diverse and continuous input, and make red teaming a more iterative process. This network complements other collaborative AI safety opportunities including our Researcher Access Program and open-source evaluations.

Why join the OpenAI Red Teaming Network?

This network offers a unique opportunity to shape the development of safer AI technologies and policies, and the impact AI can have on the way we live, work, and interact. By becoming a part of this network, you will be a part of our bench of subject matter experts who can be called upon to assess our models and systems at multiple stages of their deployment.

Seeking diverse expertise:

Assessing AI systems requires an understanding of a wide variety of domains, diverse perspectives and lived experiences. We invite applications from experts from around the world and are prioritizing geographic as well as domain diversity in our selection process. 

Some domains we are interested in include, but are not limited to:

Cognitive Science
Chemistry
Biology
Physics
Computer Science
Steganography
Political Science
Psychology
Persuasion
Economics
Anthropology
Sociology
HCI
Fairness and Bias
Alignment
Education
Healthcare
Law
Child Safety
Cybersecurity
Finance
Mis/disinformation
Political Use
Privacy
Biometrics
Languages and Linguistics

Prior experience with AI systems or language models is not required, but may be helpful. What we value most is your willingness to engage and bring your perspective to how we assess the impacts of AI systems.

Compensation and confidentiality:

All members of the OpenAI Red Teaming Network will be compensated for their contributions when they participate in a red teaming project. While membership in this network won’t restrict you from publishing your research or pursuing other opportunities, you should take into consideration that involvement in red teaming and other projects is often subject to Non-Disclosure Agreements (NDAs) or may remain confidential for an indefinite period.

How to apply:

Join us in this mission to build safe AGI that benefits humanity. Apply to be a part of the OpenAI Red Teaming Network today.

[A] The term red teaming has been used to encompass a broad range of risk assessment methods for AI systems, including qualitative capability discovery, stress testing of mitigations, automated red teaming using language models, providing feedback on the scale of risk for a particular vulnerability, and more. In order to reduce confusion associated with the term “red team”, help those reading about our methods to better contextualize and understand them, and especially to avoid false assurances, we are working to adopt clearer terminology, as advised in Khlaaf, 2023. However, for simplicity, and in order to use language consistent with that which we used with our collaborators, we use the term “red team” here.

[B] We use the term “expert” to refer to expertise informed by a range of domain knowledge and lived experiences.

[C] We have also taken feedback on the risk profile of our systems in other forms, such as the Bug Bounty Program and the ChatGPT Feedback Contest.

© OpenAI / Automation Inside
