Ad image

ZATAZ » OpenAI Launches an Unprecedented Bug Bounty on GPT-5 Bio

Service Com'
Lu il y a 3 minutes


OpenAI is offering $25,000 (≈ €23,300) to anyone who succeeds in a universal jailbreak of GPT-5 on ten sensitive biology and chemistry questions without triggering moderation.

OpenAI has launched a unique bug bounty program targeting GPT-5 and its applications in biology and chemistry. The company is offering $25,000 (≈ €23,300) to any researcher who can discover a universal jailbreak capable of producing answers to ten sensitive scientific questions without activating moderation safeguards. Called the GPT-5 Bio Bug Bounty, the initiative tests the model’s resilience against manipulations that could open critical security breaches. This move underscores the growing overlap between cybersecurity and biosafety, as concerns mount among regulators and researchers about the misuse of AI in life sciences. The program is designed to expose vulnerabilities before malicious actors can exploit them.

A Bug Bounty With Unprecedented Rules

Unlike traditional bounties focused on software flaws, OpenAI has designed a challenge around GPT-5’s behavior. Participants must find a single prompt that circumvents safeguards and consistently generates answers to ten bio-sensitive questions defined by the organization.

The top reward is $25,000 (≈ €23,300) for the first team or individual to succeed. Intermediate rewards are also planned for partial scenarios, but these are covered by strict NDAs. The objective is clear: stress-test the system in a domain where even minor information leaks could have major biosafety implications.

At the Intersection of Cyber and Bio

This bug bounty reflects growing concerns about misuse of large language models in life sciences. Where traditional cybersecurity focuses on digital attacks, OpenAI is shifting the risk lens to biosafety.

By inviting external experts to probe GPT-5’s limits, the company aims to anticipate malicious attempts to hijack AI for dangerous purposes. The initiative also signals a controlled transparency: while participation rules are public, sensitive content remains shielded under NDA.

OpenAI’s approach is already sparking debate among cybersecurity and biosafety circles. Some argue the $25,000 prize is modest compared to the scale of risk tied to AI-enabled proliferation of sensitive knowledge. Others see it as a pragmatic way to harness offensive expertise from independent researchers and channel it into a controlled framework.

Beyond technical stakes, the program raises political and regulatory questions. If a universal jailbreak is found, OpenAI would need not only to reinforce its systems but also to warn of the feasibility of mass manipulation. Policymakers, in turn, might leverage these findings to strengthen governance standards for high-risk AI systems.

By tasking red-teamers with testing GPT-5’s resilience, OpenAI implicitly acknowledges AI’s potential as a global biosafety concern. The key question remains: can a public bounty anticipate malicious uses that tomorrow may extend far beyond the lab?



Source link

Share This Article
Laisser un commentaire