
OpenAI is strengthening the security of its products and infrastructure by expanding its Bug Bounty program and launching new cybersecurity initiatives. Against the backdrop of a surge in advanced AI agents and growing efforts to embed them in everyday digital workflows, the company is emphasizing collaboration with researchers and partners to identify and mitigate vulnerabilities at early stages.
A pivotal update involves increasing the maximum reward in the Bug Bounty program from $20,000 to $100,000—an effort to attract top-tier experts capable of discovering rare and critical flaws that pose risks to user safety and trust. In addition, OpenAI has introduced a limited-time incentive: between March 26 and April 30, researchers will receive doubled payouts for reports that fall under high-priority categories, such as IDOR (Insecure Direct Object Reference) vulnerabilities.
Launched in April 2024, the Bug Bounty program has become a vital channel of engagement with the cybersecurity community. The latest campaign encompasses a broad range of vulnerability types and underscores OpenAI’s commitment to rewarding impactful and high-quality discoveries.
OpenAI has also announced enhancements to its cybersecurity research grant initiative. The program now embraces new domains, including the protection of AI agents, model privacy, AI-powered vulnerability remediation, and integrated security systems. Over the past two years, more than 1,000 proposals have been reviewed, resulting in funding for 28 projects. Topics under investigation include prompt injection attacks, secure code generation, and autonomous defense mechanisms.
As part of its expansion, OpenAI is introducing “microgrants”—API credits designed to help teams rapidly prototype solutions. The grant program now welcomes a broader array of applications, with special attention to priority areas such as data leak prevention and advanced threat detection.
The company has also announced plans to collaborate with both academic institutions and industry to uncover vulnerabilities in open-source software. OpenAI intends to share discovered flaws with the wider community to foster the development of AI tools capable of autonomously detecting and resolving code defects.
To bolster real-time protection of its infrastructure and models, OpenAI is expanding the deployment of its own AI models to detect and respond to threats. A notable initiative in this direction is its partnership with SpecterOps, a firm renowned for simulating realistic adversarial attacks. This collaboration will entail continuous stress testing across cloud and production environments.
Such a strategy embeds security across every layer—from internal systems to AI operational mechanics. OpenAI emphasizes that as new agents like Operator and Deep Research emerge, the need for robust resilience intensifies. To meet this demand, the company is crafting multilayered defenses involving adversarial prompt protection, enhanced access controls, cryptographic safeguards, and fortified security architectures.