OpenAI has warned that its next wave of artificial intelligence systems could elevate cybersecurity threats, saying the models may soon be capable enough to help craft zero-day exploits or support sophisticated digital intrusions.
In an update published on Wednesday, the company said that as its models become more capable, they could potentially assist attackers in breaking into highly secured networks or coordinating complex operations against enterprise and industrial systems. Because of that risk, OpenAI said it is increasing efforts to train and deploy systems that can support defensive cybersecurity, including tools that allow security teams to review code, identify weaknesses, and speed up patching processes.
To limit potential misuse, the Microsoft-backed firm outlined a layered security strategy built around strict access controls, hardened infrastructure, outbound-traffic monitoring, and additional protective checks. OpenAI also plans to roll out a tiered access program that will give vetted researchers and organisations working in cyber defence access to more advanced features that are otherwise restricted.
The company also announced the creation of the Frontier Risk Council, a new advisory group that will bring experienced security professionals into ongoing discussions about how to identify and manage frontier risks associated with emerging AI capabilities.
The council will begin by concentrating on cybersecurity challenges but may expand to other areas where the most advanced AI systems could pose safety concerns. The latest measures reflect a growing industry focus on balancing the benefits of more capable models with the need to prevent their exploitation by threat actors.