OpenAI chief executive Sam Altman recently raised concerns that advanced artificial intelligence tools could be misused to engineer a pandemic on the scale of COVID-19. The warning came in an interview with Tucker Carlson, during a discussion of the potential downsides of AI, in which Altman noted that the growing application of AI in biological research carries significant risks if not properly managed.
He emphasised the rapid improvement of AI models in biology, which could make it possible to simulate or design pathogens with dangerous efficiency. While these same capabilities could accelerate breakthroughs in medicine, such as drug discovery and genetic research, the possibility of misuse has been highlighted as a pressing challenge for the industry.
He also offered examples of how AI systems are already shaping societal behaviour in subtle ways, observing that the stylistic patterns of large language models (LLMs) have begun influencing human communication and illustrating how the widespread adoption of AI can create unintended cultural effects.
Experts have previously cautioned that AI-driven tools capable of generating protein structures or simulating genetic engineering processes hold dual-use potential. On one hand, they can drive scientific progress; on the other, they could be exploited for harmful purposes, including the engineering of pathogens capable of triggering a man-made pandemic.
The issue of AI accountability continues to dominate debate within the technology sector, with industry leaders stressing the need for safeguards to prevent misuse while ensuring that innovation can proceed responsibly.