Anthropic updates Claude’s ‘Constitution’ with sharper focus on ethics and safety

By Storyboard18 | Jan 22, 2026 9:09 AM

Anthropic has released an updated version of Claude's Constitution, the ethical framework that governs how the AI chatbot behaves, adding greater nuance around safety, ethics and user wellbeing.

The revised document was published on Wednesday, coinciding with Anthropic CEO Dario Amodei's appearance at the World Economic Forum in Davos. The company describes the Constitution as a "living document" that explains the context in which Claude operates and the kind of AI entity Anthropic aims to build.

Anthropic has long sought to differentiate itself through what it calls “Constitutional AI,” an approach in which Claude is trained using a defined set of ethical principles rather than relying primarily on human feedback. The original Constitution was published in 2023, and while the updated version preserves most of its core ideas, it expands on how those principles should be applied in practice.

When the framework was first introduced, Anthropic co-founder Jared Kaplan described it as a system in which an AI model “supervises itself” using a list of constitutional principles. According to the company, these principles guide Claude’s normative behaviour and are designed to prevent toxic or discriminatory outputs. Earlier policy documents explained that the Constitution functions as a set of natural-language instructions that collectively shape the model’s conduct.

The new Constitution reinforces Anthropic’s positioning as a more cautious and ethics-first AI company, in contrast to rivals that have leaned into faster, more controversial deployment strategies. The document runs to around 80 pages and is structured around four core values: being broadly safe, broadly ethical, compliant with Anthropic’s guidelines, and genuinely helpful.

In the safety section, Anthropic outlines how Claude is designed to avoid harmful behaviours and respond responsibly when users display signs of distress. In situations involving risk to human life, the document says Claude should direct users to appropriate emergency services or provide basic safety guidance.

Ethics form another major pillar of the framework. Rather than focusing on abstract moral philosophy, Anthropic says it wants Claude to demonstrate ethical behaviour in real-world contexts. “We are less interested in Claude’s ethical theorizing and more in Claude knowing how to actually be ethical in a specific context,” the document states.

The Constitution also defines strict boundaries on prohibited topics, including conversations related to developing biological weapons. These constraints are intended to limit misuse while preserving Claude’s usefulness across a wide range of applications.

Helpfulness is the final core value. Anthropic says Claude is designed to balance a user’s immediate requests with their long-term wellbeing, aiming to support “the long-term flourishing of the user and not just their immediate interests.”

The document concludes by raising a broader philosophical question about the moral status of AI systems themselves. “Claude’s moral status is deeply uncertain,” the authors write, adding that the possibility of AI consciousness is a serious issue that warrants consideration.
