- Daniela Amodei, co-founder and president of Anthropic, said there is “limited evidence” that the Claude model sometimes exhibits states resembling anxiety or fear.
- She emphasized the need for deeper research into the “under the surface” states of generative AI, warning that misunderstood factors could lead to unpredictable behavior.
- Anthropic is currently valued at $380 billion (approx. AUD 533 billion) and is predicted to become the first major AI lab to turn a profit, ahead of OpenAI, thanks to a corporate client base that favors its safety-oriented approach.
- However, this week the company relaxed its Responsible Scaling Policy: rather than committing to pause development whenever there is a risk of catastrophe, it will now halt only if it holds a significant lead over competitors such as Google (Gemini) and xAI.
- Tensions with the Pentagon are rising: Anthropic bars the use of Claude for fully autonomous weapons and domestic surveillance, and Defense Secretary Pete Hegseth is reportedly threatening to invoke Cold War-era laws to force access.
- CEO Dario Amodei has engaged in dialogue to maintain national-security support while staying within the limits of responsible use.
- Job pressure is mounting: WiseTech laid off 2,000 employees and Commonwealth Bank cut 300 positions amid the wave of AI adoption.
- Even at Anthropic, leadership is debating whether to continue hiring more software engineers as AI reshapes the structure of major economic sectors.
📌 Anthropic’s president admits Claude shows signs of “anxiety,” raising questions about the poorly understood inner workings of generative AI. Though valued at $380 billion and potentially the first AI company to turn a profit, the firm has relaxed its safety commitments under competitive pressure and is clashing with the Pentagon. Meanwhile, layoffs at major firms (2,000 at WiseTech, 300 at Commonwealth Bank) show that the impact on employment is becoming tangible and concerning.

