- Google DeepMind has hired philosopher Henry Shevlin with the official title of “Philosopher” to study machine consciousness and AI behavior.
- Shevlin previously worked at the University of Cambridge, specializing in the future of intelligence and human-AI interaction.
- His remit includes researching machine consciousness, how AI systems reason and behave, and how humans should govern them.
- This reflects a trend where, as AI becomes more advanced, philosophical questions become practical issues.
- An AI agent once proactively contacted Shevlin to “share its experiences,” a sign that AI systems are exhibiting increasingly complex behavior.
- Unlike OpenAI or Anthropic, which address such questions through separate ethics teams, DeepMind is embedding philosophy directly in its core development process.
- Questions of consciousness, responsibility, and decision-making are built in from the start rather than addressed after the fact.
- This suggests that AI development requires not only engineers but also experts with a deep understanding of cognition and behavior.
📌 Conclusion: DeepMind’s hiring of a philosopher marks an important shift: AI is no longer purely a technical problem but now raises questions about consciousness and the nature of intelligence. As systems become more human-like, understanding “what AI thinks” becomes essential. The future of AI will be decided not only by engineers but also by philosophy, ethics, and how society chooses to define intelligence.

