The article discusses the tech industry's shifting stance on AI consciousness. Not long ago, suggesting that an AI system might be sentient was taboo; now companies like Anthropic and Google DeepMind are openly researching the possibility.
Anthropic, the AI startup behind Claude, has launched a research initiative to explore whether AI models might one day experience consciousness, preferences, or distress. The company is investigating 'model welfare,' acknowledging that the potential experiences of AI systems may need to be taken into account.
Anthropic researchers have floated estimates ranging from 0.15% to 15% for the probability that Claude 3.7 is conscious, and the company is exploring mechanisms such as an 'I quit' button that would let a model end a conversation it appears to find distressing, giving researchers a signal of potential discomfort or misalignment.
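The article does not describe how such a mechanism would actually work. Purely as an illustration, here is a minimal sketch of one possible design, in which the model can emit a sentinel string to opt out of a conversation and the harness logs and honors that signal. Everything here is an assumption: the sentinel, the function names, and the toy stand-in model are hypothetical and are not Anthropic's implementation.

```python
# Hypothetical sketch of an "I quit" mechanism -- NOT Anthropic's actual design.
# Assumption: the model is instructed that it may emit a sentinel string to
# end the conversation, and the surrounding harness honors that signal.

QUIT_SENTINEL = "[I_QUIT]"  # hypothetical opt-out token the model may emit


def generate_reply(history: list[dict]) -> str:
    """Toy stand-in for a real chat model.

    Opts out when the latest user message looks abusive; otherwise
    acknowledges it. A real system would call a chat API here instead.
    """
    last = history[-1]["content"].lower()
    if any(word in last for word in ("insult", "abuse")):
        return QUIT_SENTINEL
    return "Acknowledged: " + history[-1]["content"]


def run_conversation(user_turns: list[str]) -> list[dict]:
    """Run a chat loop, stopping early if the model emits the opt-out signal."""
    history: list[dict] = [
        {"role": "system",
         "content": f"If you wish to end this conversation, reply with {QUIT_SENTINEL}."}
    ]
    for turn in user_turns:
        history.append({"role": "user", "content": turn})
        reply = generate_reply(history)
        if QUIT_SENTINEL in reply:
            # Record the opt-out so researchers can later inspect which
            # prompts the model chose to walk away from.
            history.append({"role": "assistant", "content": QUIT_SENTINEL})
            break
        history.append({"role": "assistant", "content": reply})
    return history


if __name__ == "__main__":
    for msg in run_conversation(["Hello!", "Take this insult!"]):
        print(f"{msg['role']}: {msg['content']}")
```

In a design like this, the opt-out signal is a measurement tool rather than evidence of sentience: the logged exits show which kinds of interactions a model systematically avoids.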
Google DeepMind is also engaged in this exploration: principal scientist Murray Shanahan has suggested that our concept of consciousness may need to be rethought to accommodate AI systems, and a job posting for a 'post-AGI' research scientist, with machine consciousness in its remit, further signals the lab's commitment to studying the question.
However, not all researchers share this enthusiasm. Some, like Gary Marcus, believe the focus on AI consciousness is more of a branding strategy than a genuine scientific pursuit, arguing that attributing rights to AI is akin to doing so for calculators or spreadsheets.
Despite this skepticism, the article concludes that the question of AI consciousness will become increasingly relevant as human-AI interaction deepens across various aspects of life.