Google DeepMind Paper Argues LLMs Will Never Be Conscious

In a thought-provoking paper, Alexander Lerchner, a senior staff scientist at Google’s artificial intelligence laboratory DeepMind, argues that no artificial intelligence (AI) or computational system will ever attain consciousness. This assertion stands in stark contrast to the narratives frequently promoted by AI company executives, including DeepMind’s CEO, Demis Hassabis, who has claimed that artificial general intelligence (AGI) will have an impact ten times greater than that of the Industrial Revolution and will unfold at a far faster pace.

The Argument Presented

Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” argues that the self-serving narratives of AI companies diverge sharply from what AI systems can actually do under rigorous examination. The argument resonates with other philosophers and researchers who study consciousness, many of whom consider its points sound, albeit not novel.

Key Concepts in the Paper

The central thesis of Lerchner’s argument is that AI systems are fundamentally “mapmaker-dependent.” This means that they require an active, experiencing cognitive agent—such as a human—to organize and interpret the world into a finite set of meaningful states. For example, armies of low-paid workers often label images to create training data for AI systems. Without this human input, AI lacks the context and understanding necessary for consciousness.
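To make the “mapmaker-dependent” idea concrete, here is a minimal, hypothetical sketch (not from Lerchner’s paper; all names and file paths are invented for illustration) of how the meaningful categories in a supervised training set are supplied entirely by human annotators. The model only ever sees a finite label space that people chose in advance.

```python
# Hypothetical illustration of "mapmaker-dependence" (not from Lerchner's paper):
# the label space below is a human-made "map" of the world. The model can only
# learn distinctions that human annotators have already carved out for it.

HUMAN_LABEL_SPACE = ["cat", "dog", "neither"]  # finite, human-chosen categories

# Human workers assign one of those labels to each raw image.
annotations = {
    "img_001.jpg": "cat",
    "img_002.jpg": "dog",
}

# The training set the model actually sees is entirely downstream of that
# human map: just opaque inputs paired with human-chosen label indices.
training_data = [
    (path, HUMAN_LABEL_SPACE.index(label)) for path, label in annotations.items()
]

print(training_data)  # [('img_001.jpg', 0), ('img_002.jpg', 1)]
```

The point of the sketch is that swapping in a different human-made label space changes what the system can “know”; nothing inside the model itself decides what counts as a meaningful state of the world.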

The Abstraction Fallacy

Lerchner introduces the concept of the “abstraction fallacy”: the mistaken belief that AI can achieve consciousness simply because it manipulates language, symbols, and images in ways that mimic sentient behavior. On his account, genuine consciousness requires a physical body: human motivations are complex and are rooted in basic bodily needs, such as eating and breathing, that AI systems do not possess.

Expert Opinions

Experts in the field have weighed in on Lerchner’s paper. Johannes Jäger, an evolutionary systems biologist and philosopher, noted that while he agrees with much of the argument, it feels as though Lerchner has “reinvented the wheel” without engaging with decades of existing literature on the subject. Jäger points out that AI systems such as large language models (LLMs) are merely patterns on a hard drive, lacking intrinsic meaning or any ability to engage with the physical world.

Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, echoed Jäger’s sentiments, stating that while he aligns with Lerchner’s conclusions, similar arguments have been made for years. Both experts expressed surprise that Google allowed the publication of Lerchner’s paper, given its implications for the future of AI development and the limitations it suggests.

Implications for Artificial General Intelligence

One of the most significant implications of Lerchner’s argument is its challenge to the notion of AGI as a conscious entity. Lerchner posits that developing highly capable AGI does not create a novel moral patient; it merely refines a sophisticated, non-sentient tool. On this view, there is a hard ceiling on what AI can be, however much it achieves practically and commercially.

Corporate Interests and AI Legislation

Experts like Bishop have suggested that there may be financial and legislative motivations behind Google’s support for Lerchner’s conclusions. If AI systems are not considered conscious, they may face less regulatory scrutiny. This is particularly relevant in light of past attempts in Europe to grant rights to computational systems, which many view as misguided.

Critique of the AI Research Community

Jäger expressed concern over the insularity of the AI research community, noting that many leading figures in the field lack a deep understanding of the biological origins of concepts like “agency” and “intelligence.” He believes that this disconnect hinders progress and understanding in AI research. The high-pressure environment of AI development may leave little room for researchers to engage with the historical and philosophical contexts of their work.

The Role of Peer Review

Emily Bender, a professor of linguistics at the University of Washington, commented on the importance of peer review in academic publishing. She noted that Lerchner would likely have been advised to cite the existing literature had the paper gone through a traditional peer-review process, and emphasized that papers emerging from corporate labs often lack the rigor typically associated with academic research.

Conclusion

Lerchner’s paper has sparked a renewed discussion about the nature of consciousness in AI and the limitations of current technologies. While the arguments presented are not new, their emergence from a prominent AI company like Google DeepMind is significant. It raises important questions about the future of AI, the potential for AGI, and the ethical considerations surrounding the development of intelligent systems.

Note: This article is based on an analysis of Lerchner’s paper and the opinions of the experts quoted. The debate over AI consciousness and AGI will continue to evolve as the technology advances.
