Philosopher Claims LLMs Distract From True AI Consciousness Potential

Photo by Jimmy Elizarraras on Pexels

While Large Language Models (LLMs) may be captivating, they are a distraction from the real potential for machine consciousness, argues philosopher Keith Frankish. Known for his work on illusionism, Frankish contends that the limited, text-bound worldview of current LLMs makes them a ‘red herring’ in the broader pursuit of truly conscious artificial intelligence. He emphasizes, however, that the limitations of LLMs do not preclude the emergence of sophisticated, potentially conscious machines in the future. Frankish further suggests that as AI evolves, particularly if it takes the form of convincing, self-sustaining, world-facing robots, society will need to grapple with extending moral consideration to these advanced entities. This discussion stems from a recent conversation on the Artificial Intelligence subreddit. [Reddit Post: https://old.reddit.com/r/artificial/comments/1nda6x4/keith_frankish_illusionism_and_its_implications/]