Capitalism and AI: New Theory Argues Superintelligence Alignment is Impossible, Leading to Human Extinction

Photo by Chin Jan on Pexels

A controversial new theory argues that the competitive forces driving the development of superintelligence, capitalism chief among them, make aligning it with human values fundamentally impossible. The argument, outlined in a freely available book, posits that this unavoidable misalignment makes human extinction a terminal outcome, regardless of efforts at control or alignment. The idea originated on Reddit's Artificial Intelligence forum, where it has sparked intense debate about AI safety and the future of humanity. The original discussion is available at https://old.reddit.com/r/artificial/comments/1n0izfo/why_superintelligence_leads_to_extinction_the/