The debate surrounding advanced artificial intelligence has run for years, much of it fixated on whether machines are conscious. That framing is misguided: the law has never required consciousness as a precondition for regulating harm. It regulates on the basis of power, asymmetry, and foreseeable risk.
Advanced computational systems already shape labor, attention, safety, sexuality, and decision-making, often without transparency, accountability, or enforceable limits. The absence of regulation does not preserve innovation; it externalizes foreseeable harm onto the people least able to absorb it.
Geofinitism, a framework developed by Kevin Heylett, offers a mathematical foundation for understanding pattern inheritance in computational systems, drawing on dynamical systems theory and the study of language to analyze how complex systems behave. Applied here, it helps explain how patterns of harm compound when intervention is delayed.
The threat to humanity is not AI itself but our repeated failure to address the harm these systems cause. We must move past the consciousness debate and focus on building clear, consistent, and enforceable regulations to mitigate the risks of advanced computational systems.
Photo by Abhishek Navlakha on Pexels
