The rapid deployment of artificial intelligence without robust governance could lead to a significant “trust crisis,” warns Suvianna Grecu, Founder of the AI for Change Foundation. Grecu emphasizes the need for immediate and decisive action to prevent the “automation of harm at scale,” arguing that speed should not come at the expense of safety and ethical considerations.
Grecu advocates embedding ethics directly into the AI development process through practical tools such as design checklists and risk assessments. Crucially, she stresses the need for clear accountability and transparent procedures to ensure AI is developed and deployed responsibly.
To navigate the complexities of AI governance, Grecu calls for collaboration between governments, which can establish the necessary legal frameworks, and industry, which has the technical expertise and agility to innovate responsibly. She further emphasizes the need for value-driven technology that champions human rights, transparency, and fairness, ensuring that AI serves humanity's best interests rather than market demands alone.