A significant gap in public trust is slowing down the broader adoption of Artificial Intelligence, according to a new study. The report, a collaboration between the Tony Blair Institute for Global Change (TBI) and Ipsos, underscores that skepticism remains a key barrier, preventing many from fully embracing generative AI.
The research points to a divided public: just over half of respondents have tried generative AI tools in the past year, while nearly half have not used them at all. Usage and trust appear closely linked, with frequent AI users far more likely to hold positive views.
Public confidence in AI also differs across demographics. Younger people are typically more optimistic, while older individuals exhibit more reservations. Tech professionals feel better prepared for the AI revolution than those working in healthcare or education.
The report also finds that the type of AI application shapes public opinion. Applications with clear societal benefits, such as optimizing traffic or improving cancer diagnosis, enjoy greater acceptance. Conversely, uses like employee surveillance or targeted political advertising raise concerns. This underscores the need for transparency, ethical safeguards, and accountability in AI deployment.
The TBI report proposes several approaches to building ‘justified trust’ in AI. These include emphasizing concrete AI advantages, demonstrating positive effects on public services, and creating strong regulatory frameworks and training programs. The aim is to equip the public to engage with AI confidently and productively, rather than feeling overwhelmed by it.