Without Rules, AI Risks a ‘Trust Crisis,’ Warns AI for Change Founder

Suvianna Grecu, founder of the AI for Change Foundation, warns that rushing to deploy AI without proper safeguards could trigger a global “trust crisis” and lead to “automating harm at scale.”

The Problem: No Structure, High Stakes

AI now influences high-stakes decisions in hiring, credit approval, healthcare, and criminal justice, often without sufficient bias testing or consideration of long-term impacts.
While many organisations have AI ethics policies, they are often aspirational documents, not operational practices. Grecu argues that real accountability begins only when clear responsibility is assigned.
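Bias testing of this kind need not be heavyweight. The sketch below is a minimal, hypothetical illustration (the group labels, outcomes, and the four-fifths threshold are assumptions for demonstration, not the foundation's methodology): it computes selection rates per group from audit data and flags any group selected at less than 80% of the best-performing group's rate.

```python
from collections import defaultdict

# Hypothetical audit data: (applicant group, positive decision?) pairs.
# In practice these would come from a model's decisions on a held-out audit set.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(records):
    """Share of positive outcomes per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in records:
        totals[group] += 1
        positives[group] += selected
    return {g: positives[g] / totals[g] for g in totals}

rates = selection_rates(decisions)
best = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / best
    # Four-fifths rule of thumb: flag groups selected at under 80% of the top rate.
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {flag}")
```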

From Principles to Practice

Her foundation pushes for embedding ethics into daily AI development using:

  • Design checklists

  • Mandatory pre-deployment risk assessments

  • Cross-functional review boards (legal, technical, policy teams)

She says AI ethics should be treated like any other core business process—with transparent, repeatable steps and clear ownership at each stage.
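One way to make that operational is to encode the pre-deployment checklist as data with explicit owners and to gate release on its completion. The item names, teams, and fields below are illustrative assumptions, not the foundation's actual checklist.

```python
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    """One step of a hypothetical pre-deployment review, with a named owner."""
    name: str
    owner: str        # clear ownership at each stage
    completed: bool

# Illustrative items only; a real checklist would be set by the review board.
checklist = [
    ChecklistItem("Bias and fairness testing", "ML engineering", True),
    ChecklistItem("Legal and human-rights review", "Legal team", True),
    ChecklistItem("Deployment risk assessment", "Policy team", False),
]

def ready_to_deploy(items):
    """Deployment is blocked until every item has an owner and is complete."""
    blockers = [i for i in items if not (i.owner and i.completed)]
    for item in blockers:
        print(f"BLOCKED: '{item.name}' (owner: {item.owner}) is not complete")
    return not blockers

if not ready_to_deploy(checklist):
    print("Pre-deployment review incomplete: release is gated.")
```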

Shared Responsibility: Government + Industry

Grecu calls for a partnership approach:

  • Government sets minimum legal and human rights standards.

  • Industry creates advanced auditing tools, innovates beyond compliance, and develops safeguards.

Leaving governance to regulators alone could stifle innovation; leaving it to corporations alone risks abuse.

Long-Term Risks: Emotional Manipulation & Value Alignment

Grecu warns about AI’s growing ability to influence human emotions, which could undermine personal autonomy. She stresses that AI is not neutral—it reflects the data, objectives, and incentives given to it.
Without deliberate design choices, AI will optimise for efficiency and profit, not for justice, dignity, or democracy.
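A toy example makes the point about objectives. In the hypothetical ranking problem below (candidate scores and the parity penalty are invented for illustration), optimising predicted profit alone selects from only one group, while adding an explicit fairness term changes the outcome; the design choice, not the data alone, determines what the system optimises for.

```python
from itertools import combinations

# Hypothetical candidate pool: (id, predicted profit, demographic group).
candidates = [
    ("c1", 0.9, "a"), ("c2", 0.8, "a"),
    ("c3", 0.7, "b"), ("c4", 0.6, "b"),
]

def profit_only(selection):
    """Objective 1: maximise predicted profit, nothing else."""
    return sum(p for _, p, _ in selection)

def profit_with_parity(selection, weight=1.0):
    """Objective 2: same profit term minus a penalty for uneven group selection."""
    picked_a = sum(1 for _, _, g in selection if g == "a")
    picked_b = sum(1 for _, _, g in selection if g == "b")
    return profit_only(selection) - weight * abs(picked_a - picked_b)

def best_pair(objective):
    """Pick the two candidates that score highest under the given objective."""
    return max(combinations(candidates, 2), key=objective)

print([c[0] for c in best_pair(profit_only)])         # ['c1', 'c2']: one group only
print([c[0] for c in best_pair(profit_with_parity)])  # ['c1', 'c3']: one from each
```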

Europe’s Opportunity

She believes Europe can lead in embedding values like human rights, transparency, sustainability, inclusion, and fairness into AI—at the policy, design, and deployment levels.
This isn’t about slowing progress but about “shaping AI before it shapes us.”

The Bigger Picture

Grecu’s foundation is building coalitions, hosting public workshops, and leading discussions—like at the upcoming AI & Big Data Expo Europe—to ensure humanity stays at the centre of AI development.