Disinformation, deepfakes, and the crisis of public trust

Artificial intelligence has not created the crisis of democratic trust, but it has dramatically accelerated the erosion of that trust. In digital ecosystems where information circulates at scale and speed, AI-driven disinformation and synthetic media now operate as structural stressors on democratic safeguards. Governance challenges no longer concern isolated incidents of false content, but the systemic weakening of trust, accountability, and institutional legitimacy.

Understanding this shift requires viewing AI governance not only as a regulatory exercise, but as a defensive architecture for democratic resilience, grounded in cybersecurity, digital forensics, and evidence-based oversight.

Disinformation as a governance problem

Disinformation is often treated as a content moderation issue. In reality, it is a governance failure that manifests when institutions are unable to preserve informational integrity under adversarial pressure.

AI systems amplify this failure by enabling scalable manipulation: automated content generation, micro-targeted narratives, and adaptive influence campaigns that evolve faster than institutional responses. In such environments, truth becomes contestable not because evidence disappears, but because the capacity to verify, contextualise, and trust evidence collapses.

Cybersecurity and governance intersect here. Disinformation campaigns frequently exploit compromised accounts, breached datasets, and platform vulnerabilities, blurring the boundary between information operations and cyber operations.

Deepfakes and the collapse of evidentiary certainty

Deepfakes represent a qualitative shift in the threat landscape. Unlike traditional disinformation, synthetic media attacks the evidentiary foundations of democratic discourse. When audio, video, and images become plausibly deniable, the burden of proof shifts from the producer of falsehood to the recipient of information.

From a digital forensics perspective, this erosion of evidentiary certainty has profound implications. Democratic institutions rely on shared standards of proof (journalistic verification, legal admissibility, public accountability). Deepfakes undermine these standards by introducing perpetual doubt even when content is authentic: genuine recordings can now be plausibly dismissed as fabrications, the so-called "liar's dividend".

AI governance must therefore address not only detection technologies, but also institutional processes for evidence validation, contestation, and response.
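
To illustrate the "detection technologies" half of that equation, the sketch below shows one standard forensic building block: perceptual hashing, which can link a circulating image back to a known-authentic original even after re-encoding or light editing. This is a minimal sketch, assuming the third-party Python packages Pillow and ImageHash; the file names and distance threshold are illustrative placeholders, not a prescription.

```python
# Minimal sketch: matching a suspect image to a known-authentic original.
# Assumes the third-party packages Pillow and ImageHash are installed.
from PIL import Image
import imagehash

def likely_derived(original_path: str, suspect_path: str,
                   max_distance: int = 8) -> bool:
    """Return True if the suspect image is plausibly a re-encoded or
    lightly edited derivative of the known-authentic original.

    Unlike cryptographic hashes, perceptual hashes change only slightly
    under compression or resizing, so a small Hamming distance between
    the two hashes suggests a common source image."""
    original_hash = imagehash.phash(Image.open(original_path))
    suspect_hash = imagehash.phash(Image.open(suspect_path))
    return (original_hash - suspect_hash) <= max_distance

# Hypothetical usage:
# likely_derived("press_photo_original.jpg", "viral_copy.jpg")
```

The governance point stands either way: such a check only matters if an electoral body or newsroom maintains an authentic reference archive to compare against, which is an institutional process, not a purely technical one.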

Public trust as a security asset

Public trust is often discussed as a social value. In democratic systems, it is also a strategic security asset. Trust enables compliance with law, acceptance of electoral outcomes, and legitimacy of public decisions.

AI-enabled disinformation directly targets this asset. When citizens can no longer distinguish between authentic and manipulated content, scepticism becomes generalised. This environment benefits adversarial actors, who require not belief in false narratives, but disbelief in any authoritative account.

Cybersecurity incidents, data breaches, and opaque algorithmic practices further compound this dynamic. Each failure reinforces the perception that institutions are either incapable or untrustworthy stewards of digital systems.

The role of digital forensics in democratic defence

Digital forensics provides a critical counterweight to AI-driven manipulation. By preserving artefacts, reconstructing provenance, and validating authenticity, forensic methods restore accountability to the information ecosystem.
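
As a concrete illustration of "preserving artefacts" and "validating authenticity", the sketch below records a cryptographic hash and acquisition timestamp for a media artefact, then re-verifies its integrity later. This is a minimal sketch using only Python's standard library; the evidence-log format and file names are hypothetical, not a reference to any particular forensic toolkit.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file in chunks so large media files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def acquire(artefact: Path, log: Path) -> None:
    """Record the artefact's hash and acquisition time in an evidence log."""
    entry = {
        "artefact": artefact.name,
        "sha256": sha256_of(artefact),
        "acquired_at": datetime.now(timezone.utc).isoformat(),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")

def verify(artefact: Path, log: Path) -> bool:
    """Re-hash the artefact and compare against the logged value."""
    current = sha256_of(artefact)
    for line in log.read_text().splitlines():
        entry = json.loads(line)
        if entry["artefact"] == artefact.name:
            return entry["sha256"] == current
    return False  # artefact was never logged

# Hypothetical usage:
# acquire(Path("campaign_video.mp4"), Path("evidence_log.jsonl"))
# verify(Path("campaign_video.mp4"), Path("evidence_log.jsonl"))
```

Real forensic practice adds chain-of-custody signatures and write-once storage, but even this simple pattern makes silent tampering detectable.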

However, forensic capability alone is insufficient without governance integration. Evidence must be actionable within institutional frameworks: electoral bodies, courts, regulators, and media organisations must be equipped to interpret and act on forensic findings rapidly and transparently.

This integration transforms forensics from a reactive tool into a pillar of democratic governance.

Governing AI in democratic contexts

AI governance frameworks increasingly recognise risks to fundamental rights and democratic processes. Yet governance must move beyond high-level principles toward operational safeguards:

  • traceable content provenance (see the sketch after this list),
  • mandatory disclosure of synthetic media in political contexts,
  • auditability of recommendation and amplification algorithms,
  • institutional readiness to respond to information integrity crises.
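
The first safeguard, traceable content provenance, is being standardised in schemes such as C2PA content credentials. The sketch below is a deliberately simplified, hypothetical illustration of the core idea: a publisher binds a signed manifest, including a synthetic-media disclosure flag, to the exact bytes of a media file, so that any later alteration or missing disclosure is detectable. HMAC with a shared demo key stands in for the public-key signatures a real deployment would use.

```python
import hashlib
import hmac
import json

# Hypothetical shared key for the demo; a real provenance scheme (e.g.
# C2PA) uses public-key signatures so verifiers never hold the secret.
SIGNING_KEY = b"publisher-demo-key"

def make_manifest(media_bytes: bytes, producer: str, synthetic: bool) -> dict:
    """Bind a disclosure manifest to the exact bytes of the media file."""
    payload = {
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
        "producer": producer,
        "ai_generated": synthetic,  # mandatory disclosure flag
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify_manifest(media_bytes: bytes, manifest: dict) -> bool:
    """Check the signature, then check the hash still matches the media."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    body = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(signature, expected):
        return False  # manifest was tampered with or forged
    return claimed["sha256"] == hashlib.sha256(media_bytes).hexdigest()

# Hypothetical usage:
media = b"...video bytes..."
manifest = make_manifest(media, producer="Campaign HQ", synthetic=True)
assert verify_manifest(media, manifest)
assert not verify_manifest(media + b"edit", manifest)
```

A platform enforcing the second safeguard could then decline to amplify political content whose manifest is missing, fails verification, or omits the disclosure flag.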

Without such measures, AI systems risk becoming unaccountable participants in democratic decision-making, shaping perception without responsibility.

Conclusion

The erosion of democratic safeguards in the age of AI is not an abstract concern. It is a measurable outcome of disinformation, deepfakes, and cybersecurity failures converging within fragile governance structures.

Effective AI governance must therefore be understood as democratic defence. By embedding cybersecurity resilience, digital forensic accountability, and institutional oversight into AI lifecycles, democracies can resist the corrosive effects of synthetic manipulation and restore trust where it matters most.

Without this integration, the question is no longer whether AI will influence democratic processes, but whether democratic systems can govern that influence before trust collapses entirely.

