Between peace and war, law and non-law, human and machine

Contemporary strategic competition is increasingly defined not by clear thresholds, but by zones of ambiguity. Artificial intelligence and cyber capabilities have expanded these grey zones, blurring distinctions that once structured international order: peace versus war, lawful versus unlawful conduct, human versus machine agency.

In this environment, power is exercised without declaration, responsibility is contested, and escalation unfolds incrementally. Understanding these dynamics requires reframing AI governance, cybersecurity, and digital forensics as instruments for navigating ambiguity, not merely tools for control.

The erosion of traditional thresholds

Classical security frameworks rely on identifiable triggers: armed attack, territorial violation, formal attribution. AI-enabled cyber operations undermine these thresholds by enabling persistent, low-intensity activity below the level of armed conflict.

Intrusions, influence operations, data manipulation, and algorithmic interference rarely qualify as acts of war, yet they produce strategic effects. They sway political outcomes, undermine economic stability, and erode institutional trust without crossing legally defined red lines. As a result, the distinction between peace and conflict becomes procedural rather than substantive.

Cyber power thus operates continuously, not episodically.

Law in the shadow of ambiguity

Legal systems are particularly strained in grey zone environments. International law, domestic regulation, and oversight mechanisms are built around attribution, intent, and proportionality—concepts that become elusive in AI-assisted operations.

Automated systems distribute agency across data pipelines, models, and human operators. Responsibility becomes fragmented, allowing actors to exploit legal uncertainty. Actions may be harmful without being clearly illegal, strategic without being openly hostile.

From an AI governance perspective, this creates a critical gap: law governs acts, while AI-enabled power often manifests as influence, optimisation, and manipulation rather than discrete acts.

Human–machine hybridity in decision-making

Grey zones are not only legal or geopolitical; they are also cognitive and organisational. AI systems increasingly support targeting, prioritisation, risk assessment, and response decisions. Even when humans remain formally in control, machine outputs shape perception and choice.

This hybridity complicates accountability. When outcomes result from human–machine interaction, responsibility is shared but unevenly visible. Decision-makers may rely on algorithmic recommendations while retaining nominal authority, creating accountability dilution rather than delegation.

Digital forensics becomes essential here, not only to analyse incidents, but to reconstruct how decisions emerged from human–machine systems under uncertainty.
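
A minimal sketch of what such reconstruction depends on: a provenance record linking the model output an operator saw to the action actually taken. The field names, identifiers, and file format below are assumptions chosen for illustration, not a reference to any existing standard or system.

```python
# Illustrative only: a minimal decision-provenance record for AI-assisted
# decisions, so that forensic analysts can later reconstruct which machine
# outputs a human operator saw before acting. Field names are assumptions.
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass
class DecisionRecord:
    operator_id: str           # human formally responsible for the decision
    model_id: str              # model and version that produced the recommendation
    model_recommendation: str  # what the system suggested
    human_action: str          # what the operator actually did
    input_digest: str          # hash of the input data the model saw
    timestamp: str             # UTC time the decision was taken


def digest_inputs(raw_inputs: bytes) -> str:
    """Fingerprint the input data so it can later be matched to preserved evidence."""
    return hashlib.sha256(raw_inputs).hexdigest()


def log_decision(record: DecisionRecord, path: str = "decision_log.jsonl") -> None:
    """Append the record as one JSON line; an append-only file keeps ordering intact."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    record = DecisionRecord(
        operator_id="analyst-017",
        model_id="triage-model-v2",
        model_recommendation="escalate to incident response",
        human_action="escalated",
        input_digest=digest_inputs(b"raw sensor feed snapshot"),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    log_decision(record)
```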

Escalation without intent

One of the most destabilising features of strategic grey zones is the possibility of escalation without deliberate intent. Automated monitoring, predictive analytics, and AI-assisted response systems compress decision time and amplify signals.

In such contexts, misinterpretation, model error, or biased data can trigger responses that exceed political intent. Escalation emerges not from aggression, but from automation-driven feedback loops interacting across adversarial systems.
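
A deliberately simplified toy model illustrates the dynamic. The two functions below stand in for opposing automated response systems; the thresholds and gains are arbitrary, and the point is only that a feedback gain above one ratchets activity upward even though neither side ever chooses to escalate.

```python
# Toy model of automation-driven escalation: two automated response systems
# each react to the other's observed activity level. Parameters are arbitrary
# and chosen only to show how a feedback gain above 1 drives activity upward
# without any deliberate decision to escalate.
def automated_response(observed_activity: float, threshold: float, gain: float) -> float:
    """Respond only above a threshold, slightly amplifying what was observed."""
    if observed_activity < threshold:
        return 0.0
    return gain * observed_activity


def simulate(steps: int = 10, initial_probe: float = 1.0) -> None:
    a_activity, b_activity = initial_probe, 0.0
    for step in range(steps):
        # Each side's automated system reacts to the other's last action.
        b_activity = automated_response(a_activity, threshold=0.5, gain=1.2)
        a_activity = automated_response(b_activity, threshold=0.5, gain=1.2)
        print(f"step {step:2d}: A={a_activity:6.2f}  B={b_activity:6.2f}")


if __name__ == "__main__":
    simulate()
```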

Governance frameworks designed for human deliberation struggle to manage these dynamics.

Cybersecurity and forensics as stabilising forces

Cybersecurity and digital forensics offer more than defensive capability; they provide interpretive stability. Forensic attribution, evidence preservation, and contextual analysis help distinguish noise from intent, accident from strategy.

However, forensic insight must be integrated into governance processes. Evidence that arrives too late, or without institutional pathways for action, cannot prevent escalation. Grey zones reward speed and ambiguity; governance must therefore prioritise traceability, contestability, and timely interpretation.
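
One way to make traceability concrete is a hash-chained evidence log, in which each entry commits to the previous one so that later alteration or deletion becomes detectable. The sketch below is illustrative only, not a prescribed forensic standard; the entry structure and names are assumptions.

```python
# Minimal sketch of one traceability mechanism: a hash-chained evidence log in
# which each entry includes the hash of the preceding entry, so tampering or
# deletion breaks the chain. Illustration only, not a prescribed standard.
import hashlib
import json
from datetime import datetime, timezone


def append_entry(chain: list[dict], description: str, artefact: bytes) -> dict:
    """Add an evidence entry that commits to the previous entry's hash."""
    prev_hash = chain[-1]["entry_hash"] if chain else "0" * 64
    body = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "description": description,
        "artefact_sha256": hashlib.sha256(artefact).hexdigest(),
        "prev_hash": prev_hash,
    }
    body["entry_hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body


def verify(chain: list[dict]) -> bool:
    """Recompute every link; any altered or missing entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in chain:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        prev_hash = entry["entry_hash"]
    return True


if __name__ == "__main__":
    chain: list[dict] = []
    append_entry(chain, "network capture from suspected intrusion", b"pcap bytes")
    append_entry(chain, "model output at time of alert", b"model output snapshot")
    print("chain intact:", verify(chain))
```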

Governing power in the grey zones

AI governance in strategic contexts cannot rely solely on compliance or risk classification. It must address how power is exercised under ambiguity.

This includes:

  • defining limits on automated decision support in security contexts (see the sketch after this list),
  • embedding forensic review into AI-assisted operations,
  • clarifying responsibility across human–machine systems,
  • coordinating legal, technical, and diplomatic responses.
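
As a rough illustration of the first of these measures, a decision-support pipeline can encode explicit autonomy limits: automated action is permitted only below a defined severity, and anything above it must be confirmed by a named human operator and is recorded either way. The severity scale, threshold, and function names below are invented for the example.

```python
# Illustrative policy gate: automated action is allowed only below a defined
# severity; anything above it requires confirmation by a named human operator
# and is recorded either way. Severity scale and limit are invented here.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    severity: int  # 1 (benign) to 5 (potentially escalatory)


AUTONOMY_LIMIT = 2  # actions above this severity require human confirmation


def execute(action: ProposedAction, human_confirmed_by: str | None = None) -> str:
    """Apply the autonomy limit and return an auditable outcome string."""
    if action.severity <= AUTONOMY_LIMIT:
        return f"AUTO-EXECUTED: {action.description}"
    if human_confirmed_by:
        return f"EXECUTED after confirmation by {human_confirmed_by}: {action.description}"
    return f"BLOCKED pending human review: {action.description}"


if __name__ == "__main__":
    print(execute(ProposedAction("rotate credentials on affected host", severity=1)))
    print(execute(ProposedAction("isolate partner network segment", severity=4)))
    print(execute(ProposedAction("isolate partner network segment", severity=4),
                  human_confirmed_by="duty-officer-03"))
```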

Such governance does not eliminate grey zones, but it reduces their destabilising potential.

Conclusion

AI and cyber power are reshaping strategic competition by expanding grey zones between peace and war, law and non-law, human and machine. These zones thrive on ambiguity, speed, and fragmented responsibility.

Managing them requires more than technological superiority. It demands governance frameworks capable of operating within uncertainty, preserving accountability where thresholds dissolve.

Cybersecurity, digital forensics, and AI governance together form the architecture through which power can be constrained, escalation moderated, and strategic stability maintained. Without this integration, grey zones risk becoming the dominant terrain of conflict, not because war is declared, but because governance cannot keep pace with automation.

