    Diplomacy in the Age of AGI

    Designing Societal Defences for AI-Driven Security and Governance

    Introduction: From Existential Risk to Security Architecture

    Artificial intelligence is reshaping the structure of power faster than any prior technology. Yet, the governance conversation remains fragmented—split between technical alignment research and moral philosophy—while the security and diplomatic dimensions of AI risk remain underdeveloped. On one side, we have the technical alignment community focusing on how to make advanced AI systems behave safely—through methods like reinforcement learning from human feedback, interpretability, scalable oversight, or robustness testing. On the other side, ethics communities focus on what values or principles AI should embody—such as justice, fairness, human rights, autonomy, the “good society.” If nuclear weapons introduced deterrence as the logic of the twentieth century, advanced AI may introduce coordination under opacity as the logic of the twenty-first.

    When we speak of “AI safety,” we often refer to technical robustness or alignment with human values. But the survival problem before us is much broader: how to maintain international stability and institutional coherence in a world where cognitive capability itself becomes a variable of power. This requires not only engineering but diplomacy, foresight, and the deliberate design of what we might call societal defences—systems of trust, transparency, and adaptive governance resilient enough to withstand the pressures of transformative intelligence.

    The ideas here are informed by systems-thinking practice, research on AI corporate accountability with The Midas Project, and engagement across foresight and diplomacy communities. My claim is simple: to make AI “go well,” humanity must treat it as a strategic governance challenge, demanding the same disciplined cooperation that once stabilised nuclear deterrence.

    Diplomacy and the Logic of Containment

    Diplomacy, at its essence, is the management of uncertainty through communication. Historically it has served to prevent misunderstandings from cascading into existential catastrophe—from the Concert of Europe to the Cuban Missile Crisis. The same strategic logic must now be applied to AI.

    Unlike nuclear arms, AI systems proliferate through weights, data, and code rather than fissile material. Containment, therefore, cannot rely on physical inspection alone; it must rely on confidence-building measures, such as information-sharing, model reporting standards, joint evaluation protocols, and reciprocal transparency commitments. These are diplomatic instruments, not merely regulatory ones.
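    To make “model reporting standards” concrete, here is a minimal sketch, in Python, of what a single disclosure record might contain. The field names and structure are illustrative assumptions, not an existing standard or any lab’s actual reporting format.

        # Illustrative sketch of a model disclosure record; the fields are
        # assumptions, not an established reporting standard.
        from dataclasses import dataclass, field

        @dataclass
        class ModelReport:
            developer: str                 # organisation releasing the model
            model_name: str
            training_compute_flop: float   # total training compute, in FLOP
            data_summary: str              # provenance of training data
            eval_results: dict = field(default_factory=dict)  # benchmark -> score
            known_risks: list = field(default_factory=list)   # disclosed failure modes

            def is_complete(self) -> bool:
                """Reciprocal transparency only works if core fields are filled in."""
                return all([self.developer, self.model_name,
                            self.training_compute_flop > 0, self.data_summary])

    The point of such a schema is not these particular fields but machine-readability: reports that states and labs can exchange, compare, and audit automatically.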

    Corporate actors—OpenAI, Anthropic, DeepMind, and their emerging peers—have become de facto sovereigns in capability development. Engaging them requires Track II diplomacy: structured dialogues between non-state actors that complement formal treaties. AI corporate accountability, which my Veritas research will explore, functions as precisely such a bridge—aligning private incentives with public security through disclosure, auditability, and shared oversight mechanisms.

    The aim is not to slow innovation arbitrarily, but to stabilise expectations: to create predictable norms of behaviour under conditions of accelerating capability. Diplomacy remains the only mechanism humanity has ever invented for that purpose.

    Peace and Security in the Algorithmic Century

    Advanced AI introduces novel risks to global peace and stability. First, strategic instability: autonomous decision-making systems in defence or cyber domains could shorten reaction times beyond human capacity, heightening escalation risk. Second, informational warfare: synthetic media and generative propaganda already erode epistemic trust, undermining the deliberative foundations of democracy. Third, economic asymmetry: control over frontier models and compute resources could produce a concentration of power rivalling the industrial monopolies of previous centuries.

    Macroeconomic transitions are likely to follow the contours of automation bifurcation: productivity gains accruing to capital while displacing cognitive labour. States unable to integrate displaced workforces may experience fiscal contraction and social unrest. The emergence of “AI OPECs”—cartels controlling compute or training data—could reshape trade relations and trigger new forms of resource nationalism. 

    Politically, most parties are only beginning to future-proof themselves against these shifts. Some European Greens and Liberals have incorporated AI ethics into digital charters, while newer transnational movements such as Volt Europa—where I contribute to Chapter strategy—have started to treat long-term existential risk as a governance challenge rather than a niche concern. Yet mainstream parties still rarely treat existential security as a core policy category. This vacuum leaves space for techno-populism and for geopolitical blocs to form around competing AI ideologies: democratic-pluralist, authoritarian-centralised, or corporate-technocratic. In this volatile landscape, diplomacy becomes crisis prevention: establishing channels between these paradigms before misperception evolves into confrontation.

    Governance Fragility and the Institutional Lag

    Existing institutions were designed for threats that are linear, territorial, and observable. AI development, by contrast, is exponential, transnational, and partially opaque. This mismatch produces what we might call institutional lag—the delay between technological transformation and the adaptive response of governance systems.

    Europe illustrates both the promise and peril of complex governance. The EU, OSCE, and Council of Europe create multiple nodes of accountability, yet coordination remains slow. The EU AI Act (2024) establishes vital regulatory baselines, but without dynamic feedback loops it risks ossification. Regulation must learn at the pace of innovation; otherwise, even compliance becomes performative.

    Corporate opacity compounds this fragility. Without standardised disclosure on model capabilities, training data, and risk evaluation, policymakers are negotiating in informational darkness. Here again, diplomacy offers a template: verification regimes akin to those in arms control. Instead of centrifuge counts, we would verify compute flows; instead of enrichment thresholds, capability thresholds. Such analogies may appear extreme, but they reflect a shared logic—containment through cooperative transparency.
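    To illustrate what verifying a capability threshold could look like, the toy sketch below estimates a run’s training compute from parameter and token counts, using the common rule of thumb that training compute is roughly 6 × parameters × tokens, and checks it against a declared threshold. The 10^25 FLOP figure mirrors the EU AI Act’s systemic-risk presumption; the function names and workflow are assumptions for illustration.

        # Toy sketch: would a declared training run cross a treaty threshold?
        # The 1e25 FLOP figure mirrors the EU AI Act's systemic-risk presumption;
        # everything else here is illustrative.
        TREATY_THRESHOLD_FLOP = 1e25

        def estimated_training_flop(parameters: float, tokens: float) -> float:
            """Common approximation: training compute ~ 6 * parameters * tokens."""
            return 6.0 * parameters * tokens

        def requires_notification(parameters: float, tokens: float) -> bool:
            """Hypothetical reporting obligation for runs above the threshold."""
            return estimated_training_flop(parameters, tokens) >= TREATY_THRESHOLD_FLOP

        # A 500-billion-parameter model trained on 10 trillion tokens:
        print(requires_notification(5e11, 1e13))  # True (about 3e25 FLOP)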

    Systems Thinking and Foresight as Preventive Diplomacy

    From a systems-thinking perspective, governance failures are not isolated errors but feedback breakdowns. A small misalignment at the policy level can cascade into global fragility if signals are delayed or ignored. Resilient systems are characterised by redundancy, diversity, and learning capacity.

    Applying this to AI governance, foresight functions as preventive diplomacy: scenario planning, red-teaming, and cross-sector simulations allow states and firms to anticipate failure modes before they materialise. The EU’s Joint Research Centre and initiatives like Futures4Europe already experiment with foresight methodologies; integrating expertise from these communities into AI policy processes could provide early warning of strategic misalignment or technological shock.

    I also argue that a second layer of foresight must be participatory: technocratic governance alone risks legitimacy collapse. Deliberative assemblies—citizen panels, cross-party foresight councils, and equivalents of global town halls—can anchor governance in democratic consent. My experience reviewing for the Journal of Deliberative Democracy has shown that inclusion improves both legitimacy and epistemic quality: in other words, diverse publics can spot blind spots that experts overlook. Participation is not antithetical to security; it is part of it. When societies perceive AI governance as opaque or elite-driven, compliance erodes and extremism fills the void.

    Europe’s Strategic Readiness

    Europe is often praised for normative leadership—GDPR, digital-rights frameworks, and now the AI Act. Yet normative leadership without strategic readiness is fragile. Operationally, Europe depends heavily on foreign compute infrastructure and private frontier models; strategically, it lacks a unified foresight capability comparable to the U.S. National Intelligence Council or China’s State AI Planning Office. Whilst Europe possesses a diverse foresight ecosystem—JRC, ESPAS, EEAS, and several national foresight bodies—it lacks a centralised, directive foresight authority capable of integrating technological and security planning at the continental scale. Is this a weakness or a strength? Time will tell.

    Nevertheless, Europe’s polycentric diplomacy—its capacity to negotiate overlapping jurisdictions—can become an asset. We can envision a European AI Safety Accord built on three pillars:

    – Interoperable standards for risk classification and model auditing.
    – Transparent compute registries to monitor large-scale training runs (sketched in code after this list).
    – Multilateral crisis hotlines between states and frontier labs for incident reporting.
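
    As a minimal sketch of the second pillar, under the assumption that a registry is, at bottom, an append-only log of declared training runs that oversight bodies can query (all names and fields below are illustrative):

        # Minimal sketch of a compute registry: an append-only log of declared
        # training runs that oversight bodies can query. Fields are illustrative.
        from datetime import date

        class ComputeRegistry:
            def __init__(self):
                self._runs = []  # append-only; entries are never mutated

            def declare_run(self, lab: str, model: str, flop: float, start: date):
                self._runs.append({"lab": lab, "model": model,
                                   "flop": flop, "start": start})

            def runs_above(self, threshold_flop: float):
                """Regulator's query: which declared runs exceed a threshold?"""
                return [r for r in self._runs if r["flop"] >= threshold_flop]

        registry = ComputeRegistry()
        registry.declare_run("ExampleLab", "frontier-v1", 3e25, date(2025, 1, 15))
        print(registry.runs_above(1e25))  # -> the frontier-v1 declaration

    In practice, such a registry would need attestation and independent verification of the declared figures; the sketch shows only the shape of the data and the query.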

    Such frameworks could mirror the 1975 Helsinki Accords—a landmark Cold War–era agreement that stabilised postwar borders—which used soft law and confidence-building to reduce systemic tension. The objective is not control, but stability through communication—a principle at the heart of both diplomacy and AI safety. To operationalise this, Europe must embed foresight units across key ministries—Foreign Affairs, Defence, Economy, Justice, and more—so that anticipatory governance informs security planning, industrial policy, and human-capital development alike. Europe might also support cross-sector fellowships and cultivate “diplomatic technologists” fluent in both governance and AI risk science. Finland’s Parliamentary Committee for the Future already shows what such institutionalised foresight can look like.

    The Future of International Justice

    AI will soon transform the international justice system—from evidence gathering to sentencing analytics. Machine-learning tools already assist in identifying war-crimes imagery [Deepfakes in the Dock (2024), WITNESS] and analysing atrocity data. Through networks such as the Platform for Peace & Humanity, where I contribute as a news writer on international justice, we see practitioners grappling with the evidentiary and ethical implications of digital transformation in conflict monitoring. These communities are beginning to articulate normative guardrails for the responsible use of AI in accountability processes—a microcosm of the broader governance challenge AI now poses to law and diplomacy alike. 

    Yet automation raises profound questions of legitimacy and accountability. If AI models contribute to human-rights violations—through surveillance, misinformation, or autonomous targeting—how will international law attribute responsibility? Existing doctrines of command responsibility presume human intent. We may require a new category: algorithmic liability.

    Conversely, AI could strengthen justice by improving transparency and access: searchable jurisprudence, multilingual translation, and predictive caseload management. But without safeguards, such systems risk encoding bias or geopolitical manipulation. The deeper concern lies in epistemic capture: if courts begin to rely on opaque algorithms for evidentiary or predictive reasoning, the authority of human judgement is eroded. International law must therefore establish auditable AI systems and digital chain-of-custody standards.
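    To make “digital chain-of-custody standards” concrete, a toy sketch: each evidence record carries a hash computed over its content and the previous record’s hash, so altering any earlier record breaks every hash after it. This is one illustrative mechanism, not a proposed standard.

        # Toy chain of custody: each record's hash covers its content plus the
        # previous record's hash, so any later tampering is detectable.
        import hashlib, json

        def record_hash(entry: dict, prev_hash: str) -> str:
            payload = json.dumps(entry, sort_keys=True) + prev_hash
            return hashlib.sha256(payload.encode()).hexdigest()

        def append_record(chain: list, entry: dict) -> None:
            prev = chain[-1]["hash"] if chain else ""
            chain.append({"entry": entry, "hash": record_hash(entry, prev)})

        def chain_intact(chain: list) -> bool:
            """Recompute every hash; an edit to any earlier record is detected."""
            prev = ""
            for rec in chain:
                if rec["hash"] != record_hash(rec["entry"], prev):
                    return False
                prev = rec["hash"]
            return True

        chain = []
        append_record(chain, {"item": "drone footage", "custodian": "field office"})
        append_record(chain, {"item": "drone footage", "custodian": "tribunal"})
        print(chain_intact(chain))  # True until any record is altered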

    In short, I do not believe that AI will replace international justice, but it will certainly test its epistemic foundations. Diplomatic coordination—among states, firms, and civil society—will be required to ensure that automation strengthens, rather than supplants, the rule of law.

    Macroeconomic and Political Transitions Ahead

    The diffusion of generative and decision-making AI will alter global macroeconomics as profoundly as did industrialisation. According to OECD projections (2024), automation could affect up to 27 per cent of European employment within a decade. Fiscal systems reliant on income taxation will strain, and capital-intensive economies may experience widening inequality.

    Geopolitically, control over compute and rare-earth supply chains could redefine alliances. The “chip diplomacy” already visible between the U.S., Taiwan, and the Netherlands illustrates a shift from energy to compute as the currency of power. Europe must therefore approach AI not merely as a regulatory issue, but as a strategic industrial policy issue.

    Politically, we can expect these trends:

    – Technocratic centralisation, as governments seek control over complex systems.
    – Populist backlash, exploiting fears of automation and surveillance.
    – Transnational civic networks forming to advocate for ethical governance.

    Parties that ignore these shifts risk obsolescence. Future-proofing requires foresight integration into policymaking—treating long-term risk as a governance discipline, not a campaign theme.

    The shared insight from AI safety, public security, foresight, and diplomacy is that defending civilisation from AI risk requires not only smarter machines, but smarter coordination. In diplomatic terms, this means replacing the zero-sum logic of past arms races with collective resilience agreements—frameworks where transparency and accountability are treated as global public goods. And here, Europe can lead by example, leveraging its multilateral tradition to prototype cooperative security mechanisms before irreversible capability asymmetries emerge.

    Conclusion: Foresight as a Diplomatic Imperative

    AI governance is not a technical sidebar to international politics; it is international politics, and it will determine whether the next century is defined by cooperation or collapse. I believe that effective diplomacy offers the grammar through which humanity can manage this transformation: dialogue under uncertainty, transparency without naivety, and deterrence without hostility. But diplomacy must now operate between humans and systems, between governments and corporations, and between experts and citizens.

    We are entering an era when peace will depend on our ability to coordinate with entities faster and possibly more complex than ourselves. Building societal defences—multi-layered, adaptive, and participatory—is thus not optional, but existential.