The convening offered a much-needed space where theory met practice; where geopolitical realities, technical architectures, and governance responses were debated not as abstractions, but as institutional design challenges.
The discussions, grounded in the principles of the Digital Statecraft Manifesto, revealed a field at an inflection point. Digital statecraft is no longer just about digitizing services or regulating platforms at the margins. It is about rethinking the state itself as a coordinator in a world where AI systems, data infrastructures, and global platforms increasingly mediate social, economic, and political life.
Below are my ten high-level takeaways from the convening — signals, perhaps, from the frontier of digital statecraft. In keeping with the spirit of the convening — held under the Chatham House Rule — I will not attribute specific remarks to individuals, but will instead reflect some of the collective insights that emerged across the discussions.

DSA Convening at Jesus College. Picture by Zeynep Engin
1. From Sovereignty to Agency (and Self-Determination)
One of the most debated themes was the rise of and growing call for “AI sovereignty.” While traditionally framed as a nation’s ability to manage dependencies by building and controlling its own AI systems, participants pushed toward a more nuanced understanding — one centered on agency, including self-determination.
Sovereignty, as was discussed, spans multiple dimensions: technical (compute and infrastructure), data (access to local and high quality datasets), regulatory (rule-setting power), knowledge (skills and talent), and cultural (alignment with societal values and protecting cultural heritage).
Yet sovereignty alone risks becoming a defensive posture. Agency, by contrast, emphasizes the capacity to act: to determine outcomes in alignment with the preferences and expectations of communities, reduce asymmetries with dominant private actors, and make meaningful choices in a constrained technological landscape. In sum, it enables digital self-determination.
This shift aligns with the Manifesto’s call to center purpose and legitimacy. The question is not whether states can “own” AI, but whether they can use it to advance public value in ways that are accountable, inclusive, and context-sensitive.
2. The State and Meta-Coordination
A recurring insight was that the state’s historical role — as a coordinator of markets and society — is being challenged by digital platforms. Platforms like Uber or Airbnb do not own assets; they coordinate interactions. Increasingly, AI systems perform similar coordination functions, often more efficiently and at scale. This raises a profound question: if coordination migrates to private AI systems, what remains of the state’s core function?
Participants argued that digital statecraft requires reclaiming and reinventing this role. The state must evolve into a meta-coordinator — designing the rules, infrastructures, and incentives that shape how coordination happens across public and private actors.
This is where the Manifesto’s emphasis on participation and resilience becomes critical. Coordination cannot be outsourced entirely; it must be governed.
3. The AI Stack, Exceptionalism and the Politics of Dependency
Understanding digital statecraft today requires understanding the AI stack — from energy and chips to models and applications. The convening highlighted how deeply interdependent this stack is. A handful of companies control foundational models and infrastructure, while supply chains span continents.
At the same time, participants cautioned against AI exceptionalism. While the scale and speed of AI are novel, many of the underlying governance challenges — around concentration, dependency, standards, and access — are not. They echo earlier experiences with digital infrastructure, telecommunications, and data governance. Treating AI as entirely unique risks overlooking these lessons and reinventing rather than adapting proven approaches.
This creates both concentration risks and governance opportunities. On one hand, the ecosystem is fragile and unequal. On the other, its relative narrowness offers leverage points for intervention. For states, the challenge is not to control the entire stack, but to identify strategic entry points — whether in data governance, standards-setting, or public infrastructure — and to build capabilities where they matter most.
4. The Contextual Value of Digital Public Infrastructure
Digital Public Infrastructure (DPI) is positioned as purpose-agnostic, modular, and reusable: more like a “wheel” than a finished product. Its principles — interoperability, minimalism, inclusivity, decentralization, and privacy by design — could offer a blueprint for embedding governance directly into technical systems. In this sense, DPI may represent digital statecraft encoded in architecture. It enables innovation while maintaining public oversight. It allows data to remain distributed while being usable. And it creates a foundation upon which both public and private actors can build.
At the same time, the relevance and design of DPI are highly context-dependent. In environments where legacy systems are fragmented, exclusionary, or underdeveloped, DPI can serve as a transformative foundation: leapfrogging constraints and enabling new ecosystems. However, in contexts where legacy infrastructures function well or where strong, mature digital ecosystems already exist, the role of DPI may be more incremental: augmenting, interconnecting, or layering additional capabilities rather than replacing existing systems. In such cases, a “plus-one” approach, building on top of what works, may be more effective than wholesale reinvention.
5. Governing What We Cannot See
A central tension emerged around the opacity of AI systems — and the potential for that opacity to be exploited to spread disinformation. As models become more complex, they become less explainable. Future systems may operate beyond human comprehension. This creates a governance paradox: how do you govern systems you cannot fully understand?
In this context, participants emphasized that social license matters as much as legal license. Legitimacy cannot rest on compliance alone; it requires ongoing engagement with communities, transparency about risks and trade-offs, and mechanisms for contestation and redress. Especially when systems are opaque, trust must be earned through process, not just promised through regulation.
Participants also pointed to several other approaches:
- Embedding governance in code (e.g., APIs, federated systems);
- Developing traceability and provenance mechanisms;
- Shifting focus toward outcomes — such as performance, error rates, and impact — rather than relying solely on understanding internal logic.
The discussion echoed the Manifesto’s call for adaptive governance — systems that evolve with technology rather than lag behind it.
6. Trust vs. Trustworthiness
Another critical distinction was between trust and trustworthiness. Trust can be misplaced; trustworthiness must be demonstrated. Examples such as independent oversight panels, transparent audits, and open reporting mechanisms illustrated how institutions can build earned legitimacy.
In the context of AI, this means moving beyond broad ethical commitments toward verifiable, operational practices. It requires showing — not just stating — how systems are governed, evaluated, and corrected. Crucially, trustworthiness is neither uniform nor universal; it is contextual, shaped by culture, experience, and power dynamics. Building trustworthiness, therefore, demands approaches that are responsive to diverse contexts, especially in cross-cultural and global settings.
7. From IT Projects to Organizational Transformation
One of the more practical insights concerned how AI is implemented within organizations. Participants noted that successful initiatives were often driven not by IT departments, but by HR, finance, or operational units closer to core mission and delivery. Why? Because they focused on real problems — inefficiencies, frustrations, unmet needs — rather than abstract technological possibilities.
This points to a broader conclusion: institutional capacity is often the real bottleneck. Policies and strategies alone are insufficient. What matters is building enduring capabilities — data stewards, interdisciplinary teams, and adaptive processes that can translate ambition into action.
It also suggests that digital statecraft is not just a policy challenge; it is an organizational one. It requires rethinking workflows, incentives, and cultures of collaboration across silos. The most effective strategies start with people and problems, not technology — and scale through sustained investment in institutional capability.
8. The Geopolitics of Fragmentation
The convening also underscored a shift from global collaboration to national approaches to AI. While this fragmentation reflects geopolitical realities, it risks undermining, for instance, scientific exchange and collective problem-solving. At the same time, regional collaborations, such as within the Association of Southeast Asian Nations, may offer more viable pathways forward. In this context, participants stressed that technical and governance interoperability is essential to sustain collaboration and avoid siloed systems, even as political approaches diverge.
Yet beyond the national and regional levels, an additional layer is increasingly salient: the local. Building on the principle of subsidiarity, participants emphasized that the governance and innovation of AI may, in practice, be most effective when anchored closer to communities. This perspective, which I have described as AI Localism, recognizes that cities, municipalities, and local institutions are often better positioned to experiment with context-specific use cases, build social license, and align AI systems with local needs and values.
Digital statecraft must therefore navigate not only the balance between national or regional autonomy and interdependence, but also a multi-level governance architecture that integrates local, national, regional, and global efforts. Complete autonomy is neither feasible nor desirable; strategic collaboration is essential — but so too is empowering local actors to lead where they are best placed to act.
9. Cultural Sovereignty and Identity
Beyond economics and security, AI raises deeper questions of culture and identity. With many models predominantly trained on English-language and Western-centric data, there is a growing risk of cultural homogenization — where dominant narratives are amplified while others are rendered invisible.
At the same time, efforts to “lock” culture within national or linguistic boundaries risk constraining creativity, exchange, and innovation. The challenge, then, is not isolation but balance: preserving diversity while enabling meaningful cross-cultural interaction.
In contexts such as Indigenous knowledge systems, this tension becomes even more acute — requiring safeguards against both marginalization and extraction. Protecting cultural integrity must go hand in hand with ensuring agency over how data is used, shared, and represented.
This dimension of digital statecraft is often overlooked, yet it is central to any vision of governance that is truly inclusive, pluralistic, and reflective of the societies it seeks to serve.
10. From Solutions to Questions to Public Value
Finally, a key takeaway was methodological: digital statecraft requires a shift from solutions to questions. Rather than asking “how do we deploy AI?”, it requires leaders to ask:
- What problems are we trying to solve?
- What are the questions that matter?
- Who defines those problems and questions?
- What data is needed to answer these questions — and who controls it?
This aligns closely with the idea that questions are infrastructure: they shape research agendas, policy priorities, and ultimately, societal outcomes. At the frontier of digital statecraft, the ability to ask better questions may be as important as the ability to build better systems.
Conclusion: Designing the State of the Future
The convening at Cambridge made one thing clear: digital statecraft is not a niche domain. It is the core challenge of governance in the 21st century. It requires rethinking the state as:
- A meta-coordinator in a platform-dominated world;
- A designer of infrastructures, not just policies or training programs;
- A steward of questions, data and AI for public value;
- A trusted institution in an age of opacity and uncertainty.
The Digital Statecraft Manifesto provides a compass. But the journey is only beginning.

The DSA Manifesto is available at https://zenodo.org/records/17037682
What emerged from Cambridge is not a set of answers, but a recognition: the future of governance will be shaped not just by how we build or use technology, but by how we design the institutions and frameworks that govern it.