Governing AI in a Fractured World: The U.S.-China Trust Dilemma
Mistrust between Washington and Beijing leaves global AI risks unchecked.
Imagine this: a powerful earthquake strikes near a densely populated border region. U.S. and Chinese AI systems pick up different aftershock patterns and infrastructure risks. But with data-sharing frozen, neither side alerts the other. Rescue teams mobilize too late in key zones, and preventable damage becomes reality. The technology existed, and so did the warning signs. But the cooperation did not.
Artificial intelligence has placed the world on the cusp of radical change, from digital assistants to advanced military strategy and everything in between. Yet the two countries leading its development, the United States and China, can barely speak to each other about the technology shaping the future.
A spiral of mistrust has halted formal cooperation and stalled informal exchanges, allowing the global risks of artificial intelligence to go unchecked. How did we arrive at this point, and what are the consequences of maintaining the status quo? These questions must be answered before exploring how limited trust might be built, because without meaningful cooperation, the promise of AI may be eclipsed by dangers that no nation can manage alone.
The Spiral Takes Hold, and the Costs Are Real
The U.S.–China technology rivalry has long shaped the bilateral relationship. From telecommunications and cybersecurity to semiconductors, each frontier has carried strategic weight. Artificial intelligence is simply the newest of these frontiers, though the stakes are now far higher. Unlike past technologies, AI has the potential to reshape how every previous tool is used and to alter the foundations of economic and military power.
The United States and China approach AI governance with fundamentally different values. Washington emphasizes innovation, market-led rapid adoption, limited state interference, and the protection of free expression. Beijing insists that AI reflect the “correct political direction” and “core socialist values,” alongside risk mitigation, economic development, and the promotion of Chinese models abroad. These opposing visions are reinforced by mutual narratives of suspicion. U.S. officials worry about data security, disinformation, military applications, and economic competition. Chinese leaders, in turn, fear U.S. dominance in AI and the risks it poses to their national security and growth.
As of mid-2025, the United States has imposed computer chip export controls, pressured partners to align with U.S. suppliers, and expanded its Entity List to limit cooperation with Chinese AI firms deemed security threats. These measures achieve near-term strategic goals but come at a cost. The chilling effect is evident: foreign foundries hesitate to work with Chinese chipmakers, Beijing has warned its AI leaders against travel to the United States, and companies on both sides face steep costs for even informal cooperation.
In essence, export restrictions and technology controls begat Chinese crackdowns and self-reliance drives, which in turn fed U.S. anxieties about a tech powerhouse operating outside democratic norms — a cycle that deepened with every new restriction or security scare. The result is not convergence but two parallel governance systems taking shape, each developing on its own terms and with little chance of alignment.
The lack of mutual verification, assurance, and cooperation on AI has consequences beyond geopolitics. Crucially, restricting technology sharing between the two countries creates systemic information gaps that could undermine global AI safety frameworks. The longer the two powers remain locked in suspicion, the harder it becomes to coordinate even on threats they both face.
Soft Law, Stalled by a Lack of Trust
Formal treaties on AI between Washington and Beijing are almost unimaginable in the current climate, which is why soft law — non-binding principles, voluntary norms, and industry-led best practices — should, in theory, be the easiest space for cooperation. There is already a foundation for international soft law: UN resolutions, G7 AI principles, collaborative bodies such as the OECD’s AI Policy Observatory, and multi-stakeholder dialogue indicate that normative common ground exists. Notably, in March 2024, the United States introduced a UN General Assembly resolution on “safe, secure, and trustworthy AI,” which China co-sponsored — an unusual act of diplomatic reciprocity in AI governance. Though nonbinding, it marked genuine international soft-law alignment.
Unfortunately, soft-law efforts to instill responsible AI governance, which depend on transparency and at least minimal information-sharing to function, are undermined by mutual suspicion. U.S. firms worry that even informal dialogue with Chinese counterparts could invite increased scrutiny or sanctions; Chinese companies fear that openness will expose them to accusations of information leakage or compliance violations. The result is that the most flexible, low-stakes avenue for collaboration is instead another victim of the spiral of mistrust.
At the heart of the problem lies not just a divergence of rules but an absence of trust. For soft law to work, companies and regulators must be willing to exchange at least minimal information. The trust deficit prevents not only binding treaties, which are already unlikely, but also the informal guardrails that could keep AI risks manageable. Without a baseline of confidence that cooperation will not be weaponized, even voluntary principles remain frozen on parallel tracks.
Pathways Forward
If formal treaties are unlikely and soft law is ineffective in the age of strategic competition, what remains? The answer may lie in starting small, in areas where cooperation is both less politically sensitive and clearly in the public interest. Experts have already identified overlap between American and Chinese AI governance initiatives, revealing “promising topics” with opportunities for productive dialogue.
Additionally, technical verification of AI development and deployment — modeled loosely on arms control — is emerging as a concrete path for cooperation. Recent research suggests that technical and personnel-based verification mechanisms could credibly confirm compliance with AI safety standards. While still early-stage, these proposals illustrate how verification might one day allow states to confidently oversee large-scale AI development without requiring full transparency or sacrificing national security.
From a practical standpoint, disaster response, climate modeling, and public health are natural candidates. Each offers a way to prove that technical collaboration can reduce shared risks without compromising national security. Even limited data exchanges, such as jointly testing best practices for disaster prediction, could begin to chip away at the mistrust.
Confidence-building steps do not need to be grand. Track-two dialogues between academics and technical experts, standardized reporting practices for AI incidents (e.g., through the AI Incident Database), or narrowly scoped partnerships in multilateral settings such as the United Nations could serve as trust incubators. The aim is not immediate harmony but a demonstration that cooperation is possible at all.
Ultimately, avoiding the worst outcomes of AI will require some baseline of trust between the United States and China. Neither side is ready for sweeping agreements, but a few small bridges could keep the vicious cycle of suspicion from hardening into permanence and keep the door open for future alignment.
Editorial contributions by Rachael Rhine Milliard
The views and information contained in this article are the author’s own and do not necessarily represent those of The Asia Cable.