AI Safety · Systemic Risk · Geopolitics

The gap between safety research and governance is the risk

Syntony AI Labs is an open research initiative developing evaluation frameworks and policy scholarship at the intersection of frontier AI safety, systemic risk, and global geopolitics.

Founded by Nathan Heath, Syntony AI Labs publishes open research on AI safety evaluation, governance, and emerging technology risk.

01 The Initiative

"The mechanisms of deterrence, multilateral coordination, and systemic risk are not new. What is new is the technology."

Evaluation-grounded research that speaks the language of governance

I develop open evaluation methodologies, assessment frameworks, and translational research designed to explore the policy and governance implications of advanced AI systems across technical, institutional, and geopolitical contexts. My research is informed by hands-on adversarial evaluation of frontier systems and published openly at peer-review standards.

Most safety research stays inside the technical community. Most governance work proceeds without it. Through Syntony AI Labs, I work at the boundary — producing research that is methodologically rigorous and institutionally legible, bridging my background in classical geopolitics with the complex threat models of frontier AI.

Open
All frameworks and tools I develop are published under open license
Evaluation-Grounded
Informed by hands-on adversarial testing of frontier AI systems
Translational
Designed to move between technical and policy audiences by default
/ˈsɪntəni/ • noun

A state of resonance.
Tuned to the exact same frequency.

In 1897, early radio pioneers faced a critical problem: transmitters were drowning each other out across chaotic, shared airwaves. The solution, patented by physicist Oliver Lodge, was syntonic telegraphy. By carefully tuning both transmitter and receiver to resonate on the identical frequency, clear communication finally emerged from the noise.

Today, the deployment of frontier AI systems faces a similar crisis of resonance. Technical models are frequently misaligned with human intent, and global regulatory bodies operate on entirely different wavelengths than the laboratories building the technology.

Technical Syntony

Alignment. Ensuring that the high-dimensional internal representations of frontier models resonate accurately with human values, intent, and safety parameters without destructive interference.

Governance Syntony

Multilateralism. Architecting shared evidentiary standards and cross-border regulatory frameworks so that international institutions can coordinate on identical policy wavelengths.

02 Research Focus

Areas of Inquiry

My work is dedicated to the intersection of frontier AI safety, systemic risk, and global geopolitical strategy.

Adversarial Evaluation

Developing open, reproducible evaluation protocols for frontier models — leveraging direct red-teaming experience to build scalable safety assessment methodologies.

AI & Global Geopolitics

Analyzing how frontier AI systems intersect with international power dynamics, economic competition, and the strategic stability of multilateral alliances.

Governance & Risk

Translating empirical failure modes into governance architectures, regulatory models, and institutional design recommendations for the international community.

Systemic Dynamics

Testing multi-agent systemic flows to map emergent risks and establish dynamic benchmarking paradigms for interactive frontier-model ecosystems.

Emerging Tech Analysis

Analyzing the trajectory of dual-use emerging technologies to anticipate strategic shifts and inform long-range policy interventions.

Research Translation

The systematic practice of making technical safety research legible to policy communities and governance requirements legible to technical teams.

The Translational Triad

True alignment requires structural resonance across all three nodes. A failure in empirical evaluation leaves policy blind. A failure in mapping systemic dynamics limits us to static, obsolete benchmarks. And a failure in multilateral governance renders even the best technical safeguards geopolitically useless.

A failure in one node is a failure of the entire system.

Empirical Evaluation · Multilateral Governance · Systemic Dynamics

What evaluators learn must reach the people designing governance. That translation is the work.

03 Frontier AI & Governance

This site includes work published through Syntony AI Labs as well as selected research, writing, and speaking engagements originally produced through other institutions.

04 Approach

How I operate

I publish open research at the intersection of AI safety evaluation and global geopolitics. Every framework, tool, and publication is designed to be used, tested, and built upon.

Open-Source by Default

All frameworks, evaluation protocols, and publications are released under open license. Reproducibility is a requirement, not a feature.

Evaluation-Informed

My work is grounded in hands-on adversarial testing of frontier systems — not theoretical speculation about models I have never evaluated.

Publication-Grade Rigor

Formal methodology, transparent assumptions, honest uncertainty quantification. Designed to withstand peer review at top venues.

Translation as Core Method

Every output is produced in forms legible to both technical and policy audiences. This is not outreach — it is the research methodology itself.

Internationally Oriented

AI governance is a cross-border coordination problem. My research accounts for that from the first line, not as an afterthought.

Independent

Aligned with the work, not with institutional pressures. My research is conducted and published entirely in the public interest.

05 About Me
Nathan Heath

Founder & Lead Investigator

I am an AI Safety Researcher and national security analyst with over 13 years of experience at the intersection of global geopolitics and emerging technology. My career began in classical geopolitics — analyzing deterrence, alliance stability, and institutional conflict. Today, through Syntony AI Labs, I apply that foundational expertise to the most critical systemic challenge of our era: the safe deployment and governance of frontier AI.

I currently serve as a Senior Research Scientist at National Security Innovations. I also serve as a consulting AI Safety & Security Researcher with OpenAI and Anthropic, and as a Security Fellow with the Truman National Security Project.

I completed my M.A. in Law and Diplomacy at The Fletcher School and studied political economy and strategy at Oxford and Harvard. My work has been presented or published through venues including the United Nations, the University of Cambridge, the RAND Corporation, War on the Rocks, and The Washington Post.

Open research. Open tools. Open source. AI safety and governance research is a public good. I publish accordingly — all work on this site is available under open license for the research community to use, extend, and challenge.


Let's find syntony

I welcome collaboration with researchers, evaluation teams, governance bodies, and institutions working on frontier AI safety and oversight.