Bernie Sanders and the Bipartisan Blind Spot on AI Safety
A few days ago, I expressed a harsh view: there isn’t a single person on Capitol Hill I could fully trust to handle AI safety with the seriousness it demands. Recent events have forced me to revise that assessment, at least in one case.
Senator Bernie Sanders (I-VT) recently hosted an event on Capitol Hill featuring leading AI scientists from the United States and China. The discussion focused on the profound risks posed by advanced artificial intelligence and the urgent need for international cooperation on regulation. During the event, Sanders openly admitted his limited technical familiarity—famously noting he doesn’t even know how to turn on his TV. Yet that humility may be part of what makes his approach refreshing in a city often dominated by performative expertise.
The core idea emerging from the discussion is pragmatic: politicians lack deep technical knowledge of AI, while the entrepreneurs racing to build it are locked in fierce competition for market share and dominance. Neither group is ideally positioned to set the rules alone. The proposed bridge? Bring together AI scientists from the world’s two leading powers—the US and China—let them identify shared risks and safety protocols, and then have legislators actually listen and act on that expert consensus.
This isn’t a call for naive collaboration on cutting-edge capabilities. It’s a recognition that certain safety and governance issues transcend national rivalry. Scientists like MIT’s Max Tegmark, along with experts from Tsinghua University and the Beijing Institute of AI Safety and Governance, participated in the dialogue. Their involvement underscores that AI safety is not purely a competitive domain but a shared global risk-management challenge—comparable in some ways to nuclear non-proliferation or pandemic preparedness.
AI development differs fundamentally from past technologies. We cannot afford the traditional regulatory playbook of waiting for disasters—seat-belt-style deaths on a societal scale—before implementing guardrails. The potential for loss of control, massive economic disruption, or existential risks is discussed seriously by many experts building these systems. Proactive, evidence-based regulation is essential, and it requires input from those who understand the technology’s trajectory, not just those who fund or vote on it.
Politicians hold the regulatory power, but they need credible, non-partisan expertise. Tech leaders are incentivized by competition, speed, and profits. Independent scientists—particularly those focused on safety and alignment—can help fill the gap. An unlikely figure like Bernie Sanders shining a spotlight on this, by convening experts across borders, highlights a rare moment of potential progress in Washington.
Whether this translates into meaningful legislation remains to be seen. But acknowledging the problem and elevating expert voices is a necessary first step. In the high-stakes arena of AI, humility about what we don’t know may prove more valuable than confidence from those with skin in the game. The conversation Sanders helped facilitate is one worth watching—and building upon.
A few days ago, I expressed a harsh view: there isn’t a single person on Capitol Hill I could fully trust to handle AI safety with the seriousness it demands. Recent events have forced me to revise that assessment, at least in one case.
Senator Bernie Sanders (I-VT) recently hosted an event on Capitol Hill featuring leading AI scientists from the United States and China. The discussion focused on the profound risks posed by advanced artificial intelligence and the urgent need for international cooperation on regulation. During the event, Sanders openly admitted his limited technical familiarity—famously noting he doesn’t even know how to turn on his TV. Yet that humility may be part of what makes his approach refreshing in a city often dominated by performative expertise.
The core idea emerging from the discussion is pragmatic: politicians lack deep technical knowledge of AI, while the entrepreneurs racing to build it are locked in fierce competition for market share and dominance. Neither group is ideally positioned to set the rules alone. The proposed bridge? Bring together AI scientists from the world’s two leading powers—the US and China—let them identify shared risks and safety protocols, and then have legislators actually listen and act on that expert consensus.
This isn’t a call for naive collaboration on cutting-edge capabilities. It’s a recognition that certain safety and governance issues transcend national rivalry. Scientists like MIT’s Max Tegmark, along with experts from Tsinghua University and the Beijing Institute of AI Safety and Governance, participated in the dialogue. Their involvement underscores that AI safety is not purely a competitive domain but a shared global risk-management challenge—comparable in some ways to nuclear non-proliferation or pandemic preparedness.
AI development differs fundamentally from past technologies. We cannot afford the traditional regulatory playbook of waiting for disasters—seat-belt-style deaths on a societal scale—before implementing guardrails. The potential for loss of control, massive economic disruption, or existential risks is discussed seriously by many experts building these systems. Proactive, evidence-based regulation is essential, and it requires input from those who understand the technology’s trajectory, not just those who fund or vote on it.
Politicians hold the regulatory power, but they need credible, non-partisan expertise. Tech leaders are incentivized by competition, speed, and profits. Independent scientists—particularly those focused on safety and alignment—can help fill the gap. An unlikely figure like Bernie Sanders shining a spotlight on this, by convening experts across borders, highlights a rare moment of potential progress in Washington.
Whether this translates into meaningful legislation remains to be seen. But acknowledging the problem and elevating expert voices is a necessary first step. In the high-stakes arena of AI, humility about what we don’t know may prove more valuable than confidence from those with skin in the game. The conversation Sanders helped facilitate is one worth watching—and building upon.
Max Tegmark’s Work on AI Safety: A Physicist’s Push for Provable Control and Human-Centered Futures
Max Tegmark, an MIT professor of physics and AI researcher, is one of the most prominent and persistent voices in AI safety. As co-founder and president of the Future of Life Institute (FLI), he has bridged technical research, public advocacy, policy influence, and international dialogue. His approach combines rigorous scientific analysis with a clear normative stance: AI should enhance and remain under human control, not replace or escape it. Core Philosophy and Public InfluenceTegmark’s 2017 bestseller Life 3.0: Being Human in the Age of Artificial Intelligence popularized the idea of humanity transitioning to a stage where intelligent beings (including AI) can redesign both their hardware and software. He frames AI as potentially the best or worst thing to happen to humanity, urging proactive steps to steer toward positive outcomes.
He co-organized the influential 2015 Puerto Rico AI conference and helped draft the Asilomar AI Principles (2017), which emphasize research funding for safety, value alignment, and avoiding arms races in lethal autonomous weapons. Through FLI, he has supported grants, policy work, and open letters, including efforts to pause giant AI experiments and, more recently, calls to restrict superintelligence development.
Tegmark frequently highlights existential risks—scenarios where advanced AI could lead to loss of human control, civilizational collapse, or worse. He argues that capabilities are advancing far faster than safety measures and pushes for treating AI like other high-stakes domains (e.g., nuclear power, aviation, or pharmaceuticals) with rigorous standards, testing, and regulation. Key Technical Contributions
He supports whistleblower protections, external red-teaming, pre-deployment safety testing, and regulatory floors. FLI’s work has influenced discussions on the EU AI Act and global summits. Tegmark stresses that voluntary corporate efforts are insufficient; enforceable standards are essential. Recent Activity (2025–2026)
For primary sources, see his MIT page, FLI resources, the Provably Safe Systems paper, and the ongoing AI Safety Index. Tegmark remains actively involved in both deepening the science of safe AI and pushing for the policy and cultural shifts needed to implement it.
Max Tegmark, an MIT professor of physics and AI researcher, is one of the most prominent and persistent voices in AI safety. As co-founder and president of the Future of Life Institute (FLI), he has bridged technical research, public advocacy, policy influence, and international dialogue. His approach combines rigorous scientific analysis with a clear normative stance: AI should enhance and remain under human control, not replace or escape it. Core Philosophy and Public InfluenceTegmark’s 2017 bestseller Life 3.0: Being Human in the Age of Artificial Intelligence popularized the idea of humanity transitioning to a stage where intelligent beings (including AI) can redesign both their hardware and software. He frames AI as potentially the best or worst thing to happen to humanity, urging proactive steps to steer toward positive outcomes.
He co-organized the influential 2015 Puerto Rico AI conference and helped draft the Asilomar AI Principles (2017), which emphasize research funding for safety, value alignment, and avoiding arms races in lethal autonomous weapons. Through FLI, he has supported grants, policy work, and open letters, including efforts to pause giant AI experiments and, more recently, calls to restrict superintelligence development.
Tegmark frequently highlights existential risks—scenarios where advanced AI could lead to loss of human control, civilizational collapse, or worse. He argues that capabilities are advancing far faster than safety measures and pushes for treating AI like other high-stakes domains (e.g., nuclear power, aviation, or pharmaceuticals) with rigorous standards, testing, and regulation. Key Technical Contributions
- Provably Safe Systems (2023 paper with Steve Omohundro): This argues that the only reliable path to controllable AGI is building systems that provably satisfy human-specified requirements. They advocate using advanced AI itself for formal verification and mechanistic interpretability to turn opaque neural networks into more understandable, verifiable architectures. The paper outlines challenge problems and stresses that black-box systems are insufficient for superintelligence-level safety.
- Mechanistic Interpretability and Intelligible Intelligence: Tegmark’s MIT group applies physics and information theory tools to make neural networks more transparent—discovering symbolic formulas, invariants, symmetries, and modular structures. The goal is replacing uninspectable “black boxes” with systems whose behavior can be formally guaranteed.
- AI Safety Index (via FLI): Tegmark has driven public benchmarking of leading AI companies (e.g., Anthropic, OpenAI, Google DeepMind, xAI, Meta, and Chinese firms) on domains like risk assessment, existential safety planning, governance, and transparency. Reports (including the Summer 2025 edition) typically award low grades overall (mostly C to F), highlighting inadequate existential safety strategies despite companies’ AGI ambitions. The index aims to create a “race to the top” through transparency and pressure, while underscoring the need for binding regulation.
- Keep the Future Human initiative/essay: Tegmark advocates “closing the gates” to smarter-than-human, autonomous, general-purpose AGI and superintelligence. Instead, focus on narrow, controllable AI tools that amplify human capabilities. He outlines risks of an uncontrollable intelligence explosion and argues for international coordination (including US-China) to prevent rogue or misaligned systems.
He supports whistleblower protections, external red-teaming, pre-deployment safety testing, and regulatory floors. FLI’s work has influenced discussions on the EU AI Act and global summits. Tegmark stresses that voluntary corporate efforts are insufficient; enforceable standards are essential. Recent Activity (2025–2026)
- Promoting the AI Safety Index updates and calling out gaps in company practices.
- Advocating “pro-human” AI futures where systems remain tools under human control, with explicit rejection of AI personhood.
- Continued research on guaranteed safe AI and interpretability.
- Public commentary on timelines, warning that unregulated races toward AGI heighten catastrophic risks.
For primary sources, see his MIT page, FLI resources, the Provably Safe Systems paper, and the ongoing AI Safety Index. Tegmark remains actively involved in both deepening the science of safe AI and pushing for the policy and cultural shifts needed to implement it.
US-China AI Safety Cooperation: Competition Amid Shared Risks
The United States and China lead global AI development, creating both intense rivalry and powerful incentives for cooperation on safety. While strategic competition dominates—particularly around chips, models, compute, and military applications—both nations recognize that advanced AI poses transnational risks that neither can manage alone. These include malicious use by non-state actors (e.g., bioweapons design, cyberattacks, or disinformation at scale), loss of control over increasingly autonomous systems, and accidents that could cascade globally. Recent Momentum and High-Profile EngagementsCooperation has advanced through a mix of official (Track 1), semi-official (Track 1.5), and unofficial (Track 2) channels:
Recent signals (potential Trump-Xi AI discussions, continued Track II work, and expert consensus on shared risks) suggest pragmatic engagement may expand in 2026, even as competition intensifies.
Success depends on keeping dialogues narrowly focused on verifiable safety science, building expert relationships over time, and maintaining realistic expectations. AI's dual-use nature means the US and China will likely continue racing—while quietly working to ensure the race doesn't end in mutual (or global) disaster.
This remains a dynamic, high-stakes area. Progress on safety cooperation could serve as a rare stabilizing force in an otherwise tense bilateral relationship.
The United States and China lead global AI development, creating both intense rivalry and powerful incentives for cooperation on safety. While strategic competition dominates—particularly around chips, models, compute, and military applications—both nations recognize that advanced AI poses transnational risks that neither can manage alone. These include malicious use by non-state actors (e.g., bioweapons design, cyberattacks, or disinformation at scale), loss of control over increasingly autonomous systems, and accidents that could cascade globally. Recent Momentum and High-Profile EngagementsCooperation has advanced through a mix of official (Track 1), semi-official (Track 1.5), and unofficial (Track 2) channels:
- Bernie Sanders' Capitol Hill Event (April 2026): Senator Bernie Sanders convened US and Chinese AI experts, including Max Tegmark (MIT/Future of Life Institute), Xue Lan (Tsinghua University), and Zeng Yi (Beijing Institute of AI Safety and Governance). The discussion emphasized existential risks, the need for international regulation, and "safe zones" for collaboration on safety standards, protocols, and risk monitoring. Chinese participants stressed that safety is a mutual interest: "If one country is not safe, all of us are not safe."
- Official Dialogues: The first intergovernmental US-China AI dialogue occurred in Geneva in May 2024. Both countries signed the 2023 Bletchley Declaration on AI safety and supported UN resolutions on trustworthy AI. As of early 2026, discussions continue around potential new bilateral talks, possibly tied to Trump-Xi summits, focusing on non-state actor threats, crisis hotlines, and guardrails.
- Track II and Expert Efforts: Initiatives like the International Dialogues on AI Safety (IDAIS) have produced joint statements identifying "red lines" (e.g., autonomous replication, deception). Think tanks (Brookings, RAND, Yale Paul Tsai China Center) and scientists facilitate ongoing exchanges on governance, evaluations, and best practices.
- Risk Assessment and Testing Protocols: Shared methodologies for red-teaming, dangerous capabilities evaluations (e.g., biosecurity, cyber), and wet-lab proxy studies. Focus on how to test rather than specific findings.
- Nonbinding Guidelines and Standards: Common baselines for high-risk uses (cyber, chemical, biological), incident reporting, and safeguards against misuse. This could reduce "safety arbitrage" where bad actors exploit weaker models.
- Crisis Management Mechanisms: AI-specific hotlines or communication channels to clarify incidents, reduce miscalculation, and coordinate on non-state threats.
- Broader Governance: Capacity building for the Global South, alignment on preventing AI in nuclear command systems, and research into verifiable safety techniques (e.g., interpretability, watermarking).
- Geopolitical Friction: Export controls, espionage concerns, and zero-sum perceptions hinder deep technical sharing. Critics argue dialogues risk technology transfer or providing China leverage.
- Asymmetric Capabilities: China lags in some independent safety evaluation infrastructure; the US worries about models with weaker safeguards proliferating.
- Incentives: Both sides prioritize leadership and military edge. Trust is low, and voluntary measures may prove insufficient without enforcement.
- Political Headwinds: Events like the Sanders panel drew conservative criticism for engaging Chinese experts amid rivalry.
Recent signals (potential Trump-Xi AI discussions, continued Track II work, and expert consensus on shared risks) suggest pragmatic engagement may expand in 2026, even as competition intensifies.
Success depends on keeping dialogues narrowly focused on verifiable safety science, building expert relationships over time, and maintaining realistic expectations. AI's dual-use nature means the US and China will likely continue racing—while quietly working to ensure the race doesn't end in mutual (or global) disaster.
This remains a dynamic, high-stakes area. Progress on safety cooperation could serve as a rare stabilizing force in an otherwise tense bilateral relationship.
🛡️ Bernie Sanders and the Global Quest for AI Safety https://t.co/54nLaJuQKq @SenSanders @BernieSanders @AOC @ZohranKMamdani
— Paramendra Kumar Bhagat (@paramendra) May 11, 2026