Anthropic Unveils Transparency Rules for Frontier AI Models
Last Week in AI Policy #25 - July 15, 2025
Delivered to your inbox every Monday, Last Week in AI Policy is a rundown of the previous week’s happenings in AI governance in America. News, articles, opinion pieces, and more, to bring you up to speed for the week ahead.
Policy
Anthropic released a ‘transparency framework’ for frontier models
On July 7, Anthropic published a ‘targeted transparency framework’ for frontier models that can, in its words, ‘be applied at the federal, state, or international level.’
Anthropic became the first AI company to release a Responsible Scaling Policy (RSP) in September 2023, and this soon set an industry standard.
The framework, however, signals a shift in focus for Anthropic as it seeks to make a wider impact on policy.
The framework is very light on detail, but intentionally so: Anthropic designed it this way to keep it “lightweight and flexible”.
The Proposed Frontier Model Transparency Framework outlines a four-step approach.
Scope:
The framework explicitly identifies frontier models as its sole target, exempting small firms and start-ups.
Whether a developer must comply is determined by a combination of thresholds: computing power, computing cost, evaluation performance, annual revenue, and R&D expenditure.
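To make that gating concrete, here is a minimal sketch of how such a combined-threshold scope test could work. All field names and threshold values below are hypothetical placeholders, not figures from Anthropic's proposal, and the evaluation-performance criterion is omitted for brevity.

```python
from dataclasses import dataclass

@dataclass
class Developer:
    training_flop: float        # compute used for the largest training run
    training_cost_usd: float    # dollar cost of that run
    annual_revenue_usd: float
    rd_spend_usd: float

def in_scope(dev: Developer) -> bool:
    """A developer is covered only if its model is frontier-scale AND the
    company is large; small firms and start-ups fall outside the framework.
    All thresholds here are illustrative placeholders."""
    frontier_scale = dev.training_flop >= 1e26 or dev.training_cost_usd >= 1e8
    large_company = dev.annual_revenue_usd >= 1e8 or dev.rd_spend_usd >= 1e9
    return frontier_scale and large_company

# Example: a start-up training small models would be exempt
print(in_scope(Developer(1e24, 5e6, 2e7, 1e7)))  # False
```

The point of combining both tests is that a small lab that happens to train one large model, or a large company that trains no frontier models, would not be pulled into scope.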
Pre-Deployment Requirements:
Suggested requirements for developers within the scope of the framework include the creation of Secure Development Frameworks (SDFs, another term for RSPs).
In practice, this means companies must publish a strategy for how they intend to manage safety as their models become more capable, and then ensure their own guidelines are followed prior to release.
Anthropic explains that the purpose of SDFs is to prevent or mitigate Catastrophic Risks, defined as chemical, biological, radiological, or nuclear (CBRN) harms, or a model causing harm by acting in a way contrary to its intended use.
The framework then provides a 7-point standard for SDFs, including specifying the models the SDF applies to, describing how risks are addressed and mitigated, outlining whistleblower protections, and retaining copies of SDFs and their updates for 5 years.
Minimum Transparency Requirements:
Developers must disclose their current SDF on a public-facing website, publish a system card when the model is deployed, certify adherence to the SDF, and describe any action taken to mitigate risks.
Companies are permitted to redact only information considered a trade secret, confidential business information, or that could compromise public safety. Any redactions must be identified and justified in the public version.
Enforcement:
The framework prohibits ‘intentionally false or materially misleading statements related to SDF compliance’ and authorises the attorney general to pursue civil penalties for violations, though companies have 30 days to cure any noncompliance before action is taken.
U.S. and Israel commit to AI innovation partnership with signing of MoU
U.S. Secretary of Energy Chris Wright and Secretary of the Interior Doug Burgum signed a Memorandum of Understanding (MoU) on Tuesday with Israeli Prime Minister Benjamin Netanyahu and Israeli Ambassador to the US Yechiel (Michael) Leiter.
The agreement was presented by the National Energy Dominance Council (NEDC), established by President Trump in February with the goal of pursuing energy dominance by promoting and utilizing “affordable” and “reliable” forms of energy and reducing red tape.
The MoU establishes a formal partnership between the US and Israel with the aim of advancing collaboration on both energy and AI.
Wright outlined the intentions behind the MoU, stating that formalizing collaboration between the two nations helps to “ensure the United States and Israel are leaders in AI and remain energy dominant forces as AI transforms our future.”
This partnership is a continuation of President Trump’s pivot toward bilateral agreements on AI and tech more broadly.
While the MoU is not as comprehensive or concrete as the US deals with the UAE and Saudi Arabia, it nevertheless serves as a stepping stone to future collaboration.
NOAA and Google announced they are teaming up to improve hurricane forecasting
The National Oceanic and Atmospheric Administration’s (NOAA) National Hurricane Center (NHC) is partnering with Google to enhance hurricane and tropical weather forecasting models, it was announced on Tuesday.
The latest addition to a growing list of public-private partnerships on AI, the collaboration will pair NHC’s existing communication and forecasting infrastructure with Google’s industry-leading AI weather modelling capabilities to evaluate and improve forecasts.
According to NOAA, ‘Google’s AI for weather team will provide near-real-time AI tropical cyclone forecasts to NHC’ and this will ultimately enable NHC forecasters to communicate risk to the public and save lives.
NHC director Michael Brennan commented, “the pace of weather modeling innovation is increasing and Google is a stellar partner in AI weather model development.”
Google DeepMind’s Ferran Alet added, “we look forward to working together on the agency’s smooth integration of AI models into NHC operations to enhance their track and intensity forecasts.”
Virginia pilots a nation-first ‘agentic AI’ program to streamline regulations
Governor Glenn Youngkin issued an executive order on Friday outlining intentions to pilot a nation-first “agentic AI” program to ‘streamline state regulations and guidance documents.’
The initiative followed the Virginia administration’s announcement that it had achieved its goal of reducing red tape by 25%. Youngkin is now hoping that AI can continue this work.
Virginia will give generative AI access to thousands of pages of agency rules and regulations to ‘identify redundancies, contradictions and overly complex language—all in the name of efficiency.’
Governor Youngkin expressed optimism that AI could reduce red tape by a further 10%, stating that “it’s about setting targets and hitting them — and then blowing through them and doing even more.”
While this seems like a worthwhile initiative, a few things are still unclear.
First, how exactly is the administration measuring red tape reduction—by word count, number of regulations, or something else?
Second, is the model they are using actually agentic? Will it autonomously rewrite or remove inefficient language? The wording of the executive order suggests they will be using a standard LLM.
Third, removing redundancies may make regulations more concise—but does that actually reduce red tape?
The order states that all periodic reviews of regulations carried out by executive branch agencies after December 31, 2025 should use AI to assess inefficiencies.
Press Clips
AI Governance, Ethics and Leadership provides a history of Grok ✍
Sinification translates Liu Shaoshan’s roadmap to Chinese AI dominance ✍
Anton Leicht predicts two phases of AI automation ✍
Ryan Greenblatt sits down on the 80,000 Hours podcast to discuss scenarios and timelines 🔉
Kalim Ahmed on the homogenizing nature of the internet, and now AI (AI Policy Perspectives) ✍
The first episode of ChinaTalk’s new podcast series ‘Overfit’, hosted by Jordan Schneider, Nathan Lambert and Jasmine Sun 🔉
A conversation between Jaap van Etten, Co-Founder and CEO of Datenna, and Jeanne Meserve at the Special Competitive Studies Project’s AI+ Expo. They tackle ways to rethink tech policy in the context of US-China competition 🔉
SemiAnalysis on Meta's AI revival: Zuckerberg's billion-dollar bets on compute, talent, and infrastructure ✍