Anthropic, an AI safety and research company, has announced its intention to officially sign the European Union’s General-Purpose AI Code of Practice.
In a statement, Anthropic highlighted that the Code reflects core principles the company has long championed: transparency, safety, and accountability.
“We believe the Code advances the principles of transparency, safety, and accountability—values that have long been championed by Anthropic for frontier AI development,” says the company. “If thoughtfully implemented, the EU AI Act and Code will enable Europe to harness the most significant technology of our time to power innovation and competitiveness.”
Flexible, robust safety standards
Recent studies suggest AI could contribute more than €1 trillion annually to the EU economy by the mid-2030s.
The EU’s Code aims to set safety guidelines that are both flexible and robust, helping Europe capture the full benefits of AI.
As a result, Anthropic believes the Code, alongside Europe’s AI Continent Action Plan, demonstrates that flexible yet robust safety standards can empower broader AI adoption without stifling innovation.
Anthropic’s support for the Code is underpinned by its own internal frameworks, notably the Responsible Scaling Policy, which the company has refined over the past two years.
The Code’s Safety and Security Frameworks align with Anthropic’s approach to assessing and mitigating risks, particularly catastrophic threats such as chemical, biological, radiological, and nuclear (CBRN) risks.
“As outlined previously, Anthropic believes the frontier AI industry needs robust transparency frameworks that hold companies accountable for documenting how they identify, assess, and mitigate risks. The EU Code establishes this baseline through mandatory Safety and Security Frameworks that build upon Anthropic’s own Responsible Scaling Policy and will describe important processes for assessing and mitigating systemic risks.”
‘Build AI in America’ push
Just 24 hours later, Anthropic announced that it is doubling down on its commitment to Build AI in America.
The company has released a new report, “Build AI in America,” calling on the US government to invest in energy and infrastructure to support the development of powerful AI models within the country.
“For the United States to lead the world in AI, we must make substantial investments in computing power and electricity that make it possible to build AI in America,” says the company.
According to the company, training frontier AI models will require massive amounts of power: at least 50 gigawatts by 2028, or twice the peak electricity demand of New York City.
“Frontier AI model training requires continuous access to firm, reliable power sources, and meeting this goal will require extraordinary U.S. energy capacity across a range of energy technologies,” the report explains. “An ‘all of the above’ approach is required to ensure American AI leadership.”
The report focuses on two main strategies: creating large-scale AI training facilities and promoting AI use across the country.
It recommends opening federal lands to AI data centres, streamlining environmental permitting, upgrading energy transmission, and investing in workforce training.
The company states, “By acting decisively now, we can ensure that the future is built here in America.”
Anthropic believes that, with the right infrastructure and policies, the US can remain a leader in AI and maximise its benefits across every industry.