AI Regulation in 2025
As 2025 begins, significant steps in AI legislation have materialized across the globe. The EU AI Act is nearing its first phases of implementation, aiming to establish both a unified framework and practical implementation guidance ahead of its obligation deadlines. Meanwhile, on December 26th South Korea passed its AI Basic Act, becoming the second jurisdiction after the EU to introduce comprehensive AI legislation. These developments, together with a myriad of frameworks and initiatives from the United Kingdom, the United States, and Canada, amongst others, reflect a growing focus on the regulation of this rapidly advancing field. In this article, we outline the state of AI laws and frameworks around the globe, their current approach to AI safety, and the latest actions taken by key jurisdictions.
The EU & the EU AI Act
The spearhead of AI legislation, the EU AI Act came under continuous scrutiny in 2024, from arguments that it stifles innovation to claims that it puts leading European companies at an economic disadvantage. However, the AI Office, and the Act itself, are turning towards a focus on harmonization in 2025, where such concerns can be debated, particularly through initiatives like the Code of Practice, which invites input from businesses, academics, and the public alike.
As such, the upcoming timeline is as follows:
- February 2025: AI systems that fall under the “Prohibited” category are banned (if you want to check your AI’s risk classification, schedule a demo with us and use our EU AI Act checker!)
- April 2025: The Code of Practice, which aims to detail general-purpose AI requirements and has been drafted in an iterative process since September 2024, is expected to be completed, with a Closing Plenary to follow. Moreover, the harmonized standards for high-risk AI are expected to be ready from CEN and CENELEC by April 2025, according to the CSIS, allowing adequate preparation time before the high-risk legal requirements begin to apply over the following 30 months.
- August 2025: Enforcement of certain requirements begins, including notification obligations, governance, rules on GPAI models, confidentiality, and penalties (other than penalties for providers of GPAI models). However, providers of GPAI models placed on the EU market before August 2nd, 2025 have until August 2nd, 2027 to achieve compliance.
While the EU AI Act’s entry into force in August 2024 did not itself trigger enforcement, voluntary compliance is encouraged under Recital 178.
The Code of Practice has been on the minds of many. Because the requirements of the EU AI Act are intentionally high-level, harmonization for general-purpose AI models is left to the Code of Practice. By engaging stakeholders from academia, industry, organizations, and the public, the Code aims to tackle the practical implementation of these requirements while addressing the systemic risks associated with general-purpose AI. As highlighted by the CSIS, the active involvement of a diverse range of stakeholders positions the Code as a reputable foundation for the responsible governance of frontier AI models and the advancement of AI safety well beyond the European Union.
On November 14, 2024, the EU AI Office published the first draft of the Code. The second draft was released on December 19, 2024, focusing on models released after August 2nd, 2025, when the new regulations take effect. Further discussions are scheduled for January 2025, with Working Group meetings covering technical risk mitigation, transparency, risk assessment, governance, and copyright rules; the third draft is expected in mid-February 2025.
Drafting of the Code of Practice is taking place under strong pressure to have it ready no later than May 2, 2025, and in effect by August 2025.
At an overarching level, countries are beginning to prepare for the Act’s effects by designating the national authorities responsible for it. For example, on December 23rd, Luxembourg's Parliament introduced Bill No. 8476 to implement the EU AI Act within the country, specifying the relevant national bodies. We expect to see similar legislation in other EU member states in the upcoming months.
FINMA & Switzerland
Switzerland often adopts a “technology-neutral” approach to regulation, applying sector-specific measures rather than broad horizontal rules to achieve a sound regulatory structure. While this holds true for its financial and transportation industries, amongst others, Switzerland is expected to align its AI policy to “be compatible with international standards to ensure that regulatory rules are not fragmented,” according to White & Case.
The Swiss Financial Market Supervisory Authority (FINMA), the country's financial regulatory body, has recently taken significant steps in addressing AI governance within financial institutions. In its December 2024 statement, FINMA issued guidance on governance and risk management for organizations leveraging artificial intelligence, building on its ongoing supervisory reviews of banks and insurance providers.
FINMA emphasized the importance of robust governance frameworks, centralized risk inventories, and measurable, quantitative approaches to AI risk management. The statement also highlighted areas where institutions must improve, including performance metrics, assessment rigor, and the explainability of AI models. These gaps remain critical concerns for FINMA as it seeks to ensure the safe and responsible use of AI in financial markets.
Beyond the financial sector, the Federal Council has tasked the Federal Department of the Environment, Transport, Energy, and Communications (DETEC) with exploring potential approaches to AI regulation. DETEC is expected to draft a formal regulatory proposal in 2025.
Additionally, Switzerland's Federal Act on Data Protection (FADP) governs the processing of personal data and therefore has direct implications for AI applications. The Swiss Federal Data Protection and Information Commissioner has accordingly outlined its expectation that AI systems comply with data protection principles.
The United Kingdom
The UK government’s approach to AI regulation is outlined in its 2023 AI Regulation White Paper, as well as its subsequent written response in February 2024 to feedback received during the consultation process. Together, these documents signal that the UK does not plan to introduce broad, horizontal AI regulation in the near future. Rather, the government advocates for a "principles-based framework," encouraging existing sector-specific regulators to adapt and apply these principles to AI development and use within their respective domains.
Central to implementing this approach is the UK Government's Office for Artificial Intelligence, established to oversee the execution of the UK’s National AI Strategy. Anticipated to play a crucial supporting role, the Office’s functions include monitoring the effectiveness of the regulatory framework, evaluating AI-related risks across the economy, and fostering compatibility with international regulatory frameworks.
The United States
At a high level, binding AI legislation is not on the horizon in the US Congress. As noted by White & Case, given the political divisions in the US and the influence of corporate lobbying, most of the current AI-related bills are unlikely to become law.
Despite the lack of comprehensive legislation, several frameworks and guidelines provide direction for regulating AI:
- The White House Executive Order on AI (Safe, Secure, and Trustworthy Development and Use of AI): This order focuses on federal agencies and developers of foundational AI models, mandating the creation of federal standards and requiring developers of the most advanced AI systems to share safety test results and other critical information with the U.S. government. However, the incoming Trump administration has indicated plans to revoke this order.
- The White House Blueprint for an AI Bill of Rights: This blueprint outlines principles for equitable access and responsible use of AI systems. It emphasizes five key areas: ensuring safe and effective systems, protecting against algorithmic discrimination, safeguarding data privacy, providing transparency through notice and explanation, and maintaining human oversight with alternatives and fallback options.
- The Federal Trade Commission: The FTC has adopted an assertive stance on regulating AI under its existing authority, warning companies that using AI tools with discriminatory impacts, making unsubstantiated claims about AI capabilities, or deploying AI without proper risk assessment may violate the FTC Act. For example, the FTC recently banned Rite Aid from using AI facial recognition technology without implementing adequate safeguards.
From a sectoral standpoint, several measures have been adopted. For example, in the insurance sector, the National Association of Insurance Commissioners issued a model bulletin highlighting the need for governance frameworks, risk management protocols, and testing methodologies. These guidelines aim to ensure that insurers use AI systems responsibly, particularly when such systems directly affect consumers.
Similarly, in the employment sector, New York City enacted Local Law 144 of 2021 to regulate automated decision-making tools in hiring. The law prohibits employers and employment agencies from using these tools unless they have undergone a bias audit within the past year.
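To make the audit requirement concrete: the implementing rules from NYC’s Department of Consumer and Worker Protection center on “impact ratios,” which compare each demographic category’s selection rate to that of the most-selected category. The snippet below is a minimal, illustrative sketch of that calculation in Python; the sample data and function name are hypothetical, and a real Local Law 144 audit has further requirements (intersectional categories, publication of results, and more).

```python
# Illustrative sketch of the "impact ratio" metric behind NYC Local Law 144
# bias audits. The data below is hypothetical; a real audit must follow the
# DCWP implementing rules in full.

# Hypothetical outcomes of an automated hiring tool: applicants per category
# and how many of each the tool selected to advance.
outcomes = {
    "category_a": {"applicants": 400, "selected": 120},
    "category_b": {"applicants": 250, "selected": 50},
    "category_c": {"applicants": 150, "selected": 45},
}

def impact_ratios(outcomes: dict) -> dict:
    """Selection rate of each category divided by the highest selection rate."""
    rates = {
        category: counts["selected"] / counts["applicants"]
        for category, counts in outcomes.items()
    }
    top_rate = max(rates.values())
    return {category: rate / top_rate for category, rate in rates.items()}

for category, ratio in impact_ratios(outcomes).items():
    print(f"{category}: impact ratio = {ratio:.2f}")
```

With this sample data, category_b’s impact ratio of roughly 0.67 would flag a disparity relative to the most-selected categories, the kind of result an independent auditor would examine.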
Canada
Canada is actively working to regulate AI at the federal level through the Artificial Intelligence and Data Act (AIDA). Introduced in June 2022, AIDA passed its second reading and was referred to the Standing Committee on Industry and Technology in April 2023. However, progress on the bill has been slow, leaving its future uncertain; if the review is not completed and the bill passed before the next federal election, scheduled for no later than October 2025, it will lapse and need to be reintroduced by the next government, potentially in revised form.
In parallel, on November 12, 2024, the government announced the creation of the Canadian Artificial Intelligence Safety Institute (CAISI), a new initiative funded as part of the broader CAD 2.4 billion investment in AI unveiled in the 2024 federal budget. The move marks a significant step in Canada's commitment to AI safety and innovation and complements the legislative efforts of AIDA.
South Korea
On December 26th, 2024, South Korea passed the AI Basic Act, becoming the second jurisdiction in the world, after the European Union, to enact comprehensive AI legislation. The Act, which has been in development since July 2020 and reflects input gathered over four years, consolidates 19 bills into a unified framework. Passed by the 22nd National Assembly following reviews by the Science, Technology, Information, Broadcasting, and Communications Committee, as well as the Legal Affairs Committee, it will take effect in January 2026.
In particular, the law empowers the Minister of Science and ICT to establish a national AI strategy every three years, incorporating input from relevant ministries and local governments. It also formalizes the role of the National AI Committee, launched in September 2024, and establishes the AI Safety Research Institute to address risks and protect citizens' welfare.