
The E.U. A.I. Act: Costs and Benefits

On Wednesday, March 13, the European Parliament passed the Artificial Intelligence Act (A.I. Act), the first ever large-scale regulatory framework addressing the burgeoning technology. The act was first proposed in 2021, years before the explosion in popularity of generative A.I. models such as OpenAI’s ChatGPT, and has been a source of lengthy negotiation and compromise, undergoing numerous changes before the final text was approved by European Union member states. The act comes into force 20 days after its publication in the Official Journal, expected in May or June, with its provisions being implemented in stages over the next few years.


The act categorises A.I. systems by the level of risk they pose, and attempts to address those risks with corresponding levels of enforcement. Systems considered a “clear threat to the safety, livelihoods and rights of people,” such as government-operated social scoring systems, are categorised as unacceptable and will be banned outright. A.I. systems used in important contexts such as critical infrastructure, law enforcement, education, and justice are categorised as high-risk, and will be subject to an array of safety obligations before their release to the market and throughout their lifecycles. To comply with the act, A.I. technology deemed high-risk will have to undergo assessment by authorities to determine its safety, and deployers of this technology will have to maintain substantive risk-management, oversight, and transparency protocols. Failure to comply can result in fines of up to €35 million or 7% of total worldwide annual turnover, whichever is higher.


In passing its A.I. Act, the European Union positions itself as a potential global leader in A.I. regulation, as the act may have a deep extraterritorial impact, pressuring nations such as the United States to create similar legislation. Though comparable legislation may not be far off in the United States, the nation has yet to pass large-scale federal A.I. regulation despite efforts by Senate Majority Leader Chuck Schumer; the only sweeping action to date is President Biden’s Executive Order establishing standards for A.I. safety. The U.K. government has engaged with the topic of A.I. safety, hosting the world’s first major A.I. Safety Summit in November 2023 and recently creating the AI Safety Institute, but it too has yet to pass a large-scale regulatory act similar to the E.U.’s. Both the U.S. and the U.K. may soon adopt similar legislation to stay aligned with the E.U. on the issue, improve worldwide regulation, and facilitate trade and cooperation.


Before any similar large-scale regulatory regimes take shape around the world, however, the act will likely have a swift impact on A.I. safety. Operators and manufacturers of A.I. technology and systems will conform to the new regulations to continue operating in the sizable E.U. market. The Brookings Institution suggests these regulations will be adhered to not only in the E.U. market but in international markets as a whole, since complying with different standards in each market would be costly and inefficient. As such, the creators of A.I. technology may adhere to E.U. regulation everywhere they operate, potentially keeping artificial intelligence systems safe not only in Europe but worldwide.


Though this act may set a precedent for global A.I. regulation, its success is by no means a foregone conclusion. The act has a number of problems which may make its implementation difficult or cause deeply negative ramifications. Though the act’s safety requirements are apparent, how successfully they can be implemented may depend largely on the clarity and efficiency of standards that are yet to be established. The act requires risk-management systems and the use of “high quality datasets” in training A.I. models; however, as A.I. remains such a new and dynamic technology, there are few established metrics for determining whether a risk is high or a dataset is high quality. The act’s legal requirements lack clarity, and its success may depend largely on how the European Union approaches implementation.


Additionally, overly tough A.I. regulation could cripple European A.I. innovation, with the cost of adhering to the act’s various obligations proving too much for smaller enterprises and startups, leaving new developments in the technology firmly in the hands of American companies such as OpenAI. If the costs of meeting the act’s obligations are too high, rather than setting a new standard, companies based outside Europe may shut down their services in the E.U., and companies based within Europe may simply move to regions with less stringent requirements. OpenAI CEO Sam Altman threatened in May 2023 to “cease operating” in the E.U. entirely if the company could not comply with the new regulations, and though other tech companies have publicly expressed approval of the act’s passage into law, it had previously faced intense lobbying efforts from both big tech and smaller European A.I. startups such as France’s Mistral AI.


The A.I. Act could be enormously beneficial in keeping individuals worldwide safe from the risks posed by the exciting but largely unregulated technology of artificial intelligence. For the act to succeed, however, it is imperative that the European Union provide legal clarity surrounding its implementation and avoid imposing costs on companies that hinder the development of the technology and the sector. With artificial intelligence being so nebulous and dynamic, this act is enormously important for the future of the technology and the regulation surrounding it, serving either as a cautionary tale or as a shining example for future legislation.

