
Can London become the Centre of AI?

In June, Prime Minister Rishi Sunak met with Joe Biden in Washington, D.C. Instead of negotiating a much-anticipated trade deal between the U.S. and the UK, Sunak announced something different: a goal to make the UK the new centre of AI by hosting the first global summit on AI regulation this fall.

Later that month, London Tech Week took place, bringing leaders from Microsoft, Google, Bloomberg and more to discuss developments in cyber security, the future of work, ClimateTech, FinTech and AI. Then, as the cherry on top of an already successful month, OpenAI, the developer of ChatGPT, announced it would open its first office outside the U.S. in London.

It seems that Sunak’s goal of making the UK the new global centre of AI, and thus placing the country at the forefront of technological development, is falling into place. However, there are still some considerations at stake before the UK can be crowned the new global Silicon Valley. This article will examine three considerations in particular: the General Data Protection Regulation (GDPR), the EU AI Act and the Microsoft/Activision Blizzard merger.

The General Data Protection Regulation

What is it?

Implemented in 2018, the GDPR is a set of rules concerning the collection and use of personal data. When one applies for a job, signs up for a loyalty programme or uses a service like Citizens Advice, one typically provides personal data, such as one’s name, email address and phone number. The GDPR ensures that any information provided is used according to principles including fairness, transparency, purpose limitation and accuracy. Under these principles and the GDPR’s other rules, consumers have rights regarding the processing of their personal data: to have incorrect data corrected, to stop or restrict processing, and to freely access any personal data they have provided. There are also stronger legal protections for information considered sensitive, such as race, sexual orientation and political opinions.

Overall, the GDPR is a way of protecting a consumer’s personal data from abuse or exploitation. The clarity and transparency of the GDPR also means that consumers are more aware of their rights and can be more certain of their autonomy over their own personal data.

How might it impact AI regulation and development in the UK?

There are two ways in which the GDPR may impact AI regulation and development in the UK.

The first concerns AI directly. Since AI services have become more accessible in the past year, it is to be expected that people have provided various bits of personal data to machines that enhance decision making or streamline normally time-consuming processes. Some, but not all, of these activities relate directly to the GDPR, for they involve organisations processing the personal data of users.

For example, the GDPR offers no protection if one uses an AI chatbot to inform decisions, such as asking which stocks to buy or where to go to university. Whilst such queries may involve inputting personal data, the GDPR alone does not protect against the consequences of the resulting decisions; it is common sense that one should not rely on ChatGPT’s advice alone when making important choices.

On the other hand, if an organisation uses AI to process job applications, this activity would likely come under the GDPR because it involves profiling. The Information Commissioner’s Office (ICO) defines profiling as an activity that involves “collect[ing] and analys[ing] personal data on a large scale, using algorithms, AI or machine-learning.”

The GDPR is specifically designed to address these risks. It specifies that automated decision making can only take place in four circumstances: where there is human involvement, where an individual has provided explicit consent, where it is needed for the performance of a contract or where it is authorised by law. There are other considerations as well, involving additional safeguards to ensure an individual’s interests are protected. More information is available on the ICO website.

Technology companies are therefore likely to consider the GDPR when choosing the UK as a base, since collecting personal data is a common activity for any business.

This leads us to the second way in which the GDPR may impact AI regulation and development in the UK: the U.S. has no equivalent to the GDPR, at least at the national (federal) level. Instead, data protection is regulated at the state level. As of 2022, five states (Utah, Colorado, Virginia, Connecticut and California) have some form of consumer privacy law. More than fifteen states are considering similar legislation of their own, whilst the remaining states have either not introduced comprehensive bills or have inactive ones.

When assessing a potential decision between Silicon Valley and London, it is worth comparing the GDPR to the California Consumer Privacy Act (CCPA). Although both regulations address data privacy and protection, they are more different than alike.

One significant consideration is scope: the CCPA applies only to ‘large companies,’ defined as those holding personal information on more than 50,000 California consumers or earning more than 25 million USD in revenue. The GDPR, on the other hand, applies to all businesses that hold any personal data of EU residents, regardless of size. Thus, the scope of the GDPR is much wider and can reach even small businesses.

Yet another significant consideration is the fines imposed when a violation is found. The CCPA fines only up to 2,500 USD per violation, whilst the GDPR can fine up to 20 million EUR (22.25 million USD) or 4% of annual global turnover, whichever is higher. Such a large difference in financial punishment is likely to affect how closely regulations are adhered to in the U.S. versus the UK.
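As an illustrative sketch only, the scope and penalty differences described above can be expressed in code. The function names are hypothetical, the thresholds are the figures quoted in this article, and real applicability turns on many more factors than this toy comparison captures:

```python
# Toy comparison of CCPA vs GDPR applicability, using the thresholds
# cited in the article. Not legal advice; real scope rules are broader.

def ccpa_applies(ca_consumers: int, annual_revenue_usd: float) -> bool:
    """The CCPA targets 'large companies': more than 50,000 California
    consumers, or more than 25 million USD in revenue."""
    return ca_consumers > 50_000 or annual_revenue_usd > 25_000_000

def gdpr_applies(holds_eu_personal_data: bool) -> bool:
    """The GDPR applies to any business holding EU residents' personal
    data, regardless of company size."""
    return holds_eu_personal_data

# Maximum fines cited above: 2,500 USD per CCPA violation versus a
# 20 million EUR (22.25 million USD) GDPR cap.
CCPA_FINE_PER_VIOLATION_USD = 2_500
GDPR_MAX_FINE_USD = 22_250_000

# A small start-up holding EU data but few California consumers falls
# under the GDPR yet escapes the CCPA entirely:
print(ccpa_applies(ca_consumers=1_000, annual_revenue_usd=2e6))  # False
print(gdpr_applies(holds_eu_personal_data=True))                 # True
```

The contrast in the final two calls is the article’s point in miniature: a company can be too small for the CCPA to notice while still carrying full GDPR exposure.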

Overall, the GDPR is likely to affect both AI regulation and development in terms of how personal data is used, and in whether companies are incentivised to set up in the UK given its strict requirements, especially compared with looser regimes such as the CCPA. Whilst the GDPR is not necessarily a hindrance to the UK becoming the centre of AI development, it is certainly a consideration companies will bear in mind.

The EU AI Act

What is it?

The EU AI Act is a proposed European law on AI. It assigns applications of AI to three categories based on their level of risk: unacceptable risk, high risk, and low-risk or unregulated activities. An example of an unacceptable-risk activity is government-run social scoring. An example of a high-risk activity is a CV-scanning tool used to rank job applicants, which could embed unintended biases.
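The tiered structure above can be sketched as a simple classification. The tier names and the first two example activities come from this article; the “spam filtering” entry and the lookup function are illustrative assumptions, and the real Act defines these categories in far more detail:

```python
# Toy sketch of the EU AI Act's risk tiers as described in the article.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable risk"   # e.g. government-run social scoring
    HIGH = "high risk"                   # e.g. CV-scanning tools for hiring
    LOW = "low risk / unregulated"       # everything else in this sketch

# Example classification (first two entries per the article; the third
# is an assumed low-risk example for illustration).
EXAMPLE_CLASSIFICATION = {
    "government social scoring": RiskTier.UNACCEPTABLE,
    "cv-scanning for job applicants": RiskTier.HIGH,
    "spam filtering": RiskTier.LOW,
}

def tier_for(activity: str) -> RiskTier:
    """Look up an activity's tier, defaulting to low risk in this sketch."""
    return EXAMPLE_CLASSIFICATION.get(activity.lower(), RiskTier.LOW)
```

The key design point the Act makes, mirrored here, is that obligations attach to the tier: unacceptable-risk activities are banned outright, while high-risk ones carry compliance duties.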

The law took a major step on June 12th, progressing to the final trilogue stage of the EU’s regulatory process after lawmakers voted to approve its text. The text details regulatory measures such as placing transparency requirements on generative AI tools like ChatGPT, banning real-time facial recognition and requiring companies to declare whether copyrighted material has been used to train their AIs (though, interestingly, there is no similar requirement to declare whether personal data has been used for training).

How might it impact AI regulation and development in the UK?

The EU AI Act offers a reference point for how the UK might have regulated AI had the AI boom happened pre-Brexit. Since exiting the EU, however, the UK has had free rein over its own AI policy. The government published a white paper on AI regulation in March 2023, describing its stance as a “pro-innovation approach.” How far the UK chooses to diverge from EU standards will shape how attractive investors, entrepreneurs and consumers consider the UK as a primary location for technological development.

It is also worth noting that OpenAI has already expressed discontent with the EU AI Act, with CEO Sam Altman saying the company could choose to “cease operations” in the EU if it cannot comply with the regulation. Though his comment does not speak for the whole technology industry, it suggests the UK may benefit from continuing down a route of more liberal legislation, as it appears to be doing.

The white paper shows that the UK has taken a creative approach to developing its own AI act, which involves what a Taylor Wessing analysis describes as a “principles-based, sector-focused, regulator-led approach instead of […] umbrella legislation requiring a host of definitions which may become quickly outdated.” However, it has also been described as “a patchwork of non-binding guidance underpinned by laws which are not directly related to AI.”

There is a balance to be struck between innovation and safety before the UK’s AI regulation is fully fledged. It will be interesting to see how the UK continues to shape regulation after Sunak’s highly anticipated global AI summit this fall, and how it differs from the EU AI Act.

The Microsoft/Activision Blizzard Merger

Yet another development to watch is Microsoft’s ongoing legal battle with the Competition and Markets Authority (CMA) over its proposed acquisition of Activision Blizzard, the creator of blockbuster video games such as Call of Duty and Guitar Hero. Microsoft has also been a lead investor in OpenAI since 2019.

Recently, the U.S. Federal Trade Commission (FTC) lost an appeal to block the acquisition. Microsoft also came to a deal with Sony to keep Call of Duty available on PlayStation consoles for the next 10 years. Sony was previously concerned that the acquisition would allow Microsoft to restrict availability of Call of Duty exclusively to their Xbox consoles.

The CMA (the UK’s equivalent of the FTC) has also previously attempted to block the acquisition. However, the FTC’s loss and the Sony deal have led to a rescheduled trial so that Microsoft and the CMA can consider these new developments. The CMA had previously been steadfast in rejecting the deal, and expressed discontent at the EU Commission’s ruling in favour of it in May.

The trial has now been moved from the end of July to an anticipated date in October. Whichever of the CMA and Microsoft presents the stronger case, the outcome will affect far more than the parties involved.

The decision has the potential to influence how other countries view the UK’s prospects as a leader in technology and AI. If the deal passes, technology companies may take note of the UK’s willingness to shed its reputation for tight regulation. Approval of the Microsoft/Activision Blizzard deal in the UK may mark the beginning of a shift towards a more liberal, U.S.-style regulatory stance, and would be a positive signal to OpenAI, which is heavily backed by Microsoft.

If the UK establishes a pattern of promoting looser regulation, future technology start-ups may gain the confidence to root their businesses in the UK. This would be a big step towards Sunak’s goal of placing the UK at the centre of AI, and thus of global technological development.


The GDPR, the EU AI Act and the Microsoft/Activision Blizzard deal are all tell-tale signs of how the UK is seeking to position itself as the centre of AI in the future. There are, of course, many more considerations this article has not examined, such as how the UK public may respond to liberal tech and AI regulation after years of being used to tight regulation. However, one thing is for sure: Sunak’s goals mean that changes are happening right beneath our feet. The next landmark to look out for will be Sunak’s global AI summit this fall.

