
Corporate Liability and AI: Who’s Responsible When Machines Decide?

This article examines corporate liability for AI with particular focus on the UK, while drawing on comparative developments in the EU and beyond where they affect or inform UK companies.


Artificial intelligence (AI) is rapidly reshaping corporate decision-making across sectors—from automated credit scoring in financial services to algorithmic hiring in human resources. As AI systems grow more autonomous and become embedded in critical business functions, a pressing legal question arises: who is liable when machines make decisions that lead to harm?


Traditional models of corporate liability—founded on concepts such as human intent, negligence, and misconduct—are being tested by the emergence of opaque, non-human agents. This article explores corporate liability in the age of AI, focusing on three interrelated dimensions: whether companies can delegate liability to AI systems, the evolving duties of directors in managing AI-related risks, and the legal implications for UK firms of emerging regulatory regimes such as the EU Artificial Intelligence Act and the New Product Liability Directive.


Can Companies Delegate Liability to AI?


A foundational issue is whether corporations can deflect liability by attributing harmful or discriminatory decisions to AI tools. In principle, under English corporate law, the answer is no. Liability remains with the company as a legal person, regardless of whether decisions are made by humans or machines.


Where AI is deployed in consequential domains—such as determining creditworthiness, selecting candidates for employment, or monitoring compliance—corporate responsibility persists. However, attribution becomes legally and practically complex when companies use “black box” AI systems whose internal decision-making logic is neither transparent nor comprehensible, even to their developers.


The difficulty of attributing liability becomes even more acute in sectors where AI makes or assists in high-impact decisions. As highlighted in HFW’s analysis of the commodities sector, AI is increasingly used for forecasting, scheduling, and even executing trades, sometimes with minimal human oversight. In the Singaporean case Quoine Pte Ltd v B2C2 Ltd [2020], the court resolved the dispute by tracing the operation of a deterministic trading algorithm back to the knowledge and intentions of the humans who programmed it. Although not a UK decision, it illustrates the evidentiary challenges that English courts may also face where AI systems operate more autonomously. The pressing question remains: to what extent should legal responsibility follow the chain of AI deployment—from developer to integrator to end-user?


Director Duties and Risk Management


The integration of AI systems into business operations implicates directors’ fiduciary and statutory duties, particularly under the Companies Act 2006. Section 172 imposes a duty to promote the success of the company, while section 174 requires directors to exercise reasonable care, skill, and diligence. These obligations take on a new and complex dimension when decisions are shaped by probabilistic algorithms that may generate discriminatory, erroneous, or non-compliant outcomes.


Directors must ensure that their firm’s use of AI aligns with its broader ethical commitments and legal obligations. This includes overseeing procurement processes to ensure that AI systems are rigorously evaluated for bias, accuracy, and fairness prior to deployment.


Ongoing monitoring is essential: directors cannot assume that a system which performed adequately at one point in time will continue to do so under changing conditions or datasets.


Legal precedent is beginning to emerge. In the Canadian case Moffatt v Air Canada [2024], a British Columbia tribunal held the airline responsible for misleading information provided by its website chatbot, rejecting the argument that the chatbot was a separate entity answerable for its own actions. While not binding in the UK, the decision illustrates a judicial willingness to place responsibility on the party controlling the deployment environment—an approach UK courts may find persuasive. This supports the view, also reflected in EU Parliamentary resolutions, that the entity "in control of the risks" should bear legal responsibility. Directors must therefore take ownership of the deployment environment and understand not just the functionality of AI, but also its limitations and potential for harm.


As noted in guidance from Clifford Chance (2024), boards must develop sufficient understanding of the capabilities and limitations of AI to discharge their oversight responsibilities effectively. This does not mean that all directors must be technical experts. Rather, they must be equipped to ask the right questions, scrutinise risk reports, and challenge technical assumptions where necessary. They should also ensure that appropriate internal governance structures are in place, such as audit trails for algorithmic decisions, interdisciplinary compliance reviews, and escalation mechanisms for AI-related concerns.


The EU Artificial Intelligence Act and the New Product Liability Directive


The EU Artificial Intelligence Act (AI Act), which entered into force in 2024 and applies in stages, with most obligations taking effect from 2026, represents a landmark development in global technology regulation. It introduces a risk-based framework that classifies AI systems as unacceptable, high, limited, or minimal risk. Each classification carries legal obligations proportionate to the potential harms associated with the AI system in question. While not part of UK law, it has significant extraterritorial reach: UK-based companies placing AI systems on the EU market, or whose systems produce outputs used in the EU, will fall within its scope.


Of particular importance for corporate actors are the provisions concerning high-risk AI systems—a category that includes tools used in recruitment, credit scoring, biometric identification, and access to essential public or private services. For such systems, firms must implement robust risk assessment and mitigation strategies, maintain traceability and logging mechanisms, ensure human oversight throughout the system’s lifecycle, and uphold stringent data governance standards.


Non-compliance can lead to significant sanctions, with fines of up to €35 million or 7% of global annual turnover for the most serious breaches, echoing the GDPR’s turnover-based enforcement model. Given the Act’s extraterritorial reach, UK firms should begin adapting their internal policies and technical processes now in anticipation of cross-border legal exposure.


The Act is complemented by the New Product Liability Directive (2024/2853), which modernises existing strict liability frameworks by explicitly extending coverage to software and AI systems. Again, while not part of UK law, the Directive will affect UK firms trading in the EU. Under the Directive, AI-related harm that arises from defective software—even if the defect was not the developer's fault—may trigger strict liability.


The Directive allows for presumed defectiveness in complex AI systems and imposes broader disclosure obligations on defendant companies, marking a shift toward more claimant-friendly norms. As Taylor Wessing notes, even intangible software code can now fall within the scope of product liability, provided it causes physical harm, mental injury, or data destruction.


Corporate AI Governance as a Legal Standard


Across jurisdictions, a coherent shift is emerging: AI governance is becoming a legal standard, not merely a best practice. As regulatory frameworks solidify, companies are being called upon to embed AI oversight into their existing risk management and compliance structures.


This involves developing internal ethical frameworks that guide the use and design of AI, informed by legal obligations and societal expectations. Firms must invest in educating senior leadership and board members about the capabilities and risks of AI technologies, ensuring that strategic decisions are made with a clear understanding of both commercial benefits and legal pitfalls.


Rather than relying on purely automated pipelines, companies are encouraged to retain meaningful human oversight, particularly in decisions that affect fundamental rights. For example, a firm deploying AI in hiring should ensure that final decisions are subject to human review and that the process is regularly audited for fairness. Similarly, financial institutions using AI to assess creditworthiness must be able to explain and justify adverse decisions to affected individuals.


Impact assessments are becoming standard practice for AI deployment, mirroring the data protection impact assessments required under the GDPR. These assessments must evaluate the potential harms of the AI system, identify safeguards, and ensure that the system aligns with the company’s legal obligations and ethical values.


While the UK’s approach to AI regulation remains more flexible and principles-based than the EU’s, it is far from laissez-faire. The 2023 UK AI Regulation White Paper sets out five cross-sector principles—safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress—which are likely to underpin sector-specific guidance in domains such as financial services, healthcare, and employment.


The Information Commissioner’s Office (ICO) has also issued guidance on AI and data protection, while the Financial Conduct Authority (FCA) is increasingly engaging with AI risks in financial markets. These principles, though not yet legally binding, are already shaping expectations around corporate conduct.


Conclusion: Accountability in the Algorithmic Age


Corporate reliance on AI tools creates novel tensions between automation and accountability. While companies cannot legally delegate liability to machines, the opacity and autonomy of AI systems may erode the evidentiary basis for attributing fault unless robust governance mechanisms are established.


Boards must take a proactive role in understanding, overseeing, and regulating AI use—both internally and across their value chains. This includes establishing clear lines of accountability, maintaining comprehensive documentation of algorithmic processes, and engaging with emerging legal standards such as the EU AI Act and the New Product Liability Directive, insofar as they affect UK firms.


The trajectory of corporate liability in the age of AI is becoming clear. Automation does not dilute legal responsibility—it intensifies it. Directors and legal advisors who fail to grasp this shift risk not only regulatory penalties but also reputational damage in a business environment increasingly attuned to the ethical and social risks of digital technologies. As AI continues to expand into legally sensitive domains such as finance, recruitment, and surveillance, firms must treat algorithmic governance not as a compliance checkbox but as a core strategic imperative.


Image by Wilfredor via Wikimedia Commons



