While the development of Artificial Intelligence for use in the legal sector has been underway for some time, its uptake has lagged behind other comparable industries devoted to transforming data into human-oriented solutions, such as healthcare and marketing. In recent years, however, law firms have begun looking toward machine learning for ways to streamline legal processes beyond simple automation — a process further accelerated by the industry's large-scale digitisation during COVID-19.
Currently, the legal sector primarily uses Artificial Intelligence for legal data processing, as smart automated tools, in the augmentation of legal research, and in predictive analytics. While the integration of new technologies brings great benefit to the legal industry, each function also entails its own unique ethical challenges.
Legal Data Processing
Artificial Intelligence, through the process of cognitive computing, can be used to extract, organise and analyse text, image, and speech data.
In law firms, this is commonly used in contract creation, during which Natural Language Processing (machine learning technology that enables computers to understand human language) assesses a proposed contract to highlight discrepancies and correct errors. Versions of this type of AI-based software have been developed by companies such as Lawgeex, Klarity, Clearlaw, and LexCheck.
NLP technology is also used in contract analytics, where key clauses are extracted and contextualised for business owners and stakeholders to aid compliance, ensure best practice, and flag contracts that require renewal. Software produced by companies such as Kira Systems, Seal Software, Lexion, Evisort, and Paperflip can also identify and automatically update clauses to adhere to new legislation.
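To make the idea concrete, the kind of clause flagging these tools automate can be sketched in a few lines. This is purely illustrative: the pattern names and regular expressions below are hypothetical stand-ins, and commercial products such as Kira Systems or Lexion use trained NLP models rather than hand-written rules.

```python
import re

# Hypothetical patterns for clause types that commonly trigger review.
# Real contract-analytics tools use trained language models, not regex.
CLAUSE_PATTERNS = {
    "auto_renewal": re.compile(r"automatically renew", re.IGNORECASE),
    "termination_notice": re.compile(r"\b(\d+)\s+days['’]? notice", re.IGNORECASE),
    "governing_law": re.compile(r"governed by the laws of ([A-Z][\w\s]+)", re.IGNORECASE),
}

def flag_clauses(contract_text: str) -> dict:
    """Return the clause categories found in a contract, with the matched text."""
    findings = {}
    for label, pattern in CLAUSE_PATTERNS.items():
        match = pattern.search(contract_text)
        if match:
            findings[label] = match.group(0)
    return findings

sample = (
    "This Agreement shall automatically renew for successive one-year terms "
    "unless either party gives 30 days' notice of termination. "
    "This Agreement is governed by the laws of England and Wales."
)
print(flag_clauses(sample))
```

Even at this toy scale, the output shows why such tools need broad access to contract text: every document must be machine-readable before any clause can be surfaced for review.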
While this process greatly decreases the margin of human error in data-centric tasks, it necessitates both the mass digitisation of paper documents and AI’s unhindered access to large volumes of data, much of which is confidential and personal. As the capabilities of such software continue to expand, safeguards for data protection and client confidentiality must be put in place for its sustained and ethical use.
Smart Automated Tools
As a simpler application of machine learning, AI can also be used to automate tasks by systematising data-centric processes.
This form of AI has already achieved widespread adoption in the legal sector, streamlining law firm administration. Most notably, smart time-recording software identifies billable hours to generate invoices automatically, produces client letters, and flags clients within a firm’s databases who may be affected by legislative changes. Lawyers arguably also have an ethical obligation to their clients to adopt such software, as it increases productivity and efficiency while decreasing billable hours, in their clients’ best interests.
Automation is also commonly used to create bespoke contracts, where the documents and information required for a specific context can be requested in an efficient "fill-in-the-blanks" format. With the integration of NLP and speech-recognition technology, automated contract-creation software can also be operated by non-lawyers through chatbots. While there is an ethical impetus to widen access to legal services through such means (especially for those who cannot afford legal advice), doing so also risks promoting the unauthorised practice of law.
Legal Research Augmentation
Like automation, AI-based research software relies heavily on NLP technology, whose language-learning capabilities enable machine learning algorithms to gain a semantic understanding of natural language data and queries.
Unconstrained by Boolean logic, programmes such as LexisNexis, ROSS, and Westlaw Edge analyse natural language search queries and return case files whose relevance extends beyond matching keywords. A similar iteration of this search algorithm is often used in e-disclosure, during which the relevance of documents is ranked according to context-specific precedent.
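The difference from Boolean keyword search can be illustrated with a minimal sketch. Here a toy synonym map stands in for the semantic knowledge that a trained language model would supply; the map, the scoring function, and the sample cases are all hypothetical.

```python
import math

# Toy synonym map, standing in for a language model's semantic knowledge.
SYNONYMS = {
    "car": {"car", "vehicle", "automobile"},
    "injury": {"injury", "harm", "damages"},
}

def expand(terms):
    """Expand query terms with their known synonyms."""
    expanded = set()
    for t in terms:
        expanded |= SYNONYMS.get(t, {t})
    return expanded

def score(query, document):
    """Rank by overlap between the expanded query and the document's words."""
    q = expand(query.lower().split())
    d = set(document.lower().split())
    return len(q & d) / math.sqrt(len(d))

cases = [
    "claimant suffered harm in a vehicle collision",
    "dispute over a commercial lease agreement",
]
ranked = sorted(cases, key=lambda c: score("car injury", c), reverse=True)
print(ranked[0])
```

A strict Boolean search for "car AND injury" would miss the first case entirely, since it contains neither word; the expanded query surfaces it because "vehicle" and "harm" are treated as semantically related.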
While the technology remains in its early stages, lawyers using these search engines must exercise due diligence to ensure their practice remains ethically sound. This includes gaining a sound familiarity with the capabilities and limitations of the Artificial Intelligence involved, as well as being alert to the possibility that algorithmic personalisation will lead to information bias and blind spots.
Predictive Legal Analytics
This process involves Artificial Intelligence inferring rules from databases of case results and predicting the outcomes of pending cases based on the precedents learned. This form of machine learning is, in many respects, the most complex, as its decisions often carry a moral element.
Predictive analytics is currently used in litigation to predict outcomes, plan strategy, speed up settlement negotiations, minimise the number of cases that go to trial, and evaluate litigation finances. Lex Machina and Blue J Legal are two such systems that analyse case precedent to provide insights on current cases.
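At its simplest, outcome prediction amounts to asking which past cases most resemble the pending one. The sketch below uses a nearest-neighbour vote over entirely hypothetical feature vectors; real systems such as Lex Machina draw on far richer litigation records and models.

```python
from collections import Counter

# Hypothetical feature vectors: (claim-value band, jurisdiction code,
# has written contract), each paired with a known outcome. Synthetic data.
past_cases = [
    ((2, 0, 1), "won"),
    ((2, 1, 1), "won"),
    ((0, 0, 0), "lost"),
    ((1, 1, 0), "lost"),
    ((2, 0, 0), "won"),
]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(pending, k=3):
    """Majority outcome among the k most similar past cases."""
    nearest = sorted(past_cases, key=lambda c: distance(c[0], pending))[:k]
    votes = Counter(outcome for _, outcome in nearest)
    return votes.most_common(1)[0][0]

print(predict((2, 0, 1)))  # resembles the "won" cases
```

The sketch also makes the ethical problem visible: the prediction is nothing more than a restatement of past outcomes, so any bias in the historical record is reproduced verbatim.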
Despite the technology’s potential, its foundation in case precedent has proven problematic: it appears to be automating the very systemic biases that contemporary legal practice is endeavouring to offset. The algorithms have absorbed decades of racial and gender bias embedded in past litigation — a ProPublica investigation into criminal risk-assessment tools found that Black defendants who did not reoffend were wrongly labelled as likely reoffenders 45 percent of the time, compared with 24 percent of their white counterparts.
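Disparities of the kind ProPublica reported can be measured directly. The audit metric below, the false positive rate per group, is a standard fairness measure; the records themselves are entirely synthetic, included only to show the calculation.

```python
def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` who were labelled high risk."""
    non_reoffenders = [r for r in records
                       if r["group"] == group and not r["reoffended"]]
    flagged = [r for r in non_reoffenders if r["labelled_high_risk"]]
    return len(flagged) / len(non_reoffenders)

# Entirely synthetic records, for illustration only.
records = [
    {"group": "A", "reoffended": False, "labelled_high_risk": True},
    {"group": "A", "reoffended": False, "labelled_high_risk": True},
    {"group": "A", "reoffended": False, "labelled_high_risk": False},
    {"group": "B", "reoffended": False, "labelled_high_risk": True},
    {"group": "B", "reoffended": False, "labelled_high_risk": False},
    {"group": "B", "reoffended": False, "labelled_high_risk": False},
    {"group": "B", "reoffended": False, "labelled_high_risk": False},
]
print(false_positive_rate(records, "A"))
print(false_positive_rate(records, "B"))
```

A gap between the two rates, as in the synthetic example above, is precisely the kind of disparity that transparency and auditing requirements are meant to surface.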
As this predictive software develops, there will likely be attempts to correct its inherent biases. Additional steps must also be taken to increase the transparency of its processes, and the public should likewise be educated about the fallibility of supposedly impartial artificial intelligence.
With the full integration of AI into modern legal practices both inevitable and imminent, the industry must welcome innovation while safeguarding its ethical integrity through rigorous testing, monitoring, and regulation.