In 2023, the proliferation of artificial intelligence (A.I.) technology gave rise to various complications. Generative A.I.'s effects on copyright law stood out, along with A.I.-related cybersecurity concerns over data protection. Nonetheless, there are additional, less conspicuous implications of A.I. advancement, notably in the biometrics sector, which have gone unnoticed by some.
The biometrics industry readily incorporated A.I. developments into its methods, revolutionising identity verification. A.I. is now used in face, fingerprint, and iris recognition, as well as in behaviour and emotion detection. A.I.'s widespread adoption is attributed to its ability to overcome long-standing false-acceptance and false-rejection problems in facial recognition software. A.I.'s capacity to draw on large data pools for error reduction, complex classification, and analysis has proven invaluable for biometric software.
Yet A.I.-powered biometric identification methods are still in development and are not without vulnerabilities. A.I. can be turned against itself to “deep fake” the system: A.I.-generated false digital fingerprints, 3D-printed masks, and voice alteration can all trick A.I. software. In addition, hackers can prey on A.I. tools by poisoning their data samples with inaccurate information.
Biometric security, or insecurity, is relevant in both the public and private sectors. Biometric data underpins the increasingly common digital visa, whose use has direct impacts on national security. On October 18th, the European Parliament voted for the digitalisation of visas into biometric form. First-time applicants will have their fingerprints and facial biometrics collected, beginning on a voluntary basis in 2025, though mandatory use of biometrics is likely to begin in the 2030s. The move comes as the EU attempts to improve the safety and security of border crossings by eliminating visa falsification, while also making the acquisition of a visa more cost-efficient.
Yet given the insecurity of biometric software, its increased use may carry costs that outweigh its benefits. Unclear employer liabilities, regulatory grey areas around unproven technology, and the ethical issues surrounding racial discrimination and surveillance through biometrics are just a few of the possible legal issues involved in increased biometrics usage, and all of them could be further exacerbated by A.I. Some countries have taken the initiative to explore regulatory measures that could address these issues. For instance, on October 11 the Canadian privacy commissioner launched a public consultation on “new draft guidance on biometric technologies”, which includes important information on privacy obligations and on how to handle biometric information.
Consultations like this are also relevant to the private sector, which is not immune to the challenges posed by increased A.I. use, as demonstrated by the Clearview A.I. settlement in May 2022. Clearview A.I. Inc. faced a £7.5 million fine from the U.K. Information Commissioner’s Office (ICO) for creating a database of 20 billion facial images without the consent of the individuals pictured. The company sourced these images from the internet and social media platforms to build a facial recognition service that allowed users to find related images of individuals based on the photos they submitted through the app. The U.K. sought jurisdiction because of the large number of images gathered from U.K. residents, which, in the ICO’s view, breached U.K. data protection laws. The U.K. is not the only nation to have rejected this breach of privacy by Clearview; France, Greece, and Italy have all penalised the company. Not only is our biometric data potentially unsafe in the hands of those we voluntarily offer it to, but it can also be taken without our permission using A.I.
However, a three-person U.K. tribunal has since upheld Clearview’s appeal of the decision, finding that the fine fell beyond the ICO’s purview. Nonetheless, the legality of the software itself was not assessed, and no new guidance has been developed for future cases such as this. For its part, Clearview responded to the criticism and altered its data collection methods.
The confluence of A.I. and biometrics is complicated, with implications that extend beyond privacy and security. Navigating this development will require numerous public consultations to build robust regulatory frameworks, which must in turn weigh a range of ethical issues.