How have the European Commission proposed we regulate live facial recognition technologies?
Facial recognition is increasing in popularity globally, including in the UK, and as a result is becoming a growing concern for regulators. In the UK, it has been used by South Wales Police at large sporting events and is currently being used by the Met Police for crime detection and to assist in making arrests. The High Court in Cardiff ruled in May 2019 that the use of facial recognition technology by South Wales Police was lawful. Despite this ruling, it is likely we will see some form of regulation of the use of live facial recognition technology by police forces. On 29 January 2020, the European Commission (the “Commission”) published a white paper setting out its proposals for areas of AI regulation and how to achieve them (the “White Paper”). Before the White Paper was published, many thought the Commission would propose a moratorium on the use of facial recognition technology in public spaces. Instead, the Commission has suggested regulating the software behind the technology that police forces rely on.
The White Paper begins by setting out policy proposals for promoting the uptake of AI, twinned with proposals for a regulatory framework focused on high-risk AI. Police forces using live facial recognition to tackle crime would likely be governed by this framework. Live facial recognition technology makes it possible to map individuals’ faces using CCTV cameras and create a biometric template: a mathematical representation of an individual’s facial features. This biometric template is then compared against a database or police watchlist, and the system returns a match result indicating the likelihood that the two images show the same person. Large amounts of data (containing individuals’ faces) are used to teach a computer how to detect a face and tell different individuals apart. Live facial recognition is also referred to as remote biometric identification.
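The matching step described above can be sketched in a few lines of code. This is a minimal illustration, not the system any police force actually uses: the templates are plain lists of numbers, the similarity measure (cosine similarity) and the 0.8 threshold are assumptions chosen for the example, and the function names are our own.

```python
import math

def cosine_similarity(a, b):
    # Compare two biometric templates, here represented as plain
    # lists of floats (a real system would use learned embeddings).
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(probe, watchlist, threshold=0.8):
    # Return the watchlist entry most similar to the probe template,
    # with its similarity score, or None if nothing clears the
    # (illustrative) threshold.
    best_name, best_score = None, threshold
    for name, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_name, best_score = name, score
    return (best_name, best_score) if best_name else None
```

The key point for regulators is the final line: the system does not know whether two faces are the same person, it only produces a score, and everything downstream depends on where the threshold is set.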
The Commission appointed 52 experts (comprising representatives from academia, interested business groups and government) to a ‘high-level expert group’ on artificial intelligence. In April 2019, the high-level expert group published guidelines setting out the 7 requirements that AI systems should adhere to in order to be deemed “trustworthy”. Trustworthy AI complies with all applicable laws and regulations, respects ethical principles and values, and is technically robust. The Commission used these non-binding guidelines to suggest key requirements that a future regulatory framework for AI could have.
- Training data: The effectiveness of an AI system depends heavily on the quality of the data sets used to train the algorithm. The Commission suggests the proposed requirements could ensure data sets used to train AI:
- are sufficiently broad to cover all relevant scenarios to ensure safety,
- are sufficiently representative to avoid discrimination,
- respect privacy and personal data.
- Data and record keeping: A regulatory framework that ensures developers keep accurate records and documentation on how the AI system was trained could help trace any problematic actions or decisions and increase accountability.
- Information provision: providing transparent information on an AI system’s capabilities and limitations, and ensuring citizens are clearly informed when they are interacting with an AI system, could help build citizens’ trust and give authorities more visibility on how issues arose within the AI system.
- Robustness and accuracy: AI systems should be technically robust to ensure they behave in the manner intended.
- Human oversight: ensuring that humans have some involvement either before or after a decision is made by the AI will make sure humans ultimately stay in control and can hopefully limit adverse effects.
- Specific requirements for certain AI applications, such as those used for the purposes of remote biometric identification: for the reasons discussed below, using live facial recognition technology in public carries very specific risks, and for this reason the Commission suggests a broad debate on whether public authorities should be using it for this purpose.
The White Paper suggests that any future regulatory regime should apply only to high-risk AI. The Commission proposes regulating the use of AI in certain sectors, or for certain purposes, because the use of AI in these circumstances has a greater chance of jeopardising individuals’ rights. The Commission suggests regulating AI when it is used in a risky sector (such as healthcare, transport and certain public services) and when it is used in a risky way. The Commission also recognises that certain uses of AI should always be considered high-risk, regardless of the sector, for example when AI is used to remotely identify individuals or make recruitment decisions.
The White Paper proposes limiting the scope of the regulatory regime to high-risk AI because, in the majority of cases, the use of AI poses little or no risk to individuals’ rights. Google Lens, for example, is an AI-powered technology that detects an object, understands what it is and offers feedback; using AI in the private sector in ways such as this is generally not risky for users. However, the use of AI by police forces risks infringing fundamental rights such as privacy and non-discrimination. It is therefore important that these AI systems are reliable and accurate.
Police forces often justify the risk of relying on this technology with the benefit that it could help reduce crime rates. The Met Police is currently using live facial recognition technology for this reason. The system used by the Met Police is similar to the one described above: when the system matches an individual (who has walked in front of the CCTV camera) with a biometric template on a watchlist, it sends an alert to an officer at the scene. The officer then compares the camera image to the individual and decides whether to speak to them. The Met Police trialled live facial recognition technology in London between 2016 and 2019, across 10 separate trials, and has now begun operational use at different locations across London. However, there are concerns that the technology is not producing accurate results. An independent review evaluated the final six trials run by the Met Police; across these six trials the live facial recognition technology made 42 matches, of which the report’s authors could verify only 8 as correct. This low level of accuracy means the technology could incorrectly identify a civilian who has never committed a crime as being on a watchlist. This risk to individuals is why the Commission suggests regulating high-risk AI, and not all uses of AI.
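To put the review’s figures in perspective, only 8 of the 42 matches could be verified as correct, which works out to a verified match rate of roughly 19% (the review could not confirm the remainder either way, so this is a floor on uncertainty rather than a precise error rate):

```python
# Figures reported by the independent review of the Met Police's
# final six trials: 42 matches made, 8 verified as correct.
total_matches = 42
verified_correct = 8

verified_rate = verified_correct / total_matches
print(f"Verified matches: {verified_rate:.0%} of alerts")  # roughly 19%
```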
The White Paper’s requirements, if implemented, would aim to ensure the software police forces rely on is:
- Transparent about the accuracy of its results. This means that if the system is only 60% confident that a video of an individual, taken on CCTV, matches an individual on a watchlist, it informs the police officer of this level of confidence.
- Consistent in its results. This means that if the system was 90% confident that individual A is on a watchlist on one day, it is still 90% confident that the same individual is on the watchlist on a different day; the level of confidence does not change.
- Protected against overt attacks and subtle manipulation, for example by adopting ‘anti-spoofing’ systems which prevent fraudsters from confusing the AI into producing inaccurate results using masks, sculptures or prints.
The accuracy of live facial recognition technology, such as that deployed by the Met Police, is not fully known. To our knowledge, there is no regulation in place ensuring a minimum level of accuracy. Some worry that the White Paper fails to address the fundamental concerns with how police forces are using these technologies, but suggesting a set of standards to ensure accuracy, robustness and transparency is a reasoned response from the Commission and would be a useful starting point in ensuring that individuals can trust the technology recording them.