How has the European Commission proposed we regulate live facial recognition technologies?
Facial recognition is increasing in popularity globally, including in the UK, and as a result is becoming a growing concern for regulators. In the UK, it has been used by South Wales Police at large sporting events[1] and is currently being used by the Met Police for crime detection purposes and to assist officers in making arrests[2]. The High Court in Cardiff ruled in September 2019 that the use of facial recognition technology by South Wales Police was lawful[3]. Despite this ruling, it is likely we will see some form of regulation on the use of live facial recognition technology by police forces. On 19 February 2020, the European Commission (the “Commission”) published a white paper setting out its proposals for regulating AI and how it intends to achieve them (the “White Paper”). Before the White Paper was published, many thought the Commission would propose a moratorium on the use of facial recognition technology in public spaces.[4] Instead, the Commission has suggested regulating the software behind the technology that police forces rely on.
The White Paper begins by setting out policy proposals for promoting the uptake of AI, twinned with proposals for a regulatory framework focused on high-risk AI. Police forces using live facial recognition to tackle crime would likely be governed by this framework. Live facial recognition technology makes it possible to map individuals’ faces using CCTV cameras and create a biometric template, a mathematical representation of an individual’s facial features. This biometric template is then compared against a database or police watchlist, and the system returns a match result indicating the likelihood that the two images show the same person[5]. Large amounts of data (containing individuals’ faces) are used to teach a computer how to detect a face and tell different individuals apart[6]. Live facial recognition is also referred to as remote biometric identification.
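To make the matching step concrete, here is a minimal sketch in Python. It assumes templates are numeric embedding vectors compared by cosine similarity; the watchlist entries, the 128-dimensional templates and the 0.6 threshold are all hypothetical illustrations, not details of any operational system.

```python
import numpy as np

def cosine_similarity(a, b):
    """Compare two biometric templates (face embedding vectors)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_against_watchlist(probe, watchlist, threshold=0.6):
    """Return the best watchlist match and its confidence score.

    `probe` is the template extracted from a live CCTV frame and
    `watchlist` maps identities to stored templates. The 0.6
    threshold is an arbitrary illustrative value, not an
    operational setting.
    """
    best_id, best_score = None, -1.0
    for identity, template in watchlist.items():
        score = cosine_similarity(probe, template)
        if score > best_score:
            best_id, best_score = identity, score
    if best_score >= threshold:
        return best_id, best_score  # candidate match, with confidence
    return None, best_score         # below threshold: no alert

# Hypothetical example with random 128-dimensional templates.
rng = np.random.default_rng(0)
watchlist = {"person_a": rng.normal(size=128), "person_b": rng.normal(size=128)}
probe = watchlist["person_a"] + rng.normal(scale=0.1, size=128)
print(match_against_watchlist(probe, watchlist))
```

The point of the design is that the system produces a likelihood score rather than a bare yes/no, which is what the White Paper’s transparency requirements (discussed below) would build on.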
The Commission appointed 52 experts (comprising representatives from academia, interested business groups and government) to a ‘high-level expert group’ on artificial intelligence[7]. In April 2019, the high-level expert group published guidelines setting out seven requirements that AI systems should adhere to in order to be deemed “trustworthy”. Trustworthy AI complies with all applicable laws and regulations, respects ethical principles and values, and is technically robust[8]. The Commission used these non-binding guidelines to suggest key requirements that a future regulatory framework for AI could include:
- Training data: The effectiveness of an AI system depends heavily on the quality of the data sets used to train the algorithm. The Commission suggests the proposed requirements could ensure data sets used to train AI:
- are sufficiently broad to cover all relevant scenarios to ensure safety,
- are sufficiently representative to avoid discrimination,
- respect privacy and personal data.
- Data and record keeping: A regulatory framework that ensures developers keep accurate records and documentation on how the AI system was trained could help trace any problematic actions or decisions and increase accountability (see the sketch after this list).
- Information provision: providing transparent information on an AI system’s capabilities and limitations, and ensuring citizens are clearly informed when they are interacting with an AI system, could help build citizens’ trust and give authorities more visibility on how issues arose within the AI system.
- Robustness and accuracy: AI systems should be technically robust to ensure they behave in the manner intended.
- Human oversight: ensuring that humans have some involvement either before or after a decision is made by the AI will make sure humans ultimately stay in control and can hopefully limit adverse effects.
- Specific requirements for certain AI applications, such as those used for the purposes of remote biometric identification: for the reasons discussed below, using live facial recognition technology in public carries very specific risks, and for this reason the Commission suggests a broad debate on whether public authorities should be using it for this purpose.
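The record-keeping requirement is perhaps the easiest to picture in practice. Below is a minimal sketch, assuming a simple append-only JSON Lines audit log; the field names and values are illustrative, not drawn from the White Paper.

```python
import json
from datetime import datetime, timezone

def log_training_run(path, dataset_name, dataset_version, model_version, notes):
    """Append one record describing how a model version was trained,
    so problematic decisions can later be traced back to the data and
    code that produced them. All field names here are hypothetical."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset": dataset_name,
        "dataset_version": dataset_version,
        "model_version": model_version,
        "notes": notes,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_training_run(
    "training_audit.jsonl",
    dataset_name="face_corpus",
    dataset_version="2020-01",
    model_version="frt-1.3",
    notes="Rebalanced dataset to improve demographic representativeness.",
)
```

An append-only log along these lines would give a regulator or investigator a trail from a contested decision back to the model version and training data that produced it.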
The White Paper suggests that any future regulatory regime should apply only to high-risk AI. The Commission proposes regulating the use of AI in certain sectors, or for certain purposes, because the use of AI in these circumstances has a greater chance of jeopardising individuals’ rights. The Commission suggests regulating AI when it is used in a risky sector (such as healthcare, transport and certain public services) and when it is used in a risky way. The Commission also recognises that certain uses of AI should always be considered high-risk, regardless of the sector, for example when AI is used to remotely identify individuals or to make recruitment decisions.
The White Paper proposes limiting the scope of the regulatory regime to high-risk AI because, in the majority of cases where AI is used, there is little or no risk to individuals’ rights. Google Lens, for example, is an AI-powered technology that detects an object, understands what it is and offers feedback[9]; private-sector uses of AI like this are generally not risky for users. The use of AI by police forces, however, risks infringing fundamental rights such as privacy and non-discrimination. It is therefore important that these AI systems are reliable and accurate.
Police forces often justify the risks of relying on this technology by pointing to its potential to reduce crime rates. The Met Police are currently using live facial recognition technology for this reason[10]. The system used by the Met Police is similar to the one described above: when the system matches an individual (who has walked in front of a CCTV camera) against a biometric template on a watchlist, it sends an alert to an officer at the scene. The officer then compares the camera image to the individual and decides whether to speak to them. The Met Police trialled live facial recognition technology in London between 2016 and 2019, across 10 separate trials[11], and have now begun operational use in different locations across London[12]. However, there are concerns that the technology is not producing accurate results[13]. An independent review evaluated the final six trials run by the Met Police; across these six trials the live facial recognition technology made 42 matches, of which the report authors could verify only 8 as correct[14]. This low level of accuracy means the technology could incorrectly identify a civilian, who has never committed a crime, as being on a watchlist. This risk to individuals is why the Commission suggests regulating high-risk AI, and not all uses of AI.
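To put those figures in context, a quick worked calculation using only the numbers reported by the review shows how low the verified precision was:

```python
# Figures from the independent review of the final six Met Police trials.
total_alerts = 42     # matches the system generated
verified_correct = 8  # matches the reviewers could confirm as correct

precision = verified_correct / total_alerts
print(f"Verified precision: {precision:.0%}")  # prints: Verified precision: 19%
```

In other words, roughly four out of five alerts could not be verified as correct matches.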
The White Paper’s requirements, if implemented, would try to ensure that the software police forces are relying on is (see the sketch after this list):
- Transparent about the accuracy of its results. This means that if the system is only 60% confident that a video of an individual, taken on CCTV, is a match to an individual on a watchlist, it informs the police officer of this level of confidence.
- Consistent in its results. This means that if the system was 90% confident that individual A is on a watchlist on a certain day, it is still 90% confident that the same individual is on the watchlist on a different day; the level of confidence has not changed.
- Protected against overt attacks and subtle manipulation, for example by adopting ‘anti-spoofing’ systems that prevent fraudsters from confusing the AI into producing inaccurate results[15] using masks, sculptures or prints.
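The first two requirements amount to alerts that always carry, and consistently report, the system’s confidence score. A toy sketch of what that could look like follows; the field names and thresholds are hypothetical, not taken from any real deployment.

```python
from dataclasses import dataclass

@dataclass
class MatchResult:
    """An alert that always carries its own confidence score, so the
    officer sees the system's uncertainty rather than a bare
    'match'/'no match'. Fields and values are illustrative."""
    watchlist_id: str
    confidence: float  # e.g. 0.60 means 60% confident

def alert_officer(result, alert_threshold=0.5):
    """Format an alert for the officer at the scene, surfacing the
    confidence level as the White Paper's transparency requirement
    would demand. The 0.5 threshold is an arbitrary example."""
    if result.confidence < alert_threshold:
        return "No alert raised."
    return (f"Possible match with {result.watchlist_id} "
            f"(confidence {result.confidence:.0%}). "
            "Officer must verify before acting.")

print(alert_officer(MatchResult("person_a", 0.60)))
```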
The accuracy of live facial recognition technology, such as that deployed by the Met Police, is not fully known, and to our knowledge there is no regulation in place ensuring a minimum level of accuracy. Some worry that the White Paper fails to address the fundamental concerns with how police forces are using these technologies[16], but suggesting a set of standards to ensure accuracy, robustness and transparency is a reasoned response from the Commission and would be a useful starting point in ensuring that individuals can trust the technology recording them.
[1] https://www.theguardian.com/technology/2020/jan/12/anger-over-use-facial-recognition-south-wales-football-derby-cardiff-swansea
[2] https://www.telegraph.co.uk/news/2020/02/27/scotland-yard-make-first-arrest-using-live-facial-recognition/
[3] https://www.judiciary.uk/wp-content/uploads/2019/09/bridges-swp-judgment-Final03-09-19-1.pdf
[4] https://www.euractiv.com/section/digital/news/leak-commission-considers-facial-recognition-ban-in-ai-white-paper/
[5] https://fra.europa.eu/en/publication/2019/facial-recognition-technology-fundamental-rights-considerations-context-law
[6] https://www.theguardian.com/technology/2019/jul/29/what-is-facial-recognition-and-how-sinister-is-it
[7] https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence
[8] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai
[9] https://www.pocket-lint.com/apps/news/google/141075-what-is-google-lens-and-how-does-it-work-and-which-devices-have-it
[10] https://www.met.police.uk/advice/advice-and-information/facial-recognition/live-facial-recognition/
[11] https://www.wired.co.uk/article/london-met-police-facial-recognition
[12] http://news.met.police.uk/news/met-begins-operational-use-of-live-facial-recognition-lfr-technology-392451
[13] https://www.bbc.co.uk/news/uk-51237665
[14] https://www.essex.ac.uk/news/2019/07/03/met-police-live-facial-recognition-trial-concerns
[15] https://towardsdatascience.com/anti-spoofing-techniques-for-face-recognition-solutions-4257c5b1dfc9
[16] https://qz.com/1805847/facial-recognition-ban-left-out-of-the-eus-agenda-to-regulate-ai/