Biometrics in banking and payments (…but first let me take a selfie)
We are all used to using passwords, for lots of different things, but in truth they are an inconvenience at best.
Passwords are a user experience nightmare for bank customers and the wish to avoid going through endless “forgotten password” loops leads customers to compromise their own security by using the same password for multiple services and choosing overly simple combinations – alarmingly, the most commonly used password is “123456”[i]. Therefore, “if they are hacked, the intruder has access to almost all their data.”[ii] So if, for many people, the only way to retain easy access to multiple services in an increasingly online world is to heavily compromise their own security, one has to ask whether passwords are really fit for purpose. And it is not just individual failings that bring the efficacy of passwords into doubt. They can be brute-forced by determined hackers, but more commonly are simply stolen in large numbers from online services – as recent thefts from Dropbox and Yahoo have demonstrated. In both cases, millions of sets of user credentials were stolen and subsequently made available for purchase on the dark web.
It is not surprising that fraud in this area is a significant issue for banks; in fact, cybersecurity is now the top risk for every bank in the country. Citibank currently sets aside $400 million each year for losses it expects to incur as a result of fraud.[iii] This has created an enormous market for start-ups seeking to disrupt the sector. If a bank can cut its fraud-related losses by just 10 or 20%, the volume of money in play is extremely significant. Given this, and the surge in mobile banking and payments, it is not surprising that biometric technology is beginning to take over from traditional passwords.
Use of biometrics
The term biometrics describes the use of unique physical features such as a fingerprint or retina as a method of authenticating a user’s identity. These systems don’t require the user to remember anything, and biometric data is harder to use than passwords in the event that it is stolen. Generally speaking, for these reasons biometrics are perceived to be more secure than passwords, and as such we are seeing biometric authentication entering the mainstream.
The most famous example is perhaps Apple’s Touch ID, allowing users to unlock their phone, purchase apps and even log onto online banking apps such as Nationwide and HSBC. Touch ID is popular; it is convenient and there is no risk of a forgotten password – a problem the average user encounters once a week. There are, however, issues. Apple currently refuses to release its near field communication (NFC) [for more information on NFC click here] data to anyone, as doing so would, “compromise the security of its platform.”[iv] This is causing conflict in Australia, where Apple refused to let several banks access the NFC radio and build contactless payment systems on top of the iPhone. It has also created a global need for companies to implement their own fingerprint and other biometric methods of verification via mobile apps. One such company, Onegini, found that the number of transactions through banking apps rose by 100% when users could elect to authorise payments with their fingerprint.
Elsewhere in financial services, after surveying 10,000 consumers and discovering that forgotten passwords led to 33% of people abandoning purchases and 66% missing out on limited items such as event tickets[v], MasterCard now allows users to verify their identities using a selfie. To utilise Selfie Pay, the image data from an enrolment selfie is used to create a unique code, which is then compared with the encrypted data from the selfie taken when making a purchase. The user must also blink, so as to prove that a hacker is not simply using a photograph to trick the system.
Both Barclays and HSBC plan to increase their use of voice recognition so as to speed up the security clearing process for telephone banking. As well as being more convenient for customers, this also reduces the time taken to deal with telephone queries, and therefore reduces call centre costs for the bank.
In addition to common biometrics (fingerprint, face and voice), companies have begun to explore more obscure possibilities. Fujitsu has developed “palm technology”, using infrared cameras to measure the oxygen reduction in blood as it returns to the heart, making the veins in the palm visible. While this technology is too expensive to be used in mobile phones, its contactless nature gives it global potential in the health sector.
The above are examples of biological biometrics (face, finger, voice etc); however, behavioural biometrics also play a role. The Spanish bank Cecabank is now completely paperless and, as part of this process, adopted the use of biometric signatures. The technology is extremely detailed, capturing even the pressure and flight of the pen. To date the bank has faced only two legal claims in relation to its implementation of the digital signature.
Given banks’ never-ending quest to improve the customer experience and reduce their cost base around customer interactions, together with the growing prevalence of biometric sensors in popular consumer hardware, it is reasonable to expect other financial institutions to follow suit and for other even more innovative forms of biometric identification to become apparent in the near future.
The ‘spoofability’ of particular biometrics in banking
Although biometrics boast a number of advantages over passwords, the technology does have its downsides. In the same way that passwords can be stolen, so too can biometric data. For example, when the Office of Personnel Management was hacked in the United States, criminals stole the fingerprints of some 5.6 million US government employees. Unlike a password, you can’t change your fingerprint: those fingerprints and identities are forever compromised.
In common with other types of authentication such as passwords, biometrics compare an input (for example a user’s fingerprint or a scan of their retina) with a base document or record held on file to check whether the two match. There are several ways that such “static” biometrics can be spoofed. An imprint of the fingerprint could be stolen, and presented at the point of authentication in place of the real thing. Alternatively, if a hacker can change the base document, then the authentication system could be made to think that another person’s fingerprint is that of the authorised user.
In addition, biometric data, unlike traditional passwords, cannot be ‘hashed’. Hashing can be explained using four characteristics as follows:
- hash values have a fixed number of digits;
- if the same password is hashed, the same hash value is delivered;
- if passwords are even slightly different the hashed values will be totally different; and
- it is nearly impossible to reconstruct a password from its hash value.
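The four characteristics above can be demonstrated in a few lines of code. This is an illustrative sketch using SHA-256 (real password storage would also add salting and key stretching, e.g. bcrypt, but the four properties hold either way):

```python
import hashlib

def hash_password(password: str) -> str:
    # SHA-256 always produces a 64-hex-digit value, whatever the input length.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

a = hash_password("123456")
b = hash_password("123456")
c = hash_password("123457")   # differs by a single character

assert len(a) == 64   # fixed number of digits
assert a == b         # same password, same hash value
assert a != c         # slightly different passwords, totally different hashes
# ...and reconstructing "123456" from the value of `a` is computationally infeasible
```

It is precisely the second property – identical input always yields an identical hash – that a noisy fingerprint scan can never satisfy.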
Given the nature of fingerprints, they cannot be cryptographically hashed. The finger will press the scanner in a slightly different way, angle or position each time, and residue on the finger or the scanner can interfere with the sensor, so the hashed data would be impossible to compare to a stored template. Similar problems exist with facial recognition; Mastercard’s Selfie Pay app has been spoofed by people taking photographs and animating them by drawing on eyelids. It may be for similar reasons that Google has admitted on its website that facial recognition is “less secure than a pattern, PIN or password”[vi].
This is where “dynamic” forms of biometrics, such as voice recognition or personal typing patterns – the modern equivalent of handwriting analysis – may well be a safer option, and companies like Nuance and Behaviosec have made great strides in this area. Whereas a fingerprint can be spoofed just by presenting it in isolation at the right time, it is far harder both to mimic a person’s voice quality and to do so in a way that is responsive to the context at the time of the security check (a recording of the fraud victim’s voice saying the same phrase over and over again in a loop is unlikely to get past the most basic of security checks). Similarly, it would be very difficult to accurately imitate the way another person types, and to do so accurately at a speed that would not be suspect. In addition, with typing analysis of this type, authentication can run continuously in the background, and can be used in fraud detection: just as it can identify the real person, it can also be used to positively identify the fraudster. But even these forms of “dynamic” authentication are not impregnable: if the “yes” signal emanating from a successful comparison can itself be spoofed, then the battle is lost no matter what input is used.
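To make the typing-pattern idea concrete, here is a minimal sketch of keystroke-dynamics matching. All names and the threshold are hypothetical; commercial systems such as those mentioned above use far richer statistical models, but the underlying comparison – an enrolled rhythm against a live sample – is the same:

```python
def mean_abs_deviation(enrolled, sample):
    """Average absolute difference between two inter-keystroke timing vectors (ms)."""
    return sum(abs(e - s) for e, s in zip(enrolled, sample)) / len(enrolled)

def matches_profile(enrolled, sample, threshold_ms=40.0):
    # Accept only if the live typing rhythm deviates from the enrolled
    # profile by less than the threshold on average.
    return mean_abs_deviation(enrolled, sample) < threshold_ms

enrolled = [120, 95, 180, 140, 110]   # ms between successive keystrokes at enrolment
genuine  = [125, 90, 175, 150, 105]   # same user, natural variation
imposter = [200, 60, 90, 230, 300]    # different user typing the same text

print(matches_profile(enrolled, genuine))    # True
print(matches_profile(enrolled, imposter))   # False
```

Because the comparison is statistical rather than exact, it tolerates the natural variation that defeats hashing – which is exactly why dynamic biometrics can run continuously in the background.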
Biometrics and the GDPR
The US government fingerprint theft example above throws into sharp focus the need to protect this source data appropriately, which is why biometric data is expressly included in the GDPR as a “special category” of personal data. The provisions around processing sensitive data in the GDPR are broadly similar to those contained in the Data Protection Directive, although it should be noted that under Article 9(4) of the GDPR, member states have the right to impose further conditions or limitations on sensitive data such as biometric, health or genetic data. It is therefore reasonable to expect that national differences on rules around the processing of such data will remain, and banks will need to pay close attention to any UK amendments in this area.
Consent is likely to be the most prevalent provision relied upon in relation to banking and payments, and processors should ensure that:
- each time the data is processed, a separate consent is granted;
- the consent in question is not contingent on other factors;
- the consent is clear and not commingled with other information;
- silence does not amount to consent; and
- data subjects are made aware of their right to withdraw consent at any time.
At a time when financial fraud in the first half of 2016 reached a value of almost £400 million[vii], there can be little doubt that increasing the standard of security offered by banks has to be a major focus. With personal information exposed in data breaches increasingly being exploited as the basis for fraud, assuring the identity of customers has never been more central to the fight to reduce losses. The advent of the GDPR will place even greater emphasis on the need to process and store customer data securely: failure to do so could result in significant maximum fines (4% of global turnover) not to mention severe reputational damage and remediation costs.
Biometrics and PSD2
Under the Second Payment Services Directive (PSD2), which has to be transposed into national law by 13 January 2018, there is a requirement for ‘strong customer authentication’ when users access their account online, initiate electronic payment transactions or carry out any action through a remote channel which implies a risk of payment fraud or other abuses. The definition of strong customer authentication is covered by Article 4(30) of PSD2 and states that authentication must be based on two or more of the following independent elements:
- knowledge (something only the user knows);
- possession (something only the user possesses); and/or
- inherence (something the user is).
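The two-of-three rule lends itself to a very simple check. The following is an illustrative sketch (the function and element names are our own, not from the Directive):

```python
# The three independent element categories from Article 4(30) of PSD2.
SCA_ELEMENTS = {"knowledge", "possession", "inherence"}

def is_strong_authentication(presented: set) -> bool:
    """True if at least two distinct, recognised elements are presented."""
    return len(presented & SCA_ELEMENTS) >= 2

print(is_strong_authentication({"inherence"}))                # False: a biometric alone
print(is_strong_authentication({"inherence", "possession"}))  # True: fingerprint + phone
```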
Biometric data clearly constitutes ‘inherence’: it is data about what the user is. However, inherence alone is not enough. This issue can be resolved to an extent when biometrics are used via mobile payments as the user’s phone constitutes ‘possession’ and therefore fulfils the required second element.
Biometrics is likely to continue to impact the payments marketplace, and financial institutions will seek opportunities to make payments easier for customers whilst bolstering their fraud protection. The European Banking Authority (EBA) has been tasked with developing the detailed requirements for strong customer authentication. Its draft paper has been released, with the final standards expected to be published in January 2017. The draft Regulatory Technical Standards (RTS) show an appreciation of the need for payers to be aware of how much money they are sending and to whom – something easily overlooked in an environment where biometric technology seeks to make payments quicker, easier and less attention-demanding. The EBA has introduced the principle of dynamic linking, which requires that:
- the payer be aware at all times of the amount of the transaction and the payee;
- the authentication code generated for each payment be specific to the transaction and payee; and
- the underlying technology ensures confidentiality, authenticity and integrity of the amount of transaction and payee and the information displayed to the payer through all phases of the authentication procedure.
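One common way to implement dynamic linking is to derive the authentication code from the transaction details themselves, for example with an HMAC. The sketch below is illustrative only (the key handling is a placeholder; real schemes also bind in session and time data), but it shows why tampering with the amount or the payee invalidates the code:

```python
import hmac
import hashlib

def authentication_code(key: bytes, amount: str, payee: str) -> str:
    # The code is computed over the amount and payee, so it is specific
    # to this transaction and this recipient.
    message = f"{amount}|{payee}".encode("utf-8")
    return hmac.new(key, message, hashlib.sha256).hexdigest()

key = b"session-key-issued-by-the-bank"   # placeholder secret
code = authentication_code(key, "250.00", "ACME Ltd")

# The same amount and payee reproduce the code...
assert code == authentication_code(key, "250.00", "ACME Ltd")
# ...but any change to the amount or the payee breaks it.
assert code != authentication_code(key, "999.00", "ACME Ltd")
assert code != authentication_code(key, "250.00", "Eve Ltd")
```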
The draft RTS have come under recent criticism. The European Parliament’s PSD2 negotiating team argues that it is not clear whether the strong customer authentication exemptions should be regarded as optional or mandatory. This creates a problem: “according to the draft RTS no risk-based analysis would be possible outside of the very narrow set of exemptions listed by the EBA”, which is inconsistent with the Level 1 legislation, where no such restriction is foreseen.[viii]
What is clear is that, as regulation around payments becomes more stringent, customer authentication may in future require all three of the elements mentioned above. Should this occur, biometrics combining both biology and behaviour are likely to surge in popularity: for example, a voice-based authentication system might require users to speak (inherence) their password (knowledge) into their own phone (possession) in order to validate a payment.
For all the advances in technology and market penetration in recent years, biometrics is still really in its infancy as the market wakes up to the authentication capabilities of the supercomputers that so many of us carry around in our pockets. So far the only big users of biometric data have been banks, government institutions – and Apple / Samsung, all of whom have a generally very good record around confidentiality. Banks in particular are of course very used to holding sensitive data, and in practice many of them may well choose to protect biometric data in much the same way as credit card data (think PCI DSS). However, as the use of biometric data becomes more commonplace, it could well be that other types of services, less experienced at guarding such information, start to hold biometric data on the basis that it smooths the customer experience, but perhaps without the stringent security processes of a bank. Banks and other financial institutions will of course remain major targets for hackers due to the value of the potential payload; but as the technology becomes more prevalent, there may well emerge numerous soft targets to attack – potentially leaving many at risk of having their biometric credentials compromised, with long-lasting effects.
Further, the holding of biometric data, with the regulatory and reputational risks it poses, will undoubtedly pose an administrative and compliance burden for any organisation – even banks who are used to protecting sensitive data. It may be that new business models arise in the secure holding of biometric records, enabling businesses both within and outside the financial sector to outsource the protection of the data to specialist third parties, who would hold the records on their behalf and serve up tokenised, non-sensitive versions of it for authentication purposes – thereby allowing them to use the benefits that biometrics can bring to the customer experience, whilst avoiding the regulatory burden and legal and reputational risk of holding it themselves. Perhaps a service of that type could even become the next big thing in commercial banking.