giancarlomangiagli.it

Strong authentication in banking environments: my suggestions for clients’ safety

PSD2

An article of EU Directive 2015/2366 (better known as PSD2, which chiefly concerns payment service providers) requires the use of strong customer authentication when clients access online banking services. The details are set out in the EBA-Op-2019-06 opinion written by the European Banking Authority itself: authentication must combine at least two of the following categories of elements: inherence, possession, and knowledge. “Inherence” refers to a user’s physical characteristics, such as fingerprints, facial appearance, voiceprint, or retina. “Possession” refers to objects (physical or abstract) that only the client has access to, such as a unique piece of software, a smart card, or a personal smartphone or SIM card (needed to receive an OTP text message). Lastly, “knowledge” means a password, a PIN, or some piece of mnemonic information related to the client’s personal life.
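The rule above reduces to a simple check: the elements presented at login must span at least two independent categories. A minimal sketch of that check (all names are mine, not from any real banking API):

```python
from enum import Enum

class Factor(Enum):
    """The three SCA categories named by PSD2."""
    KNOWLEDGE = "knowledge"    # password, PIN, secret question
    POSSESSION = "possession"  # smart card, smartphone/SIM, token
    INHERENCE = "inherence"    # fingerprint, voiceprint, retina

def satisfies_sca(categories: set) -> bool:
    """SCA requires elements from at least two distinct categories."""
    return len(categories) >= 2

# A password alone is not enough; password + enrolled smartphone is.
print(satisfies_sca({Factor.KNOWLEDGE}))                     # False
print(satisfies_sca({Factor.KNOWLEDGE, Factor.POSSESSION}))  # True
```

Note that two elements of the same category (say, two passwords) count only once, which is why the check is on categories rather than on individual credentials.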

For banks able to see the big picture, PSD2 is a great opportunity to improve their systems and services, with benefits in terms of both internal agility and market competitiveness. At the same time, the directive leaves a high degree of flexibility in how strong authentication is implemented: the matter can be addressed with either major or minimal effort, at each bank’s discretion, and still comply with the law. The permitted elements can be combined in many ways, even picked almost at random (hence the minimal-effort scenario). The combinations that guarantee the highest level of security together with a satisfactory level of privacy are very few, and they demand a major implementation effort. What follows is my attempt to explore this topic further.

Issues with clients’ privacy, and password and text-message vulnerability

As of today, many banks have adapted to PSD2 through different procedures; most of them share one prominent feature: a PIN and/or password and/or secret question, combined with other means such as:

  1. OTP via text message
  2. Software token (via smartphone/tablet)
  3. Hardware token (such as the small device with a screen provided by banks)
  4. Biometric data (mainly fingerprints provided via smartphone/tablet)

Unfortunately, we still get reports of illegitimate access and money theft even at banks that comply with the directive: just look up “SIM swap” on the web. On that note, banks should make clear that SIM cloning is a major issue outside their control, and chiefly a phone operator’s responsibility, since operators are still slow to improve their safeguards against it. One remedy would be the introduction of eSIMs. Even granting that the fault lies with phone operators, there is not much more to add on this particular point. What I want to highlight now are the most evident weak spots of the four strong-authentication examples listed above.

Let’s start with knowledge, the most common element in banks’ authentication systems (as already said: passwords, PINs, secret questions). Its weak spot is the user. If users choose a weak password, give it to someone else, or surrender it by falling for a phishing e-mail, there isn’t much to be done. Luckily there is still an in-extremis remedy: hastily changing the password as soon as the mistake is discovered. This works provided the other factors (inherence and/or possession) are properly combined with it.

The OTP via text message is exactly as vulnerable as a SIM card is easy to clone: cloning doesn’t happen constantly, because it isn’t trivial, but it does happen. And even if cloning were made harder, for example by adopting eSIMs or by stronger identity checks (comparison with the photo on the original ID, access to the SCIPAFI information system, and so on), we could not rule out a dishonest shop cloning every SIM it sells.

A software token usually requires a registration procedure that creates a unique identifier tied to a device (such as a smartphone or tablet), so that only the person holding the device can authenticate. This system has two main weak links, so to speak. The first lies in the registration procedure, which usually relies on a combination of an OTP text message and keywords of some kind: hence the SIM-cloning issue already discussed, plus phishing e-mails or any other knowledge-based credential theft. The second concerns the protection of the device itself. If the device running the software token is not adequately protected (for example by a PIN with storage encryption) and is stolen, the user may be in serious trouble. And if that same device also holds access credentials jotted down in a notepad app, that’s a recipe for disaster.
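To make the device-binding idea concrete, here is a minimal sketch of a challenge-response software token built on a shared secret, with all names hypothetical; real products add hardware-backed key storage, attestation, and more:

```python
import hashlib
import hmac
import secrets

def enroll_device():
    """Run once at registration: create the token secret and a device ID.
    Both are registered with the bank; the secret never leaves the device
    afterwards, which is what binds the token to this one device."""
    secret = secrets.token_bytes(32)
    device_id = secrets.token_hex(8)
    return device_id, secret

def respond(secret: bytes, challenge: bytes) -> str:
    """On the device: prove possession by answering the bank's challenge."""
    return hmac.new(secret, challenge, hashlib.sha256).hexdigest()

def verify(stored_secret: bytes, challenge: bytes, response: str) -> bool:
    """On the server: recompute the expected answer and compare it in
    constant time to avoid timing side channels."""
    expected = respond(stored_secret, challenge)
    return hmac.compare_digest(expected, response)
```

The two weak links in the text map directly onto this sketch: if enrolment is bootstrapped over a cloned SIM, the attacker obtains the secret at birth; if the device storing `secret` is stolen and unencrypted, the attacker obtains it later.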

The hardware token is probably the option with the fewest weak spots in terms of both security and privacy. Its big problem is the cost to banks, especially when sourced from high-end manufacturers, and choosing such a manufacturer is exactly what makes the difference security-wise. The only conceivable weak spot (a remote one) lies in how the cryptographic keys are generated: keys that the manufacturer, and the manufacturer only, knows, and must take great care to store securely.

Biometric data are quite safe in security terms, on several levels. Fingerprints and voiceprints are not easy to replicate, and the retina is even trickier; it is also hard to capture this kind of data without alerting the user. But what happens when someone does find a way to steal an individual’s biometric data? That individual has been robbed of something for the rest of their life. An app reminding you that you still have “eight fingerprints, one retina, your voice and your left side” left would come in handy at that point. Would everybody then feel comfortable entrusting a third party with their biometric data? A bank would never store unprotected biometric data in its archives, preferring hashing operations meant to prevent reconstruction of the original data. But what if, just once, by some mistake, the bank forgets to protect the data? What if the data are stolen during a cyber intrusion? We should not forget that even hashed biometric data must be considered sensitive, since reconstruction of the original data has been proven possible (see case reports on the web). To recap: the real issue with biometric data is not security, even though, in theory, any kind of credential can be stolen; the real issue is privacy. By handing over such data we expose ourselves to the theft of sensitive, immutable information that we should regard as an irreplaceable personal asset.
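The hashing idea mentioned above can be sketched as a salted, deliberately slow hash of the stored template. This is a naive illustration of the principle, not how production biometric systems work: real biometric readings are fuzzy (no two scans match bit-for-bit), so deployed systems rely on secure enclaves or fuzzy extractors rather than a plain digest. All names here are hypothetical:

```python
import hashlib
import os

def protect_template(template: bytes, salt: bytes = None):
    """Derive a salted, slow hash of a biometric template.
    Only (salt, digest) is stored; the raw template is discarded,
    so a database leak does not directly expose the biometric."""
    if salt is None:
        salt = os.urandom(16)  # fresh random salt per enrolment
    digest = hashlib.pbkdf2_hmac("sha256", template, salt, 200_000)
    return salt, digest
```

Even so, as the paragraph above notes, a protected template is still sensitive data: unlike a password, the underlying biometric can never be rotated once compromised.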

In this section I have covered the most common cases of credential theft, those that occur less frequently, and even some that have never occurred but have the potential to. We should never underestimate the latter, because when it comes to stealing money, cyber thieves are always ready to outdo themselves.

Two high-security solutions that protect clients’ privacy

I can propose two main strong-authentication solutions that comply with PSD2 and guarantee security and privacy at the highest level. Both should be coupled with a knowledge-based element (implemented however one sees fit), and both protect clients’ privacy by leaving out biometric data and other sensitive information.

We have already addressed the first solution indirectly. It requires hardware tokens protected by a PIN. In the event of theft or loss of the device, the PIN prevents any fraud; a client might still have written down all their credentials and attached them to the device, but hopefully not the PIN, and at that point the bank would bear no responsibility under PSD2. Hardware tokens must be manufactured to a high standard, both in their components (to guarantee durability) and in their software (to guarantee flawless cryptography). The cryptography effectively has an expiration date, so it is best, from time to time, to revoke the devices and replace them with new ones. This drives up costs but also guarantees a higher standard of protection (and maximum legal cover for banks if clients sue).

The second solution is more interesting than the first because it guarantees the same level of security without the high hardware costs. On the other hand, it demands more of clients’ time, including occasional (if infrequent) trips to the branch. It is a software token with a different registration procedure: users can no longer pair the token with a new device by themselves, but must go to the branch whenever a new device is enrolled. The idea is to replace the step that relied on a text message, thereby sidestepping SIM cloning; everything is done in person, at any branch, before an employee responsible for verifying the client’s identity. Most devices (usually smartphones or tablets) expose a unique identification code, but some don’t. In the first case the client pairs the device a single time (unless they switch to a new device); if the device has no unique ID, one is created the moment the bank’s app is installed, which means registration is required not only as described above but also every time the app is removed and reinstalled (e.g. after a factory reset). As inconvenient as this looks, it is a fair price for security: if a bank does not spend heavily on hardware tokens, its clients compensate with their time, and so does the bank, since its employees must assist them. There are many conceivable ways to run this kind of token; here is one that minimizes inconvenience. Once the bank’s app is installed, it sends the server a message carrying the hash of an ID (either the device’s ID or one associated with the installed app).
The server responds with a sequence number corresponding to the database record in which the hash is stored; this number is then saved offline inside the app so the client can read it out to the bank employee. Having the client report a relatively small integer, instead of the device’s ID hash, avoids dictating a string that could run to hundreds of characters. The token is then activated and the hash is paired with the client’s account in the bank’s database.
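The enrolment exchange just described can be sketched in a few lines. This is a toy model of the flow in the text (hash sent up, small record number sent back, in-person confirmation), with every class, method, and identifier being my own hypothetical naming:

```python
import hashlib

class BankServer:
    """Toy server side of the enrolment exchange."""

    def __init__(self):
        self.records = []    # record number = position in this list
        self.accounts = {}   # record number -> customer, set in person

    def register_hash(self, id_hash: str) -> int:
        """Store the device/app ID hash; return a short record number
        that is far easier to dictate than a 64-character hex string."""
        self.records.append(id_hash)
        return len(self.records) - 1

    def confirm_in_branch(self, record_no: int, customer: str) -> None:
        """Done by the employee after verifying the client's identity:
        the record is paired with the client's account."""
        self.accounts[record_no] = customer

def app_enrol(server: BankServer, device_or_app_id: str) -> int:
    """On the device: send the hash of the ID, keep the small record
    number offline so the client can read it to the employee."""
    id_hash = hashlib.sha256(device_or_app_id.encode()).hexdigest()
    return server.register_hash(id_hash)
```

Note that the server only ever sees a hash, never the raw device identifier, and the link between record number and customer is created exclusively at the branch, which is what removes the text message from the loop.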

In conclusion

Most of the time, the weak link in the security chain is the users themselves, and preventing their mistakes is as hard as spreading computer literacy and sound security habits. That is why I think clients’ safety should be a bank’s top priority and duty, on a par with the protection of investors, savings, investments, and assets both material and immaterial. When a client’s online account suffers a cyber attack, the bank should be able to prove, with documentary evidence, that it did everything to prevent it: starting with clear communication of the tools and the minimal precautions for their use (never give your credentials to others, never write credentials down and carry them with you, protect your devices with PIN codes, install anti-malware software, and so on), up to demonstrating that the security measures adopted would have been enough to protect the client were it not for their own negligence. The chain of user-safety management must have at least two links:

  1. Initial recommendations to users when access credentials to online services are handed over, and again every time an update changes the operating procedures in any way; this also requires accurate employee training, to raise awareness of the issue;
  2. Adoption of high-end security measures.

The second point entails significant expense. To reduce costs in part, I think banks could add a link halfway along that chain: an individual choice, made by each client with an employee’s guidance, between two security levels, “high” (cheaper for the bank) and “maximum” (more expensive). The “high” level could include biometric data, popular with clients who value convenience over privacy, and a software token relying on text messages: useful, and good enough for clients who can guard on their own against phishing e-mails, SIM cloning, malware, devices left unprotected by a PIN, and so on. The “maximum” level would instead use the two solutions described in the previous section; they are more expensive, but ideal for clients who are not familiar with technology or who have a high-risk profile (medium-to-large companies, public bodies, important organizations, and anyone requiring maximum protection).

In a world increasingly concerned with avoiding cyber attacks, banks will play an ever more important role. There is much they will have to attend to, and simply building stronger authentication systems will not be enough; still, authentication is the most exposed frontier, which makes me think we need to be extra careful about it. I hope this article has provided food for thought, to banks and their clients alike.

If you have something to say about what I wrote, especially something well-considered and constructive, you can find my e-mail address at the end of the page.

(translation by Silvia Di Mauro)