
Addressing Legal, Ethical, and Technical Issues Associated with Voice Biometrics: Technology goes along with Ethics

In our previous articles, we covered some strategic and preliminary questions about adopting and properly implementing voice biometrics. Once you start along that path, however, more complex issues begin to surface. The fact is that speech technologies, including voice biometrics, make it possible to extract a considerable amount of data from human speech, data that can be used in a variety of ways, e.g. to measure and improve customer experience, fine-tune marketing strategies, or optimise sales. Here are the last three answers that will help you navigate some of the important legal, ethical, and technical stumbling blocks:

1. Vendors are offering “speech analytics tools” that extract specific information from large amounts of customer and/or employee conversation recordings and from live conversations in real time. Are there legal and ethical limitations on the usage of such systems?

There are no legal limitations or ethical concerns that prohibit the use of such analytics tools to extract information for lawful business purposes. In some cases, however, the analysis of voice can produce potentially sensitive data that falls under personal data protection restrictions. A person's voice may, for example, contain features that can be used to detect signs of certain illnesses or the racial origin of the speaker. In Switzerland, such information is regarded as "sensitive personal data" under the Swiss Federal Act on Data Protection (SR 235.1), which requires data users to follow specific rules on how to collect, process, and store it.

2. What are the legal consequences for the customer, for Spitch, and for the end-user if these solutions are hacked?

Voice biometric solutions deliver a greater level of security than traditional verification procedures based on security questions ("something you know" only). The human voice is a set of biometric characteristics that is very hard to imitate well enough to trick both a human operator and a biometric system at the same time. Professional impersonators can imitate another person's voice, but not all of the key individual characteristics (over 100 parameters) of that voice, so advanced biometric systems easily detect the differences. This works reliably even with identical twins, as Spitch's experience shows.

To ensure maximum security, up-to-date authentication systems are designed to use not just one but several authenticators: for example, voice biometrics in combination with the SIM card number of the caller's telephone and, in some cases (such as transfers of large sums of money), additional measures. To make it easy for customers, Spitch recommends using both automatic biometric customer identification (in combination with the caller's phone number, if a pre-registered SIM card is used) and continuous verification throughout the entire conversation.
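
This layered approach can be illustrated with a short sketch. The field names, thresholds, and step-up rule below are purely illustrative assumptions rather than Spitch's actual configuration; the point is only to show how a voice biometric score, a pre-registered SIM card check, and a transaction limit can be combined into one authentication decision.

```python
# Hedged sketch of a layered authentication decision. The inputs and
# thresholds are hypothetical, chosen only to illustrate the idea.

from dataclasses import dataclass


@dataclass
class AuthContext:
    voice_score: float         # similarity between live voice and enrolled voiceprint (0..1)
    sim_registered: bool       # True if the call comes from a pre-registered SIM/phone number
    transaction_amount: float  # amount requested during the call


VOICE_THRESHOLD = 0.85         # illustrative acceptance threshold
LARGE_AMOUNT = 10_000.0        # illustrative limit that triggers additional measures


def authentication_decision(ctx: AuthContext) -> str:
    """Return 'accept', 'step_up' (require an additional factor), or 'reject'."""
    if ctx.voice_score < VOICE_THRESHOLD:
        return "reject"        # the voice factor alone fails: do not authenticate
    if not ctx.sim_registered:
        return "step_up"       # voice matches, but the second factor is missing
    if ctx.transaction_amount >= LARGE_AMOUNT:
        return "step_up"       # large transfers call for additional measures
    return "accept"            # voice plus registered SIM, ordinary operation


if __name__ == "__main__":
    print(authentication_decision(AuthContext(0.92, True, 500.0)))     # accept
    print(authentication_decision(AuthContext(0.92, True, 50_000.0)))  # step_up
```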

In addition, adequate technological and organisational measures are required to protect data records, set limits for certain operations by telephone, and so on, as prescribed by legal rules and internal guidelines. There is also an obligation to notify the customer, within 72 hours, of any security breach caused by spoofing attacks.

3. What if we need machine-driven security checks that compare live customer voices with lists of suspected fraudsters' voiceprints? In the case of a positive match, people may be denied service, or their calls may be routed to security specialists. Are there any legal or ethical concerns?

As long as this does not violate any statutory rights, it is legally appropriate, with some caveats. In the U.K., for example, as in many other countries, individuals have the right to object to decisions taken by automated means, and they should be informed when such a decision has been taken. Automated decision-making processes that do not include any human intervention or control should therefore be described as part of the security procedures and other conditions that are published online by way of notification and accepted by customers.

From the ethical point of view, however, things are not so simple. The scenario in which computers deny service on the basis of machine-driven security checks represents precisely one of the biggest public fears: accidental "blacklisting". Most importantly, there should always be an alternative way around such barriers (which exist, after all, to enhance our common security, and not vice versa). A simple phone call to the contact centre, or an automatic call-back triggered by a failed security check, would constitute an appropriate procedural response and would send an important signal: customer experience, with security as an integral part of it, should always be the first priority.
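
As an illustration of such a procedural safeguard, the sketch below assumes a hypothetical watchlist of fraudster voiceprints and a toy similarity function; a real deployment would use the biometric engine's own scoring. A positive match routes the call to a security specialist rather than silently denying service.

```python
# Hedged sketch of a fraudster-watchlist check. The similarity function and
# threshold are placeholders, not a real biometric engine.

from typing import Iterable, Sequence


def similarity(a: Sequence[float], b: Sequence[float]) -> float:
    """Toy cosine similarity between two voiceprint vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


WATCHLIST_THRESHOLD = 0.9  # illustrative; tuned to keep false matches rare


def route_call(live_voiceprint: Sequence[float],
               watchlist: Iterable[Sequence[float]]) -> str:
    """Return 'route_to_security' on a watchlist match, otherwise 'continue_service'."""
    for suspect in watchlist:
        if similarity(live_voiceprint, suspect) >= WATCHLIST_THRESHOLD:
            # Positive match: escalate to a human specialist instead of an
            # automatic denial, and keep a record for review and notification.
            return "route_to_security"
    return "continue_service"
```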


Key definitions

Equal error rate (EER), false acceptance rate (FAR), and false rejection rate (FRR):

Equal error rate (EER) is the value where the false acceptance rate (FAR) and false rejection rate (FRR) are equal. False acceptance means incorrectly accepting an access attempt by an unauthorised user. False rejection means incorrectly rejecting an access attempt by an authorised user.

EER is regarded as a figure of merit for the identity verification system; the lower the EER is, the more certain the system’s verification decision is likely to be.

In real use cases, however, FAR and FRR are usually not set to be equal. They are adjusted to attain specific goals, e.g. maximising security by making FAR as low as possible, in which case FRR may become higher.
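
The relationship between the three rates can be made concrete with a small sketch. The score lists and the threshold sweep below are hypothetical examples; a production system would estimate these rates on large evaluation sets.

```python
# Minimal sketch of FAR, FRR, and EER, assuming two hypothetical score lists:
# `genuine` (authorised users) and `impostor` (unauthorised users). Scores at
# or above the threshold are accepted. The EER is approximated by sweeping the
# threshold and taking the point where FAR and FRR are closest.

def far_frr(genuine, impostor, threshold):
    """FAR: share of impostors accepted; FRR: share of genuine users rejected."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr


def equal_error_rate(genuine, impostor, steps=1000):
    """Sweep thresholds in [0, 1] and return (EER, threshold) where FAR and FRR are closest."""
    best_gap, best_threshold, eer = 1.0, 0.0, 1.0
    for i in range(steps + 1):
        t = i / steps
        far, frr = far_frr(genuine, impostor, t)
        if abs(far - frr) < best_gap:
            best_gap, best_threshold = abs(far - frr), t
            eer = (far + frr) / 2
    return eer, best_threshold


if __name__ == "__main__":
    genuine = [0.91, 0.87, 0.95, 0.80, 0.88]    # illustrative scores only
    impostor = [0.40, 0.55, 0.62, 0.30, 0.71]
    print(equal_error_rate(genuine, impostor))  # a lower EER means more certain decisions
```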