What is Your Voice Privacy Debt?

Dec 1, 2016 | Privacy

By Martin Geddes, Co-Founder and Executive Director, Hypervoice Consortium

Co-written by Kelly Fitzsimmons, Co-Founder and Managing Director, Hypervoice Consortium

Today, we can talk to a computer just as easily as we can type at one. The use of Personal Voice Assistants (PVAs) such as Siri, Alexa, and Cortana is skyrocketing. According to Google, 20% of mobile searches are voice-activated.

As these assistants become easier to use and more of us adopt them, a new concern arises: when is it not appropriate for me to talk to a PVA?

To answer this, context is king. With PVAs being used in all sorts of new and interesting ways, different concerns quickly arise in different contexts. If you work in a regulated industry, such as healthcare, finance, or law, it is critical to stop and think this new context all the way through.

Voice isn’t just a new interface; it’s a whole new way of computing. The rise of intelligent personal voice assistants is a fundamental change in how we relate to computers, an advance comparable to the invention of the PC desktop. A voice interface brings with it a welcome humanization of technology, one that democratizes access to a wide range of valuable capabilities.

It is increasingly common to use smartphone PVAs as dictation engines for taking notes, and enterprise CRM systems now often come with voice transcription functionality. In the context of most enterprises, the use of these capabilities is an innocuous choice.

However, for regulated industries (most notably finance, law and healthcare) this is not the case, especially when sharing confidential information. The inherent nature of PVAs sets up a trap for the unwary or unthinking user.

The voice interface is designed to mimic a helpful human. Yet what sits behind the façade is merely a machine-learning algorithm, one that has little concept of contextual ethics and may share our corporate secrets inappropriately. When we mention confidential client information while dictating, that data becomes a digital privacy biohazard with a long-term waste-disposal problem.

As such, services like Siri bring a new class of ‘digital health’ problems with serious ‘enterprise wellbeing’ consequences, just as the desktop brought email viruses and pop-up malware. So why would anyone do this? There are three reasons: you’ve not considered the risks; you wrongly feel they are always small; or you think the costs don’t matter (and you’ll not get caught!).

Why we get seduced by Siri
We fail to consider the risks of PVAs because our brains are hardwired for the hazards of a primeval world. The pains we anticipate are biased accordingly. You might feel giddy when you stand at the edge of a precipice, because you naturally fear the consequences of a fall. Yet few of us feel similar terror when we drive a car, since moving at 70mph along a freeway wasn’t a feature of our distant ancestors’ jungle or savannah environment.

Our bodies are physical and our minds are forgetful, which means our human voice conversations are by default tied to one place and are ephemeral. When we talk to Siri, we are presented with a human-like interface, and we naively assume that the interaction remains local and time-bound.

Yet we are interacting with a machine that exists to persist and replicate information. Siri is the frontend to an AI engine that is designed to learn things over time, and those learnings may be collated across users. An innocuous mention of a corporate takeover or a patient procedure may suddenly become part of Apple’s learning database. If not today, then it could happen tomorrow, without you ever noticing the change in functionality or terms.

We also have little visibility and no control over where that voice-as-data gets stored and used. It may seem that everything Siri does happens locally on the device, but in reality your information is being passed into a complex set of back-office cloud systems, and we have little or no way of knowing where it resides, or who might have access to it later.

With humans there are also some very specific expectations of conversations, ones that the machine simulation breaks. A human has a social understanding of what is appropriate to his or her context. If your doctor asked an anonymous passer-by to come into their surgery and act as a scribe during consultations, that person might balk at the prospect as inappropriate.

With PVAs we have artificial intelligence systems that lack artificial ethics. Siri will never let you know “you really shouldn’t tell me this”! While Siri might refer to itself as “I”, there’s nobody there. What you hear is merely a machine rearrangement of the sound of a human voice actor.

Because Siri and her friends have no concept of appropriate context, the user has to take on that burden. It’s obvious that personal email is inappropriate in some work contexts, and with a website there is a distinct URL and a visibly different site layout to show you that you have strayed away from an approved enterprise application. With a PVA, making that context transgression is far less obvious.

A ‘bring your own’ approach to PVAs is in some ways the ultimate socially engineered attack on the privacy of customer data. We could have secure IT systems with air-gapped isolation from the outside world, yet when we invite a smartphone voice engine in, with a human feeding it, we bypass all that enterprise protection. It is a third party’s Trojan horse brought into the heart of the enterprise. The unintentional and socially accepted nature of the attack is what makes it all the more subversive.

The risks are bigger than you think
With IT systems, we face novel risks to which we are not well adapted, and for which our systems of behavioral feedback are weak. It is rather like how we fail to relate our choices about sugary processed food to the serious consequences of heart disease and diabetes. The hyper-abundance of calories, or of computation, brings unfamiliar problems to deal with.

Furthermore, with IT the effects of our personal choices are often not internalized: the impact of the risk falls upon colleagues and customers, possibly many years later. In the context of the corporation, an individual’s failure to ‘wash their digital hands’ can expose the whole enterprise to the risk of reputational ruin.

So what might go wrong? Information security risks are often of a ‘black swan’ nature: the hazards may be infrequent, but the consequences and costs can be very severe.

For example, the boundaries could change at any moment with a single update to the technology or device user policies. We’ve seen this happen recently with WhatsApp and Facebook. What was a strong privacy pledge for WhatsApp users has turned into a security risk for anyone using it for work, as messaging contacts are used to personalize ads. The allowable use of information is neither fixed nor bounded.

When there are no signed terms of service, the conditions under which data is stored and shared can change at the whim of a third party. Companies come and go, get taken over by local and foreign competitors, divest divisions, and constantly redesign products. Even if you read the original terms of service for Siri, they can keep changing whenever Apple chooses. Are you going to re-read them every time? Will you ask Apple what their policy is on personally identifiable information in Siri? (We did, and they don’t have one.)

When you use a service like Siri for dictation of private and confidential customer data, you are arming a whole series of subtle security hazards. There is no firm commitment on record retention, so you cannot prove (potentially in court) what was (or was not) shared. You have no right to audit the supplier, so you are in a very weak position to evaluate your compliance with industry regulations. For instance, the data might be held for too short a time, or too long. This disregard for the rules will appear recklessly negligent should there be any ‘data spill’ downstream.

In summary, you have no power to audit Apple’s or Amazon’s suppliers for compliance with your IT security policies, and there is nothing stopping them from using the data they capture in ways that would harm your reputation. A single incident in any regulated industry, or by any major supplier, could at any time attract attention to this issue. This would arm a latent risk of ruin, which is the cumulative liability built up over time from years of non-compliant use.

The risk isn’t just about you and your enterprise, but is also systemic. There are bad actors in the world who constantly strive to break into these systems for gain. Once the epidemic of privacy breaches reaches critical levels, things can fall apart, like how a cholera epidemic spreads in an overcrowded and unsanitary slum.

We lack a feedback system to account for our behavior
There are many routine daily activities for which PVAs like Siri are wholly appropriate, even in an enterprise context. Sharing private customer data is not one of them. Just because something is easy and convenient doesn’t mean it is the right thing to do.

If you work in a regulated industry, such as finance or healthcare, then you as a professional know you are responsible for the privacy of confidential customer data. However, personal morals are not enough. We also need systems of feedback to create the right behaviors, and to make systemic and architectural choices that support them.

We need a way to quantify the cost and benefit of different solutions to our productivity issues. And here’s the catch: when any data is captured by an information system, it has some asset value to the enterprise. But it also carries a liability, which includes the need to identify private information, to pay to secure and manage it, and ultimately to delete it to expunge the liability.

At present our systems for accounting for that liability are generally weak. There is no ‘double-entry data-keeping’ to track where the information has been shared and to sum up the total liability. Yet just because the liability is hard to measure does not mean it ceases to exist.

So there is an implicit cost to the enterprise of all non-compliant use of PVAs for private data. The liability is being self-insured, and it sits as an invisible item on the CIO’s or CFO’s books. Each and every non-compliant use of a PVA adds an invisible debt to that account.

The conversation that regulated enterprises need to have begins with sizing that debt. What is the frequency of those breaches? What are the resulting hazards that you face? What is the impact of those hazards? What is the total implicit cost of self-insuring the hidden ‘voice privacy debt’?
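
To make that sizing exercise concrete, here is a minimal back-of-the-envelope sketch in Python. Every category name, usage count, breach probability and cost figure below is a hypothetical placeholder for illustration, not data from this article; a real exercise would substitute the enterprise’s own estimates. It simply treats the debt as an expected annual loss: the frequency of non-compliant use, multiplied by the chance that a use turns into a breach, multiplied by the impact of that breach, summed across categories.

```python
# A minimal sketch of sizing the hidden "voice privacy debt".
# All categories and figures are hypothetical placeholders; substitute
# your own enterprise's estimates.

# (non-compliant uses per year, probability a use leads to a breach, cost per breach in USD)
EXPOSURE_ESTIMATES = {
    "dictated client notes":        (5_000, 0.0005, 250_000),
    "voice-to-CRM transcription":   (2_000, 0.0010, 500_000),
    "calendar and contact queries": (8_000, 0.0001,  50_000),
}

def expected_annual_loss(uses_per_year: int, p_breach: float, cost_per_breach: float) -> float:
    """Expected yearly cost of self-insuring one category of non-compliant use."""
    return uses_per_year * p_breach * cost_per_breach

total_debt = 0.0
for category, (uses, p_breach, cost) in EXPOSURE_ESTIMATES.items():
    loss = expected_annual_loss(uses, p_breach, cost)
    total_debt += loss
    print(f"{category:32s} expected annual loss: ${loss:,.0f}")

print(f"Total hidden voice privacy debt per year: ${total_debt:,.0f}")
```

Even with rough numbers, putting an explicit figure on the CIO’s or CFO’s books makes it possible to compare the cost of self-insuring the risk against the cost of adopting a fit-for-purpose, compliant tool.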

At the end of the day, it may be far cheaper and safer to avoid the non-compliant use in the first place, and select a tool that is fit-for-purpose for regulated industry use.

About Hypervoice Consortium
The Hypervoice Consortium’s mission is to bring awareness to the importance of the emerging communications ecosystem and to serve as the official forum for standards, capabilities and applications. Our purpose is to research the future of communication and advocate passionately for humankind’s best interests. Learn more at http://www.hypervoice.org/ or follow the conversation on Twitter with #Hypervoice. Hypervoice™ is a registered trademark of the Hypervoice Consortium LLC.
