We tend to think of conversations as ephemeral. If a conversation is sensitive, we stop typing and start talking. Our long history of telecommunication regulations has led us to believe our conversations are safe, protected, regulated.
And yet some conversations that feel ephemeral aren’t. With the advent of smartphones, we have stopped simply talking through our devices and started talking to our devices. And here is where things get complicated.
When we talk to our virtual assistants (e.g., Apple’s Siri, Amazon’s Echo or Microsoft’s Cortana), those conversations are recorded and kept. How long and for what purpose varies from company to company. What is clear is that you have limited control over what happens to these recorded conversations.
Prior to the advent of virtual assistants, the vast majority of voice conversations were not recorded. Customer service departments and regulated industries have long recorded our calls for “quality assurance,” “training” and/or “compliance” purposes. As much as we may not like having those conversations recorded, we have been willing to make that trade, in part, because it represents a small percentage of our total conversations.
That is no longer true. More and more of our conversations are being recorded without our truly informed consent or choice. Virtual assistants are just the first wave of recorded voice. We are quickly moving from the keyboard and mouse as our primary input technologies to a world where voice is the interface to the Internet. In short, all future applications will need to address the issue of recording human voice and how to handle it.
Given how rapidly this new interface is emerging, it is time to make the voice privacy problem clear and explicit. How is voice different from symbolic data?
The short answer is: Voice contains our identity. Hear a person talk and you can identify him or her. Not only that, you can discern intent, humor, and sarcasm. Voice is a far richer data set than text. Consider the impact a voice recording can have in a court proceeding versus an email. With voice, there is no more wiggle room. The jury will know if you were not kidding.
From a strategic perspective, there is some good news. Voice data is not yet being collected en masse. We have a chance to think proactively and begin putting programs in place.
From a risk management perspective, it is important to be aware of the issue and start asking questions of your executive management about how your corporation handles voice data that is recorded today. Key questions to ask are:
- How do we currently handle the capture of customer conversations?
- How do we use that information specifically?
- Do we have any applications that use voice as an interface?
- Does our data destruction policy cover voice information?
- Do we give our customers a choice of having a conversation recorded or deleted?
The last question is there to start a strategic conversation. As more customer inquiries originate online and may use voice or video, there is an opportunity to allow the customer to opt out of recording. This “opt out” option may serve as a way to garner consumer trust and allow your organization to be an early mover in the “privacy as loyalty” world that is quickly emerging.
The opportunity to garner consumer trust and loyalty by taking a lead position on voice privacy is at hand. To help illustrate this point, let’s look at two different corporate approaches to voice privacy. Amazon allows Echo consumers to delete voice records online through their account management settings. Apple does not allow consumers access to their voice recordings; it holds the original voice data for a minimum of six months and then claims to “anonymize” the data. The archived voice record is used, according to Apple, to improve its algorithms and functionality.
If we look through the lens of power, the contrast in these two corporate stances is stark. Apple holds all the power. If you use Siri, Apple defines how it will use your voice and for what purposes. If you don’t like the terms, your only recourse is to not use Siri.
In contrast, Amazon does indeed record your voice and use it, but it ultimately gives the consumer sovereignty over that data. If you are concerned or uncomfortable, you can delete your voice data. The power is more balanced between Amazon and its consumers.
From a brand perspective, which of these two corporate approaches engenders more consumer trust and loyalty?
Ultimately, the solutions to voice privacy will come from a combination of businesses investing in consumer trust and government regulations catching up with the wild, wild west of Silicon Valley. The prevailing winds are favoring a “Privacy Spring” as consumers are becoming increasingly vocal about the personal toll of the recent data breaches and resulting identity theft. The real question is: On which side of history does your corporation wish to be?
E. Kelly Fitzsimmons is a well-known serial tech entrepreneur who has founded, led and sold several technology startups. Currently, she is the co-founder and director of HarQen, named one of Gartner’s 2013 Cool Vendors in Unified Communications and Network Systems and Services, and co-founder of the Hypervoice Consortium.