The Ethics of Smart Devices That Analyze How We Speak
As smart assistants and voice interfaces become more common, we’re giving away a new form of personal data — our speech. This goes far beyond just the words we say out loud. Speech lies at the heart of our social interactions, and we unwittingly reveal much about ourselves when we talk. When people hear a voice, they immediately start picking up on accent and intonation and make assumptions about the speaker’s age, education, personality, and more.

But what happens when machines start analyzing how we talk? The big tech firms are coy about exactly what they are planning to detect in our voices and why, but Amazon has a patent that lists a range of traits they might collect, including identity (“gender, age, ethnic origin, etc.”), health (“sore throat, sickness, etc.”), and feelings (“happy, sad, tired, sleepy, excited, etc.”).

This is worrisome, because algorithms are imperfect. And voice is particularly difficult to analyze, because the signals we give off are inconsistent and ambiguous. What’s more, the inferences that even humans make are distorted by stereotypes.

In business, we’ve gotten used to being careful about what we write in emails, in case the information goes astray. We need to develop a similarly wary attitude toward having sensitive conversations close to connected devices. The only truly safe device to talk in front of is one that is turned off.