Research Shows That Smart Speakers Could Record Users Up to 19 Times Per Day
Smart speakers like the Amazon Echo, Apple HomePod and Google Home are being triggered by popular TV shows to record private conversations, a university study has revealed.
Researchers at Northeastern University and Imperial College London made the discovery after playing 125 hours of Netflix content to see if the voice assistants were activated by dialogue that sounded like wake words.
The main goal of the research was to detect if, how, when, and why smart speakers are unexpectedly recording audio from their environment. The researchers were also interested in whether there are trends based on certain non-wake words, the type of conversation, location, and other factors.
The researchers turned to popular TV shows containing reasonably large amounts of dialogue. Their experiments used 125 hours of Netflix content spanning a variety of genres, and they repeated the tests multiple times to identify which non-wake words consistently led to activations and voice recording.
The study focused only on voice assistants installed on the following stand-alone smart speakers:
- Google Home Mini 1st generation (wake words: OK/Hey/Hi Google)
- Apple HomePod 1st generation (wake word: Hey Siri)
- Harman Kardon Invoke by Microsoft (wake word: Cortana)
- 2 Amazon Echo Dot 2nd generation (wake words: Alexa, Amazon, Echo, Computer)
- 2 Amazon Echo Dot 3rd generation (wake words: Alexa, Amazon, Echo, Computer)
The study showed that activations occurred up to 19 times per day, with the Apple HomePod and Microsoft's Cortana assistant most susceptible to accidental recordings. The experiment also found that Gilmore Girls and The Office were "responsible for the majority of activations" due to the large amount of dialogue in those shows.
"Anyone who has used voice assistants knows that they accidentally wake up and record when the 'wake word' isn't spoken - for example, 'seriously' sounds like the wake word 'Siri' and often causes Apple's Siri-enabled devices to start listening," the researchers wrote.
"There are many other anecdotal reports of everyday words in normal conversation being mistaken for wake words... Our team has been conducting research to go beyond anecdotes through the use of repeatable, controlled experiments that shed light on what causes voice assistants to mistakenly wake up and record."
Several patterns emerged among the non-wake words that triggered the devices, such as phrases that rhymed with or sounded phonetically similar to the wake words. For example, the Amazon Echo Dot was activated when it mistook "kevin's car" for "Alexa".
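The paper does not publish the vendors' on-device detection code, but the intuition behind this failure mode can be sketched in a few lines of Python. Everything below is illustrative only: the crude phonetic reduction, the `wake_score` helper and the 0.35 threshold are assumptions for demonstration, not any device's actual algorithm.

```python
# Illustrative sketch only -- NOT any vendor's actual wake-word detector.
# Idea: reduce phrases to a crude phonetic key, then accept anything whose
# key is "close enough" to the wake word's key. A permissive threshold
# trades false accepts (accidental recordings) for fewer missed wake-ups.
from difflib import SequenceMatcher

def phonetic_key(text: str) -> str:
    """Very rough phonetic reduction: keep letters and merge some
    similar-sounding spellings (ck->k, c->k, x->ks, ph->f)."""
    text = "".join(ch for ch in text.lower() if ch.isalpha() or ch == " ")
    for src, dst in (("ck", "k"), ("c", "k"), ("x", "ks"), ("ph", "f")):
        text = text.replace(src, dst)
    return text.replace(" ", "")

def wake_score(phrase: str, wake_word: str) -> float:
    """Similarity in [0, 1] between the two phonetic keys."""
    return SequenceMatcher(None, phonetic_key(phrase),
                           phonetic_key(wake_word)).ratio()

THRESHOLD = 0.35  # hypothetical, deliberately permissive cutoff

for phrase, wake_word in [("alexa", "alexa"), ("kevin's car", "alexa"),
                          ("seriously", "siri"), ("good morning", "alexa")]:
    score = wake_score(phrase, wake_word)
    verdict = "ACTIVATES" if score >= THRESHOLD else "ignored"
    print(f"{phrase!r:14} vs {wake_word!r}: score={score:.2f} -> {verdict}")
```

Under this toy scoring, "kevin's car" and "seriously" both clear the permissive cutoff while unrelated speech like "good morning" does not, mirroring the rhyming and similar-sounding pattern the researchers describe.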
The researchers also referred to privacy concerns that these devices have raised in recent years, claiming that their study suggests "these aren't just hypothetical concerns from paranoid users".
There have been many reports about the "risks" of inadvertent spying by smart speakers.
Earlier this month, a former Amazon executive revealed that he disabled his Alexa-powered smart speaker when he "didn't want certain conversations to be heard by humans".
Ex-AWS manager Robert Fredrick made the revelation after Amazon admitted that employees listen to customer voice recordings made by its Alexa voice assistant.
Amazon said recordings were used to train its "speech recognition and natural language understanding systems" and improve its AI assistant.