Hi,

I share this only as information.  I'm certainly not trying to spread fear or 
say that these devices should not be used by folks in their homes.  As noted 
often in these articles, and as many on this list have encouraged in the past, 
due diligence with respect to security on such devices is important and 
necessary.  Ignoring these security suggestions could cause you grief in 
numerous ways.

The text of the article is below, or you can read it at the source HERE 
<https://www.cnet.com/news/security-researchers-warn-of-voice-vulnerabilities/>.

Some suggestions for maintaining security when deploying these devices can be 
found HERE 
<https://www.symantec.com/blogs/threat-intelligence/security-voice-activated-smart-speakers>.

As voice assistants go mainstream, researchers warn of vulnerabilities
New research suggests that popular voice control platforms may be vulnerable to 
silent audio attacks that can be hidden within music or YouTube videos -- and 
Apple, Google and Amazon aren't saying much in response.

By Ry Crist
May 10, 2018 2:40 PM PDT
 

In case you haven't noticed, voice-activated gadgets are booming in popularity 
right now, and finding their way into a growing number of homes as a result. 
That's led security researchers to worry about the potential vulnerabilities of 
voice-activated everything. 

Now, per the New York Times, some researchers are claiming that those 
vulnerabilities include recorded commands, pitched beyond the range of human 
hearing, that can be hidden inside otherwise innocuous-seeming audio. In the 
wrong hands, they say, such secret commands could be used to send messages, 
make purchases, wire money -- really anything these virtual assistants can 
already do -- all without you realizing it.


All of that takes things a step beyond what we saw last year, when researchers 
in China showed that inaudible, ultrasonic transmissions could successfully 
trigger popular voice assistants like Siri, Alexa, Cortana and the Google 
Assistant. That method, dubbed "DolphinAttack," required the attacker to be 
within whisper distance of your phone or smart speaker. New studies conducted 
since suggest that ultrasonic attacks like that one could be amplified and 
executed at a distance -- perhaps as far away as 25 feet.
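
For readers curious about the mechanics: the DolphinAttack authors showed that 
an ordinary voice command can be amplitude-modulated onto an ultrasonic 
carrier, and that the nonlinearity of a device's microphone then demodulates it 
back into the audible band even though people nearby hear nothing. Below is a 
minimal sketch of that modulation step in Python with numpy/scipy -- the file 
names, carrier frequency and sample rate are illustrative assumptions, not 
details from the article, and actually reproducing such a signal requires 
ultrasonic-capable playback hardware.

    import numpy as np
    from scipy.io import wavfile

    # Illustrative parameters, not from the article. DolphinAttack used
    # carriers roughly in the 20-40 kHz range; the output sample rate must
    # be well above twice the carrier frequency to represent it at all.
    CARRIER_HZ = 30_000
    OUT_RATE = 192_000

    # Load a spoken command (hypothetical file name), fold to mono,
    # and normalize to [-1, 1].
    in_rate, voice = wavfile.read("spoken_command.wav")
    voice = voice.astype(np.float64)
    if voice.ndim > 1:
        voice = voice.mean(axis=1)
    voice /= np.max(np.abs(voice))

    # Upsample the baseband voice to the ultrasonic output rate.
    n_out = int(len(voice) * OUT_RATE / in_rate)
    voice_up = np.interp(np.linspace(0, len(voice) - 1, n_out),
                         np.arange(len(voice)), voice)

    # Classic amplitude modulation: shift the voice spectrum up around the
    # carrier. A microphone's nonlinear (squared) response recreates the
    # baseband command; human ears never do.
    t = np.arange(n_out) / OUT_RATE
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
    modulated = (1.0 + 0.8 * voice_up) * carrier  # modulation depth 0.8

    wavfile.write("ultrasonic_out.wav", OUT_RATE,
                  (modulated / np.max(np.abs(modulated)) * 32767).astype(np.int16))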

The most recent study cited by the Times comes from UC Berkeley researchers 
Nicholas Carlini and David Wagner. In it, Carlini and Wagner claim that they 
were able to fool Mozilla's open-source DeepSpeech voice-to-text engine by 
hiding a secret, inaudible command within audio of a completely different 
phrase. The pair also claims that the attack worked when they hid the rogue 
command within brief music snippets.  
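
At its core, the Carlini-Wagner result is gradient-based optimization: starting 
from benign audio, search for a tiny perturbation that pushes the recognizer's 
loss toward a chosen target transcription while keeping the perturbation below 
a loudness bound. Their paper does this against Mozilla's DeepSpeech using its 
CTC loss; the sketch below substitutes a stand-in differentiable "recognizer" 
so the loop is self-contained and runnable -- every name in it is an 
illustrative assumption, not the authors' code.

    import torch

    # Hypothetical stand-in for a differentiable speech-to-text model.
    # The real attack targets DeepSpeech and optimizes its CTC loss.
    class ToyRecognizer(torch.nn.Module):
        def __init__(self):
            super().__init__()
            # 1 second of 16 kHz audio -> logits over 28 "transcriptions"
            self.proj = torch.nn.Linear(16_000, 28)

        def forward(self, audio):
            return self.proj(audio)

    model = ToyRecognizer()
    loss_fn = torch.nn.CrossEntropyLoss()

    audio = torch.randn(1, 16_000)      # the benign carrier audio
    target = torch.tensor([7])          # the transcription the attacker wants
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=1e-3)
    EPS = 0.01                          # loudness bound on the perturbation

    for step in range(1_000):
        opt.zero_grad()
        logits = model(audio + delta)
        loss = loss_fn(logits, target)  # drive output toward the target...
        loss.backward()
        opt.step()
        with torch.no_grad():           # ...while keeping delta near-inaudible
            delta.clamp_(-EPS, EPS)

    adversarial_audio = (audio + delta).detach()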

"My assumption is that the malicious people already employ people to do what I 
do," Carlini told the Times, with the paper adding that, "he was confident that 
in time he and his colleagues could mount successful adversarial attacks 
against any smart device system on the market."

"We want to demonstrate that it's possible," Carlini added, "and then hope that 
other people will say, 'Okay this is possible, now let's try and fix it.'"

So what are the makers of these voice platforms doing to protect people? Good 
question. None of the companies we've talked to have denied that attacks like 
these are possible -- and none of them have offered up any specific solutions 
that would seem capable of stopping them from working. None would say, for 
instance, whether their voice platform was capable of distinguishing between 
different audio frequencies and then blocking ultrasonic commands above 
20 kHz. Some, like Apple, declined to comment for this story.
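
In digital terms, that kind of frequency gate is just a low-pass filter applied 
to the microphone signal before the wake-word engine sees it, discarding 
everything above the roughly 20 kHz ceiling of human hearing. A minimal sketch 
with scipy, assuming hardware sampled fast enough to capture ultrasonic content 
in the first place:

    import numpy as np
    from scipy import signal

    SAMPLE_RATE = 96_000   # assumed: hardware that can even see ultrasonics
    CUTOFF_HZ = 20_000     # upper edge of human hearing

    # 8th-order Butterworth low-pass, as second-order sections for stability.
    sos = signal.butter(8, CUTOFF_HZ, btype="low", fs=SAMPLE_RATE, output="sos")

    def strip_ultrasonics(audio):
        """Discard content above 20 kHz before recognition sees it."""
        return signal.sosfilt(sos, audio)

    # Demo: a 30 kHz tone (an inaudible carrier) is heavily attenuated.
    t = np.arange(SAMPLE_RATE) / SAMPLE_RATE
    tone = np.sin(2 * np.pi * 30_000 * t)
    print("residual peak:", np.abs(strip_ultrasonics(tone)[1_000:]).max())

One caveat worth noting: in a DolphinAttack-style scenario, the demodulation 
that makes the command "audible" to the device happens in the microphone's 
analog stage, so a purely digital filter like this one only helps if the 
ultrasonic energy actually survives into the sampled signal.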

Most companies are reluctant to speak publicly about these sorts of security 
issues. Despite a heavy focus on artificial intelligence and voice at its 
yearly I/O developer conference this week, Google's keynote presentation hardly 
mentioned security at all.

"The Google Assistant has several features which will mitigate aggressive 
actions such as the use of undetectable audio commands," a company spokesperson 
tells CNET. "For example, users can enable Voice Match, which is designed to 
prevent the Assistant from responding to requests relating to actions such as 
shopping, accessing personal information, and similarly sensitive actions 
unless the device recognizes the user's voice."

Mitigating a potential vulnerability is a good start, but Voice Match -- which 
isn't foolproof, by the way -- only protects against a limited number of 
scenarios, as Google itself notes. For instance, it wouldn't stop an attacker 
from messing with your smart home gadgets or sending a message to someone. And 
what about users who don't have Voice Match enabled in the first place?
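
Under the hood, Voice Match is a form of speaker verification: enrollment 
utterances are mapped to a voice "embedding," and a sensitive request is 
honored only if the new utterance's embedding lands close enough to the 
enrolled one. A stand-in sketch of that gating logic is below; the embedding 
function here is a deliberately crude placeholder (real systems use a trained 
neural speaker model), and the threshold is an arbitrary illustration.

    import numpy as np

    def embed(audio):
        """Crude placeholder for a trained speaker-embedding network."""
        spectrum = np.abs(np.fft.rfft(audio, n=512))
        return spectrum / (np.linalg.norm(spectrum) + 1e-9)

    def allows_sensitive_action(request_audio, enrolled, threshold=0.85):
        """Gate purchases / personal data on similarity to the enrolled voice."""
        similarity = float(embed(request_audio) @ enrolled)
        return similarity >= threshold

    # Enrollment: average the embeddings of a few owner utterances.
    utterances = [np.random.randn(16_000) for _ in range(3)]  # placeholder audio
    enrolled = np.mean([embed(u) for u in utterances], axis=0)
    enrolled /= np.linalg.norm(enrolled)

    request = np.random.randn(16_000)
    print("allow sensitive action:", allows_sensitive_action(request, enrolled))

As the article points out, checks like this are themselves spoofable and only 
gate certain categories of request, which is why they mitigate rather than 
eliminate the risk.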

Amazon, meanwhile, offers similar protections for Alexa using its own voice 
recognition software (which again, can be fooled), and the company also lets 
users block their Alexa device from making voice purchases unless a spoken PIN 
code is given. The same goes for unlocking smart locks, where the spoken PIN 
code isn't just an option, but a default requirement.

All of that said, there's no indication from Amazon that there's anything 
within Alexa's software capable of preventing attacks like these outright.

"We limit the information we disclose about specific security measures we 
take," a spokesperson tells CNET, "but what I can tell you is Amazon takes 
customer security seriously and we have full teams dedicated to ensuring the 
safety and security of our products." 

The spokesperson goes on to describe Amazon's efforts at keeping the line of 
voice-activated Echo smart speakers secure, which they say includes 
"disallowing third party application installation on the device, rigorous 
security reviews, secure software development requirements and encryption of 
communication between Echo, the Alexa App and Amazon servers."

That's all well and good, but as attacks like these creep closer and closer to 
real-world plausibility, manufacturers will likely need to do more to assuage 
consumer fears. In this day and age, "trust us" might not cut it.

Later...

Tim Kilburn
Apple Teacher
Fort McMurray, AB Canada



