On Tuesday, June 20, 2017 at 2:15:57 PM UTC-5, annie nguyen wrote:

> Dropbox, GitHub, Spotify and Discord (among others) have done the same
> thing for years: they embed SSL certificates and private keys into their
> applications so that, for example, open.spotify.com can talk to a local
> instance of Spotify (which must be served over https because
> open.spotify.com is also delivered over https).
> 
> This has happened for years, and these applications have certificates
> issued by DigiCert and Comodo all pointing to 127.0.0.1 whose private
> keys are trivially retrievable, since they're embedded in publicly
> distributed binaries.
> 

Really?!?  This is ridiculous.


> What I want to know is: how does this differ to Cisco's situation? Why
> was Cisco's key revoked and considered compromised, but these have been
> known about and deemed acceptable for years - what makes the situation
> different?

That situation is not different from the Cisco situation and should yield the 
same result.

> It's been an on-going question for me, since the use case (as a software
> developer) is quite real: if you serve a site over HTTPS and it needs to
> communicate with a local client application then you need this (or, you
> need to manage your own CA, and ask every person to install a
> certificate on all their devices)

There are numerous security reasons for this, which several other people here 
are better positioned than I am to illuminate.  I'm a software developer myself 
(specifically, in the real-time communications space).  I am not naive to 
the many great uses of WebSockets and similar.  I have to admit, however, that 
never once have I ever considered having a piece of software that I have 
written running in the background with an open server socket for purposes of 
waiting on any old caller from localhost to interact with it.
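To make concrete what that pattern looks like, here is a minimal sketch (entirely hypothetical — the helper name and port handling are my own, not from any real product) of a background helper holding an open server socket on localhost. Nothing in it checks who the caller is; any local process, including a browser page scripting a connection, gets served:

```python
# Hypothetical sketch: a background "helper" listening on loopback.
# Any old local caller can connect; no origin or authentication check is made.
import socket
import threading

def run_helper(state):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))          # loopback only, ephemeral port
    srv.listen(1)
    state["port"] = srv.getsockname()[1]
    state["ready"].set()
    conn, _ = srv.accept()              # accepts whoever connects first
    conn.sendall(b"hello from helper")  # responds without verifying the caller
    conn.close()
    srv.close()

state = {"ready": threading.Event()}
t = threading.Thread(target=run_helper, args=(state,))
t.start()
state["ready"].wait()

# Any local process connects successfully:
cli = socket.create_connection(("127.0.0.1", state["port"]))
reply = cli.recv(1024)
cli.close()
t.join()
print(reply.decode())
```

The point of the sketch is the absence: there is no step at which the helper learns, or cares, which application or web origin initiated the connection.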

The major browsers already consider localhost to be a secure context 
automatically, even without https.  In this case, however, they don't seem to 
apply that treatment.  I have a theory as to why....

Maybe they think it is ridiculous that an arbitrary website would "need" to 
interact with locally installed software via WebSocket (or any other manner, 
short of those which require a great deal of explicit end-user interaction).  
It is not beyond imagination that they may even regard the mere fact that 
people believe they "need" such interaction to be ridiculous.

Perhaps they've stopped to think, "Well, that would only work if our software 
or some part of our software is running on the visitor's system all the time."  
That kind of thing, in turn, encourages developers to write auto-start software 
that runs in the background from system startup and just sits there waiting for 
the user to load their website.  That wastes system resources (and probably an 
unconscionable amount of energy worldwide).

Perhaps they are concerned that if the local software "needs" interaction from 
a browser UI being served up from an actual web server elsewhere on the 
internet, then the software may well be written by people who are not well 
versed in the various mechanisms of security exploit in networked environments. 
 Those are just the kinds of developers you do not want to be writing code that 
opens and listens for connections on server sockets.  As a minor example, I do 
not believe that cisco.com is on the PSL (Public Suffix List).  This means that 
if other cisco.com web sites use domain-wide cookies, those cookies are 
available to that software running on the computer.  Conversely, having that 
key and the ability to manipulate a computer's DNS queries might allow a third 
party to perpetrate a targeted attack and capture any cisco.com site-wide 
cookies.
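To illustrate the cookie exposure, here is a simplified sketch of the RFC 6265 domain-match rule (hostnames only; the name `local.cisco.com` below is purely hypothetical). A cookie set with Domain=cisco.com domain-matches every subdomain, including one whose DNS answer an attacker has pointed at 127.0.0.1:

```python
# Simplified RFC 6265 section 5.1.3 domain-match (hostnames only, no IPs).
def domain_match(request_host: str, cookie_domain: str) -> bool:
    request_host = request_host.lower().rstrip(".")
    cookie_domain = cookie_domain.lower().lstrip(".")
    if request_host == cookie_domain:
        return True
    # Any subdomain of the cookie domain also matches.
    return request_host.endswith("." + cookie_domain)

# A Domain=cisco.com cookie flows to any cisco.com host...
assert domain_match("www.cisco.com", "cisco.com")
# ...including a hypothetical one an attacker resolves to 127.0.0.1:
assert domain_match("local.cisco.com", "cisco.com")
# ...but not to an unrelated registrable domain:
assert not domain_match("notcisco.com", "cisco.com")
```

Because the match is purely name-based, controlling where a cisco.com name resolves (plus possessing a valid cisco.com-hierarchy key) is enough to receive those domain-wide cookies.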

I personally am of the position that visiting a website in a browser should 
never privilege that website with direct interaction with other software on my 
computer without some sort of explicit extension / plugin / bridge technology 
that I had to overcome numerous obnoxious warnings to install.

Why on earth would a visit to spotify.com ever need to interact with the 
Spotify application on my computer?  Can you explain what they do with that?

More broadly, the Google people provide Chrome Native Messaging for scenarios 
where trusted sources (like a Chrome extension) can communicate with a local 
application which has opted into this arrangement in a secure way.  Limiting 
access to Chrome extensions means that the user needs to install the extension 
before anything can engage via that conduit.
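For reference, the opt-in takes the form of a small host manifest the local application registers with the browser, along these lines (the host name, path, and extension ID here are placeholders, not real values):

```json
{
  "name": "com.example.native_helper",
  "description": "Hypothetical native messaging host",
  "path": "/usr/local/bin/native_helper",
  "type": "stdio",
  "allowed_origins": [
    "chrome-extension://knldjmfmopnpolahpmmgbagdohdnhkik/"
  ]
}
```

The `allowed_origins` list is the key part of the design: only the named extension(s) can open the channel, so the local application never exposes an open socket that any web page could reach.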

A brief glance at the various chromium bugs involved in locking down access to 
WebSocket when referenced from a secure origin shows that the Chrome people 
definitely understood the use case and did not care that it would break things 
like this.

I have no affiliation with any browser team, but I speculate, based on their 
actions and their commentary, that they _meant_ to break this use case.  
Indeed, they seem to regard the use case itself as inappropriate and seem to be 
actively breaking attempts to circumvent that.

I strongly support that.  I see no reason that a web site needs to use my 
browser as a conduit to talk to a non-browser software element on my computer.  
A visit to spotify.com certainly does not imply that I want Spotify to play 
with my volume settings.  There is no planet where the defaults should allow a 
mere visit to spotify.com to determine whether I have the Spotify application 
installed or running, or to identify which instance of the application I 
correspond with.  Not without specially granted privilege.

I have an idea for the browser authors to contemplate in their statistics 
gathering:

I propose a new metric be produced by the browser and "shipped home" for 
analysis.  Specifically, any time an https connection is properly validated to 
a certificate which descends from the publicly trusted roots (no need to care 
about anything in the enterprise / administrative roots category) AND the 
connection is ultimately made to localhost (whether by hostname, IPv4 loopback, 
IPv6 loopback, etc.), capture the set of SANs in the certificate against which 
the connection was validated.  Collect that data along with anonymized 
source-of-submission data and certificate issuer data.

When any single SAN on a certificate descending from a publicly trusted 
hierarchy is used by too many sources to plausibly represent a single user, 
report that data to the issuer as a strong statistical showing that the key has 
been leaked.  Demand revocation.
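A sketch of what that analysis might look like (the record fields, names, and threshold are assumptions of mine, not any real telemetry schema):

```python
# Hypothetical sketch of the proposed analysis: count distinct anonymized
# submitters per SAN seen on publicly-trusted certificates validated for
# localhost connections; flag SANs reported by implausibly many sources.
from collections import defaultdict

def flag_leaked_sans(records, threshold=1000):
    """records: iterable of (san, anonymized_source_id, issuer) tuples.
    Returns {san: issuer} for SANs whose distinct-source count exceeds
    the threshold -- too many users for one private key holder."""
    sources_by_san = defaultdict(set)
    issuer_by_san = {}
    for san, source_id, issuer in records:
        sources_by_san[san].add(source_id)
        issuer_by_san[san] = issuer
    return {
        san: issuer_by_san[san]
        for san, sources in sources_by_san.items()
        if len(sources) > threshold
    }

# Example: one SAN submitted by 3 distinct sources, with threshold 2, is
# flagged and attributed to its (hypothetical) issuer.
records = [("local.example.com", f"user{i}", "Example CA") for i in range(3)]
print(flag_leaked_sans(records, threshold=2))
```

The statistical claim is simple: a private key is supposed to identify one subscriber, so a single SAN validated from thousands of distinct sources is strong evidence that the key ships inside a distributed binary.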

Just my thoughts on the matter,

Matt Hardeman


_______________________________________________
dev-security-policy mailing list
dev-security-policy@lists.mozilla.org
https://lists.mozilla.org/listinfo/dev-security-policy
