Re: [liberationtech] NYTimes and Guardian on NSA

2013-09-05 Thread Daniel Colascione
On 9/5/13 12:32 PM, Richard Brooks wrote:
> Latest articles:
> 
> http://www.nytimes.com/2013/09/06/us/nsa-foils-much-internet-encryption.html?emc=edit_na_20130905&_r=0&pagewanted=print
> 
> http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security
> 
> 
> I find most of this (if not all) silly. They seem shocked that the
> NSA does cryptanalysis. It would be nice if the newspapers had
> people with some knowledge of the domain writing articles.
> 

There is a massive difference between cryptanalysis and a decade-long,
well-funded, top-secret program to subtly weaken international
cryptographic protocols and sabotage industry implementations.


Re: [liberationtech] Wickr app aims to safeguard online privacy

2013-02-05 Thread Daniel Colascione
On 2/5/2013 11:11 AM, Jacob Appelbaum wrote:
> Brian Conley:
>> Apparently Silent Circle is also proposing such a feature now.
> 
> Such a feature makes sense when we consider the pervasive world of
> targeted attacks. If you compromise say, my email client today, you may
> get years of email. If you compromise my Pond client today, you get a
> week's worth of messages. Such a feature is something I think is useful
> and I agreed to it when I started using Pond. 

Nobody is objecting to a feature that deletes certain messages after a
configurable time. I agree that it mitigates some attacks (although less than
one might think, if the mail account isn't tamper-evident), and timed message
deletion has other benefits besides. Many MUAs provide this feature, often
through "filters" or "rules" interfaces.

Rich's objection, which I share, is that Wickr (and, apparently, Silent
Circle) attempts to impose this policy on users without allowing them to
make an independent choice.

Is your position that timed message deletion is valuable only if it is
sender-selected and MUA-enforced?



Re: [liberationtech] Browser-based Tor proxies

2013-01-03 Thread Daniel Colascione
On 1/3/13 5:25 PM, Steve Weis wrote:
> I noticed a Stanford project for setting up browser-based, ephemeral
> Tor proxies. In their words, "the purpose of this project is to
> create many, generally ephemeral bridge IP addresses, with the goal
> of outpacing a censor's ability to block them."

I'm extremely worried by the client enumeration problem. Nothing
could paint a brighter target on dissidents. Normalization is no
defense here, since the enumeration problem applies to any scheme
for circumventing a censorship system. (And with sufficient
normalization, the political will to continue censorship evaporates
anyway.) Either it's okay to identify clients to an adversary or
it's not, and my impression is that the consensus is that it's not.

I also think the system could easily be rendered useless: I'm not
convinced that it's possible for the mass of ephemeral proxies
to "absorb the busywork created by the adversary". To twist an old
aphorism: never get into a bandwidth competition with someone who
buys 10GigE cards by the crate.

While I do have to credit the authors with a good enumeration of the
possible threats to the system, I think these threats simply make
the system unworkable in practice. If the system becomes popular,
it's easy to block, and if the system *isn't* popular, it's easy to
identify who's using it.

Remember that the adversary need not completely block all
connections from ephemeral proxies: he need only impair usability to
the point that users give up.



Re: [liberationtech] Silent Circle Dangerous to Cryptography Software Development

2012-10-11 Thread Daniel Colascione
On 10/11/2012 10:54 AM, Moxie Marlinspike wrote:
> The problem is that if you have an enterprise focus, you can't sell a
> service, you have to sell software.  Service-based models have
> certainly made inroads into the enterprise, but they still want to host
> security-focused stuff themselves (even if it's encrypted end-to-end).
> It's hard to sell an expensive site license for your software if the
> software is freely available.
> 
> In general, I'm not actually convinced that OSS is a necessity for
> secure communication tools.  Protocols can generally be verified on the
> wire, and unfortunately, the number of people who are going to be able
> to look at software-based cryptography and find vulnerabilities is very
> small -- and two of them put their names behind Silent Circle.

I feel like there are two ways to interpret your argument. Either we're talking
about a protocol with a thorough public specification, which would allow third
parties to verify its proper implementation, or we're talking about a protocol
examined only by a few trusted researchers, presumably under some sort of NDA.
In the former case, FOSS tools will be able to clone and mimic any sufficiently
popular product, eroding the market advantage of a closed-source development
model (although perhaps not for a while); in the latter case, users must base
their confidence and sense of safety on the word of a few people who, however
distinguished they may be, have the same weaknesses as the rest of us.

While source code availability and verifiability (can I compile it and
get the same binaries?) may not be strictly necessary for secure
communication tools, I suspect they are necessary for a tool to gain a
reputation for being secure, at least among those in the know.
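
By verifiability I mean something like the following sketch. The source
tree, build command, and file paths are all invented; the point is only
the bit-for-bit comparison at the end.

    import hashlib
    import subprocess

    def sha256(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Reproduce the vendor's build from the published source tree...
    subprocess.check_call(["make", "-C", "vendor-src", "release"])

    # ...then compare it, bit for bit, against the binary they ship.
    ours = sha256("vendor-src/out/app.bin")
    theirs = sha256("release/app.bin")
    print("binaries match" if ours == theirs else "BINARIES DIFFER")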


Re: [liberationtech] Wickr

2012-06-28 Thread Daniel Colascione
Hi, Kara.

Thanks for taking the time to explain your product and goals. (And for
not repeating the mistakes of a certain previous project.)

On 6/28/12 8:30 PM, Kara Coppa wrote:
> I'm a Wickr co-founder and  I heard there was some discussion today
> about our technology.  As you've probably heard Dan Kaminsky is part
> of our advisory board and we've worked out some additional details
> about our technology that we'd like to share with you.  I hope you'll
> appreciate what we've been working so hard on. 
> 
> Below is what we've come up with attached with greetings from Dan. 
> 
> Hi everyone, this is Dan Kaminsky.  I've been advising Wickr for some
> time, and I'm relatively pleased with the nature of the product we're
> offering here.
> 
> Essentially, it's an attempt to create an environment where the best
> practices of secure messaging are "always on" and "just work".  There
> are quite a few communities that we all agree could use an easier way
> to communicate safely, and we're honored to provide this new service.

While that's an admirable goal, OTR serves much the same purpose and
is an open, audited system. Why have you decided to create a
proprietary solution?

> A couple of comments about how it all works:
> 
> Obviously, there's no home grown crypto.  It's 2012, everyone knows
> how that story ends.  Messages are encrypted via multiple rounds of
> AES-256,

Multiple rounds? 256 bits of security should be sufficient. Are you
afraid of an AES break?

The block cipher you use provides a nice headline piece, but it's
largely irrelevant. As you noted, the story is over: nobody breaks
block ciphers anymore. Can you provide more details about your
cryptosystem and threat model? What about your message packaging and
signing? What about message authentication? (Do I have any independent
way of verifying that a message is from the same device that sent me a
previous message, or do I have to trust your server to tell me it's so?)
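
Concretely, the property I'm asking about looks something like the
sketch below: per-device signing keys pinned on first use, so the server
never gets to vouch for identity. Everything here, names and all, is my
own invention, not a claim about Wickr's design.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey,
    )
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat,
    )

    pinned = {}  # peer name -> first public key we ever saw for them

    def verify_peer(name, pub_bytes, message, sig):
        # Trust on first use: pin the key the first time we see the
        # peer; afterwards a changed key is an alarm, not something
        # a server gets to paper over.
        if pinned.setdefault(name, pub_bytes) != pub_bytes:
            return False
        try:
            Ed25519PublicKey.from_public_bytes(pub_bytes).verify(
                sig, message)
            return True
        except InvalidSignature:
            return False

    # Sender side: the device signs with a key that never leaves it.
    device_key = Ed25519PrivateKey.generate()
    msg = b"hello from Bob(1)"
    pub = device_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw)
    assert verify_peer("bob", pub, msg, device_key.sign(msg))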

> with the symmetric keys transported via 4096 bit RSA.

To be fair, ECC is just as secure, vastly more space-efficient, and
much harder to subtly and dangerously misuse.
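
A throwaway size comparison, using the Python 'cryptography' package
(the numbers, not the code, are the point):

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import (
        padding, rsa, x25519,
    )
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat,
    )

    # Transporting one 256-bit message key under RSA-4096 costs
    # 512 bytes of ciphertext per message key:
    rsa_key = rsa.generate_private_key(public_exponent=65537,
                                       key_size=4096)
    oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped = rsa_key.public_key().encrypt(b"\x00" * 32, oaep)
    print(len(wrapped))  # 512

    # A Curve25519 public key doing comparable work via ECDH is
    # 32 bytes:
    ec_pub = x25519.X25519PrivateKey.generate().public_key()
    print(len(ec_pub.public_bytes(Encoding.Raw, PublicFormat.Raw)))  # 32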

> Private keys actually never leave the decrypting device; in fact,
> Wickr goes out of its way to bind messages to a particular device as
> thoroughly as feasible.  It actually uses some properties of devices
> that are unique from phone to phone as part of the key material
> necessary to decrypt messages to a particular phone.  

There are many ways to implement the scheme you describe above, some
secure, some not. Can you provide more details on your key generation
algorithm?
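
To illustrate the distinction: a key derived only from an observable
hardware identifier is guessable by anyone who learns the identifier,
while mixing that identifier with secret entropy generated once at
enrollment is not. A sketch, purely illustrative and not a guess at your
actual scheme:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    device_id = b"IMEI:356938035643809"  # low-entropy, observable

    # Weak: anyone who learns the identifier can re-derive the key.
    digest = hashes.Hash(hashes.SHA256())
    digest.update(device_id)
    weak_key = digest.finalize()

    # Better: bind the key to the device *and* to secret entropy
    # created once at enrollment and stored only on the device.
    enrollment_secret = os.urandom(32)
    strong_key = HKDF(algorithm=hashes.SHA256(), length=32,
                      salt=enrollment_secret,
                      info=b"device-key").derive(device_id)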

> We sacrifice
> some usability to achieve device dependence but feel the paranoia is
> justified.

In what way?

> There is indeed a central server in the Wickr design; it's there to
> introduce peers to one another and to provide some protection against
> traffic analysis while proxying messages between peers.

Do you send decoy traffic? Do you introduce jitter into message
delivery? If not, simple timing correlation is enough for traffic
analysis, even for an adversary who can't tap the traffic at your
server. (And any reasonable adversary certainly can.)
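
By decoys and jitter I mean something like this. An illustrative sketch
only; the delay bounds and decoy rate are arbitrary numbers of mine.

    import asyncio
    import os
    import random

    async def forward(deliver, ciphertext):
        # Jitter: decouple the time a message arrives at the server
        # from the time it is delivered to the recipient.
        await asyncio.sleep(random.uniform(0.5, 5.0))
        await deliver(ciphertext)

    async def decoy_stream(deliver, rate_hz=0.2):
        # Decoys: fixed-size dummy ciphertexts at random intervals,
        # so silence and activity look alike on the wire.
        while True:
            await asyncio.sleep(random.expovariate(rate_hz))
            await deliver(os.urandom(1024))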

What exactly is your threat model?

> Critically,
> the Wickr server never sees the plaintext and does not have a backup
> of the private keys.  Encrypted messages are delivered to the central
> server via SSL and a Wickr-specific key, and then they are proxied to
> clients for decryption and display.

> The central server really does as much as it can to proxy content, but
> otherwise gets out of the way.  No logs are kept of message delivery,
> all addresses are SHA-256 hashes of keys, and each device stores a
> unique cryptographic hash for each Wickr peer.

And the name of each Wickr peer is a cryptographic hash of a key
derived from each device's unique identifier. Is there some notion of
trans-device identity? If not, I'm worried about a simple social
engineering attack:

Say we have Alice and Bob, and they regularly chat about Flickr, Yelp,
and other disemvoweled things over Wickr. Now, each device is a
completely new and untrusted security principal in your system, so
Alice and Bob don't exist --- only Alice(1), Bob(1), etc., where
Name(N) denotes the Nth device owned by Name. Alice and Bob are both
enthusiastic and savvy users of mobile devices, so they regularly go
out and purchase new mobile devices. When Bob buys a new device, he
tells Alice about it --- except that he's now Bob(2), not Bob(1), and
Alice can't tell the difference between Bob(2) and Mallory(1) saying
he's Bob.
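
The usual fix is key continuity: Bob(1) cross-signs Bob(2)'s key, so
Alice never has to take anyone else's word for it. A sketch, with
everything in it invented by me:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )
    from cryptography.hazmat.primitives.serialization import (
        Encoding, PublicFormat,
    )

    def raw(pub):  # raw 32-byte encoding of a public key
        return pub.public_bytes(Encoding.Raw, PublicFormat.Raw)

    bob1 = Ed25519PrivateKey.generate()  # device Alice already trusts
    bob2 = Ed25519PrivateKey.generate()  # Bob's new phone

    # Bob(1) endorses Bob(2)'s public key...
    endorsement = bob1.sign(b"new-device:" + raw(bob2.public_key()))

    # ...and Alice checks the endorsement against the Bob(1) key she
    # has pinned. The server carries the bytes but contributes nothing
    # to trust; verify() raises if the endorsement doesn't check out.
    bob1.public_key().verify(endorsement,
                             b"new-device:" + raw(bob2.public_key()))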

I suppose you have some sort of central system of accounts --- Alice,
Bob, etc. --- and that users can register new devices with their
account. A server could send a message to Alice saying Bob(2)
represents the same user as Bob(1). But having done so, you've reduced
the security of your system to the security of your server-side
account management. That's not "military-grade". It's roughly
Face