[Cryptography] End to end

2013-09-16 Thread Phillip Hallam-Baker
Just writing document two in the PRISM-Proof series. I probably have to
change the name before November. Thinking about 'Privacy Protected' which
has the same initials.


People talk about end-to-end without saying what the ends are. In most
cases at least one end is a person or an organization, not a machine. So
when we look at the security of the whole system, human security issues,
like the fact that people forget private key passphrases and lose machines,
matter.

Which ends you are talking about depends on what the context is. If we are
talking about message formats then the ends are machines. If we are talking
about trust then the ends are people and organizations.

End to end has a lot of costs. Deploying certificates to end users is
expensive in an enterprise and often unnecessary. If people are sending
email through the corporate email system then in many cases the corporation
has a need/right to see what they are sending/receiving.

So one conclusion about S/MIME and PGP is that they should support
domain-level confidentiality as well as account-level confidentiality.

Another conclusion is that end-to-end security is orthogonal to transport.
In particular there are good use cases for the following configuration:

Mail sent from al...@example.com to b...@example.net

* DKIM signature on message from example.com as outbound MTA 'From'.

* S/MIME Signature on message from example.com with embedded logotype
information.

* TLS with forward secrecy to the example.net mail server, using DNSSEC and
DANE to authenticate the IP address and certificate.

* S/MIME encryption under example.net EV certificate

* S/MIME encryption under b...@example.net personal certificate.

[Hold onto flames about key validation and web of trust for the time being.
Accepting the fact that S/MIME has won the message format deployment battle
does not mean we are obliged to use the S/MIME PKI unmodified or require
use of CA validated certificates.]


Looking at the Certificate Transparency work, I see a big problem with
getting the transparency to be 'end-to-end', particularly with Google's
insistence on no side channels and ultra-low latency.

To me the important thing about transparency is that it is possible for
anyone to audit the key signing process from publicly available
information. Doing the audit at the relying party end prior to every
reliance seems a lower priority.

In particular, there are some types of audit that I don't think are
feasible to do in the endpoint. The validity of a CT audit is only as good
as your newest notary timestamp value. It is really hard to guarantee that
the endpoint is not being spoofed by a PRISM capable adversary without
going to techniques like quorate checking which I think are completely
practical in a specialized tracker but impractical to do in an iPhone or
any other device likely to spend much time turned off or otherwise
disconnected from the network.



-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] End to end

2013-09-16 Thread Phillip Hallam-Baker
On Mon, Sep 16, 2013 at 3:14 PM, Ben Laurie  wrote:

>
> On 16 September 2013 18:49, Phillip Hallam-Baker  wrote:
>
>> To me the important thing about transparency is that it is possible for
>> anyone to audit the key signing process from publicly available
>> information. Doing the audit at the relying party end prior to every
>> reliance seems a lower priority.
>>
>
> This is a fair point, and we could certainly add on to CT a capability to
> post-check the presence of a pre-CT certificate in a log.
>

Yeah, not trying to attack you or anything. Just trying to work out exactly
what the security guarantees provided are.



> In particular, there are some type of audit that I don't think it is
>> feasible to do in the endpoint. The validity of a CT audit is only as good
>> as your newest notary timestamp value. It is really hard to guarantee that
>> the endpoint is not being spoofed by a PRISM capable adversary without
>> going to techniques like quorate checking which I think are completely
>> practical in a specialized tracker but impractical to do in an iPhone or
>> any other device likely to spend much time turned off or otherwise
>> disconnected from the network.
>>
>
> I think the important point is that even infrequently connected devices
> can _eventually_ reveal the subterfuge.
>

I doubt it is necessary to go very far to deter PRISM-type surveillance, if
that program continues very long at all. The knives are out for Alexander,
hence the story about his Enterprise-bridge operations room.

Now the Russians...


Do we need to be able to detect PRISM-type surveillance in the infrequently
connected device, or is it sufficient to be able to detect it somewhere?

One way to get a good timestamp into a phone might be to use a QR code.
This is, I think, as large as would be needed:

[image: Inline image 1]
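For scale, here is a hedged sketch of what such a QR payload might carry: an 8-byte timestamp, a 32-byte log head, and a 64-byte signature, about 104 bytes in all, which fits comfortably in a modest QR code. The layout is my assumption, not anything specified in this thread, and the "signature" below is faked with HMAC purely to keep the sketch self-contained:

```python
import hashlib
import hmac
import struct
import time

def notary_qr_payload(log_head: bytes, notary_key: bytes) -> bytes:
    """Build the byte string a notary might display as a QR code.

    A phone that is rarely online could scan this to obtain a fresh,
    authenticated timestamp and log head without a network round trip.
    """
    assert len(log_head) == 32
    stamp = struct.pack(">Q", int(time.time()))                # 8-byte POSIX time
    body = stamp + log_head                                    # 40 bytes
    sig = hmac.new(notary_key, body, hashlib.sha512).digest()  # 64-byte stand-in
    return body + sig                                          # 104 bytes total
```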



-- 
Website: http://hallambaker.com/

Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-16 Thread Phillip Hallam-Baker
On Mon, Sep 16, 2013 at 2:48 PM, zooko  wrote:

> On Sun, Sep 08, 2013 at 08:28:27AM -0400, Phillip Hallam-Baker wrote:
> >
> > I think we need a different approach to source code management. Get rid
> of
> > user authentication completely, passwords and SSH are both a fragile
> > approach. Instead every code update to the repository should be signed
> and
> > recorded in an append only log and the log should be public and enable
> any
> > party to audit the set of updates at any time.
> >
> > This would be 'Code Transparency'.
>
> This is a very good idea, and eminently doable. See also Ben Laurie's blog
> post:
>
> http://www.links.org/?p=1262
>
> > Problem is we would need to modify GIT to implement.
>
> No, simply publish the git commits (hashes) in a replicated, append-only
> log.
>

Well, people-bandwidth is always a problem.

But what I want is not just the ability to sign, I want to have a mechanism
to support verification and checking of the log etc. etc.
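The quoted suggestion, publishing commit hashes in a replicated append-only log, can be sketched in a few lines. This is a minimal illustration under my own assumptions, not Git's internals or any agreed format; a real log would also need replication and signatures:

```python
import hashlib

GENESIS = "0" * 64

def log_append(log, commit_hash):
    """Bind a new commit hash to the current log head.

    Any retroactive edit to an earlier entry changes every later head,
    so tampering is detectable by anyone holding a recent head value.
    """
    prev = log[-1][1] if log else GENESIS
    head = hashlib.sha256((prev + commit_hash).encode()).hexdigest()
    log.append((commit_hash, head))
    return head

def log_verify(log):
    """Recompute the whole chain; True only if nothing was altered."""
    prev = GENESIS
    for commit_hash, head in log:
        if hashlib.sha256((prev + commit_hash).encode()).hexdigest() != head:
            return False
        prev = head
    return True
```

Auditing then amounts to comparing head values obtained over independent channels.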



> So what's the next step? We just need the replicated, append-only log.
>

Where I am headed is to first divide up the space for PRISM-PROOF email
between parts that are solved and only need good execution (message
formats, mail integration, etc) and parts that are or may be regarded as
research (key distribution, key signing, PKI).

Once that is done I am going to build myself a very lightweight development
testbed based on an SMTP/SUBMIT + IMAP proxy.

But hopefully other people will see that there is general value to such a
scheme and work on:

[1] Enabling MUAs to make use of research built on the testbed.

[2] Enabling legacy PKI to make use of the testbed.

[3] Research schemes


Different people have different skills and different interests. My interest
is on the research side but other folk just want to write code to a clear
spec. Anyone going for [3] has to understand at the outset that whatever
they do is almost certain to end up being blended with other work before a
final standard is arrived at. We cannot afford another PGP versus S/MIME
debacle.

On the research side, I am looking at something like Certificate
Transparency but with a two layer notary scheme. Instead of the basic
infrastructure unit being a CA, the basic infrastructure unit is a Tier 2
append-only log. To get people to trust your key you get it signed by a
trust provider. Anyone can be a trust provider but not every trust provider
is trusted by everyone. A CA is merely a trust provider that issues policy
and practices statements and is subject to third party audit.


The Tier 2 notaries get their logs timestamped by at least one Tier 1
notary and the Tier 1 notaries cross notarize.

So plugging code signing projects into a Tier 2 notary would make a lot of
sense.
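A toy sketch of the two-tier relationship described above (all names here are hypothetical; a real notary would sign its heads rather than merely hash them):

```python
import hashlib

def notarize(chain, value):
    """Fold a value (e.g. a Tier 2 log head) into a notary's own hash chain."""
    prev = chain[-1] if chain else "0" * 64
    receipt = hashlib.sha256((prev + value).encode()).hexdigest()
    chain.append(receipt)
    return receipt

# Cross-notarization between two Tier 1 notaries: each records the other's
# latest head, so rewriting either history means rewriting both.
tier2_head = "ab" * 32          # stand-in for a Tier 2 append-only log head
notary_a, notary_b = [], []
notarize(notary_a, tier2_head)  # Tier 2 head lands in notary A
notarize(notary_b, notary_a[-1])
notarize(notary_a, notary_b[-1])
```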

We could also look at getting SourceForge and GitHub to provide support.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-17 Thread Phillip Hallam-Baker
My phrase PRISM-Proofing seems to have created some interest in the press.

PRISM-Hardening might be more important, especially in the short term. The
objective of PRISM-hardening is not to prevent an attack absolutely; it is
to increase the work factor for an attacker attempting ubiquitous
surveillance.

Examples include:

Forward Secrecy: Increases work factor from one public key per host to one
public key per TLS session.

Smart Cookies: Using cookies as authentication secrets and passing them as
plaintext bearer tokens is stupid. It means that all an attacker needs to
do is to compromise TLS once and they have the authentication secret. The
HTTP Session-ID draft I proposed a while back reduces the window of
compromise to the first attack.
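In that spirit, here is a hedged sketch of the cookie-hardening idea: the client keeps the secret and sends only a per-request MAC, so a single TLS compromise captures one proof rather than the long-lived secret. This is purely illustrative and is not the wire format of the Session-ID draft:

```python
import hashlib
import hmac

def request_proof(session_secret: bytes, request_line: bytes) -> str:
    """Send this instead of the raw cookie: an HMAC bound to one request."""
    return hmac.new(session_secret, request_line, hashlib.sha256).hexdigest()

def verify_proof(session_secret: bytes, request_line: bytes, proof: str) -> bool:
    """Server side: recompute and compare in constant time."""
    return hmac.compare_digest(request_proof(session_secret, request_line), proof)
```

An attacker who captures one exchange gets a proof that is useless for any other request, raising the work factor from one interception to one per request.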


I am sure there are other ways to increase the work factor.



-- 
Website: http://hallambaker.com/

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-18 Thread Phillip Hallam-Baker
A few clarifications

1) PRISM-Proof is a marketing term

I have not spent a great deal of time looking at the exact capabilities of
PRISM vs the other programs involved because from a design point they are
irrelevant. The objective is to harden/protect the infrastructure from any
ubiquitous, indiscriminate intercept capability like the one Gen Alexander
appears to have constructed.

PRISM-class here is merely a handy label for a class of attack where the
attacker can spend upwards of $100 million to perform an attack which
potentially affects every Internet user. PRISM-class is a superset of
PRISM, BULLRUN, MANASSAS, etc. etc.


2) SSL is not designed to resist government intercept

Back in 1993-6 when I was working on Internet security and payments at CERN
and the Web Consortium the priority was to make payments on the Web, not
make it resistant to government intercept. The next priority was to
establish the authenticity of news Web sites. There were several reasons
for that set of priorities, one of which was that the technology we had
available was limited and it was impractical to do more than one public key
operation per session and it was only practical to use public key some of
the time. Servers of the day simply could not handle the load otherwise.

Twenty years later, much has changed and we can do much more. The designs
do not need to be constrained in the way they were then.

It is not a question of whether email is encrypted in transport OR at rest;
we need both. There are different security concerns at each layer.


3) We need more than one PKI for Web and email security.

PGP and S/MIME have different key distribution models. Rather than decide
which is 'better' we need to accept that we need both approaches and in
fact need more.

If I am trying to work out if an email was really sent by my bank then I
want a CA type security model because less than 0.1% of customers are ever
going to understand a PGP type web of trust for that particular purpose.
But it's the bank sending the mail, not an individual at the bank.

Re: [Cryptography] An NSA mathematician shares his from-the-trenches view of the agency's surveillance activities

2013-09-18 Thread Phillip Hallam-Baker
On Tue, Sep 17, 2013 at 8:01 PM, John Gilmore  wrote:

> Techdirt takes apart his statement here:
>
>
> https://www.techdirt.com/articles/20130917/02391824549/nsa-needs-to-give-its-rank-and-file-new-talking-points-defending-surveillance-old-ones-are-stale.shtml
>
>   NSA Needs To Give Its Rank-and-File New Talking Points Defending
>   Surveillance; The Old Ones Are Stale
>   from the that's-not-really-going-to-cut-it dept
>   by Mike Masnick, Tue, Sep 17th 2013
>
>   It would appear that the NSA's latest PR trick is to get out beyond
>   the top brass -- James Clapper, Keith Alexander, Michael Hayden and
>   Robert Litt haven't exactly been doing the NSA any favors on the PR
>   front lately -- and get some commentary from "the rank and file."
>   ZDNet apparently agreed to publish a piece from NSA mathematician/
>   cryptanalyst Roger Barkan in which he defends the NSA using a bunch
>   of already debunked talking points. What's funny is that many of
>   these were the talking points that the NSA first tried out back in
>   June and were quickly shown to be untrue. However, let's take a
>   look. It's not that Barkan is directly lying... it's just that he's
>   setting up strawmen to knock down at a record pace.


As someone who has met Hayden, I do not think his words are necessarily
untrue; they may be out of date. It appears that there was a major change
at the NSA after his departure. In particular, the number of external
contractors seems to have increased markedly (based on the number and type
of job adverts from SAIC, Booz Allen, Van Dyke, etc.).

The Enterprise-bridge control center certainly does not seem to be Hayden's
style; he is not the type to build a showboat like that.


After 9/11 we discovered that our view of the cryptowars was completely
false in one respect. Louis Freeh wasn't building a panopticon, he simply
had no comprehension of the power of the information he was demanding the
ability to collect. The FBI computer systems were antiquated, lacking the
ability to do keyword search on two terms.

I rather suspect that Alexander is similarly blind to the value of the
information the system is collecting. They might well be telling the truth
when they told the court that the system was so compartmentalized and
segregated nobody knew what it was doing.

For example, did the NSA people who thought it a good wheeze to trade raw
SIGINT on US citizens to the Israelis understand what they were passing on?
They certainly don't seem to know the past history of US-Israeli
'cooperation': only last year an Israeli firm was trying to sell intercept
equipment to Iran through an intermediary, and the story of how the Chinese
got an example of the Stinger missile to copy is well known. My country has
had an arms embargo on Israel for quite a while due to breach of Israeli
undertakings not to use military weapons against civilians.


That does not make the situation any less dangerous, it makes it more so.

What Barkan does not mention is that we know the NSA's internal controls
have collapsed completely; Snowden's disclosures prove that. Snowden should
never have had access to the information he has disclosed.

As with gwbush53.com, the intelligence gathered through PRISM-class
intercepts will undoubtedly be spread far and wide. Anything Snowden knows,
China and Russia will know.


The fact that nothing has been said on that publicly by the NSA
spokespeople is something of a concern. They have a big big problem and
heads should be rolling. I can't see how Clapper and Alexander can remain
given the biggest security breach in NSA history on their watch.
-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-19 Thread Phillip Hallam-Baker
On Wed, Sep 18, 2013 at 5:23 PM, Lucky Green wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 2013-09-14 08:53, Peter Fairbrother wrote:
>
> > I get that 1024 bits is about on the edge, about equivalent to 80
> > bits or a little less, and may be crackable either now or sometime
> > soon.
>
> Moti Young and others wrote a book back in the 90's (or perhaps the 80's),
> that detailed the strength of various RSA key lengths over time. I am
> too lazy to look up the reference or locate the book on my bookshelf.
> Moti: help me out here? :-)
>
> According to published reports that I saw, NSA/DoD pays $250M (per
> year?) to backdoor cryptographic implementations. I have knowledge of
> only one such effort. That effort involved DoD/NSA paying $10M to a
> leading cryptographic library provider to both implement and set as
> the default the obviously backdoored Dual_EC_DRBG as the default RNG.
>
> This was $10M wasted. While this vendor may have had a dominating
> position in the market place before certain patents expired, by the
> time DoD/NSA paid the $10M, few customers used that vendor's
> cryptographic libraries.
>
> There is no reason to believe that the $250M per year that I have seen
> quoted as used to backdoor commercial cryptographic software is spent
> to any meaningful effect.
>

The most corrosive thing about the whole affair is the distrust it has sown.

I know a lot of ex-NSA folk and none of them has ever once asked me to drop
a backdoor. And I have worked very closely with a lot of government
agencies.


Your model is probably wrong. Rather than going out to a certain crypto
vendor and asking them to drop a backdoor, I think they choose the vendor
on the basis that they have a disposition to a certain approach and then
they point out that given that they have a whole crypto suite based on EC
wouldn't it be cool to have an EC based random number generator.

I think that the same happens in IETF. I don't think it very likely Randy
Bush was bought off by the NSA when he blocked deployment of DNSSEC for ten
years by killing OPT-IN. But I suspect that a bunch of folk were whispering
in his ear that he needed to be strong and resist what was obviously a
blatant attempt at commercial sabotage etc. etc.


I certainly think that the NSA is behind the attempt to keep the Internet
under US control via ICANN, which is to all intents and purposes a quango
controlled by
the US government. For example, ensuring that the US has the ability to
impose a digital blockade by dropping a country code TLD out of the root.
Right now that is a feeble threat because ICANN would be over in a minute
if they tried. But deployment of DNSSEC will give them the power to do that
and make it stick (and no, the key share holders cannot override the veto,
the shares don't work without the key hardware).

A while back I proposed a quorum signing scheme that would give countries
like China and Brazil the ability to assure themselves that they were not
subject to the threat of future US capture. I have
also proposed that countries have a block of IPv6 and BGP-AS space assigned
as a 'Sovereign Reserve'. Each country would get a /32 which is more than
enough to allow them to ensure that an artificial shortage of IPv6
addresses can't be used as a blockade. If there are government folk reading
this list who are interested I can show them how to do it without waiting
on permission from anyone.


-- 
Website: http://hallambaker.com/

[Cryptography] Cryptographic mailto: URI

2013-09-19 Thread Phillip Hallam-Baker
I am in mid design here but I think I might have something of interest.

Let us say I want to send an email to al...@example.com securely.

Now obviously (to me anyway) we can't teach more than a small fraction of
the net to use any identifier other than the traditional email address.

So we need some sort of directory infrastructure to allow discovery of
those email addresses and it would be good to be able to reuse existing
directories if at all possible.


But how do we insert the email addresses into a directory like LinkedIn?
Well you can add a URI into your account. So what if the URI is of the form:

ppid:al...@example.com:example.net:
Syd6BMXje5DLqHhYSpQswhPcvDXj+8rK9LaonAfcNWM

Where

al...@example.com is Alice's email address for secure communications

example.net is a server which will resolve the reference by means of a
simple HTTP query using the pattern http://<server>/.well-known/ppid/<fingerprint>

"Syd...NWM" is the Base64 hash of OID-SHA256 + SHA256(X)

X is a public key that signs a document (probably JSON) that specifies:

* X
* Alice's certificate(s)
* Alice's email receipt policy: whether to always encrypt, what message
formats are supported
* links to whatever additional advice information might help convince a
relying party the key is genuine, such as a CT log
* reliance policy (is this key for public use or restricted)
* reporting policy (for future changes)
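A minimal sketch of constructing the fingerprint and URI above. The hashing order is my assumption: "Base64 hash of OID-SHA256 + SHA256(X)" is read here as hashing the DER-encoded SHA-256 algorithm OID concatenated with the digest of the key, which yields a 32-byte value consistent with the 43-character example in the post. Nothing here is a settled format:

```python
import base64
import hashlib

# DER encoding of the SHA-256 OID (2.16.840.1.101.3.4.2.1); treating
# "OID-SHA256" as this byte string is an assumption.
SHA256_OID_DER = bytes.fromhex("0609608648016503040201")

def ppid_fingerprint(pubkey: bytes) -> str:
    """Fingerprint of the signing key X, Base64 without padding."""
    inner = hashlib.sha256(pubkey).digest()
    outer = hashlib.sha256(SHA256_OID_DER + inner).digest()
    return base64.b64encode(outer).decode().rstrip("=")

def ppid_uri(address: str, resolver: str, pubkey: bytes) -> str:
    """Assemble the ppid: URI a user would paste into a directory entry."""
    return "ppid:{}:{}:{}".format(address, resolver, ppid_fingerprint(pubkey))
```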

So to use this as a mechanism for ghetto key distribution, receivers would
add the URI to their account. Or let their PKI discovery agent do it for
them.


Senders would enable their PKI discovery agent to access their LinkedIn
account.

It would slurp down the data once a day (say) and keep it in a cache for
use by that user alone unless it is marked public when any user of the PKI
discovery agent can make use of it.

It would attempt to validate the information obtained, possibly resulting
in a report if it detected a change in a previously registered key that had
not been properly countersigned by the old key.

The validated info would then be used to encrypt the outbound mail
according to the specified policy.


Notes:

1) This is only about key discovery, not validation.

2) Better to send email encrypted under a key that is not validated than in
the clear.

3) A MUA should offer a 'force encryption' option, however. In that case it
would barf if the key provided didn't meet the validation criteria.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-19 Thread Phillip Hallam-Baker
On Wed, Sep 18, 2013 at 5:50 PM, Viktor Dukhovni
wrote:

> On Wed, Sep 18, 2013 at 08:47:17PM +, Viktor Dukhovni wrote:
>
> > On Wed, Sep 18, 2013 at 08:04:04PM +0100, Ben Laurie wrote:
> >
> > > > This is only realistic with DANE TLSA (certificate usage 2 or 3),
> > > > and thus will start to be realistic for SMTP next year (provided
> > > > DNSSEC gets off the ground) with the release of Postfix 2.11, and
> > > > with luck also a DANE-capable Exim release.
> > >
> > > What's wrong with name-constrained intermediates?
> >
> > X.509 name constraints (critical extensions in general) typically
> > don't work.
>
> And public CAs don't generally sell intermediate CAs with name
> constraints.  Rather undercuts their business model.
>
>
This is no longer the case. Best practice is now considered to be to use
name constraints but not to mark them critical.

This is explicitly a violation of PKIX, which insists that a name
constraints extension be marked critical. Following that rule would make
name constraints impossible to use, as critically-marked constraints break
in Safari and a few other browsers.

The refusal to make the obvious change is either because people do not
understand the meaning of the critical bit, or the result of some of that
$250 million being felt in the PKIX group. As I pointed out at RSA, the use
of name constraints might well have prevented the FLAME attack from working.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-21 Thread Phillip Hallam-Baker
On Thu, Sep 19, 2013 at 4:15 PM, Ben Laurie  wrote:

>
>
>
> On 18 September 2013 21:47, Viktor Dukhovni wrote:
>
>> On Wed, Sep 18, 2013 at 08:04:04PM +0100, Ben Laurie wrote:
>>
>> > > This is only realistic with DANE TLSA (certificate usage 2 or 3),
>> > > and thus will start to be realistic for SMTP next year (provided
>> > > DNSSEC gets off the ground) with the release of Postfix 2.11, and
>> > > with luck also a DANE-capable Exim release.
>> >
>> > What's wrong with name-constrained intermediates?
>>
>> X.509 name constraints (critical extensions in general) typically
>> don't work.
>>
>
> No. They typically work. As usual, Apple are the fly in the ointment.
>

The key to making them work is to NOT follow the IETF standard and to NOT
mark the extension critical.

If the extension is marked critical as RFC 5280 demands then the
certificates will break in Safari (and very old versions of some other top
tier browsers).

If the extension is not marked critical as CABForum and Mozilla recommend
then nothing breaks and the certificate chain will be correctly processed
by every current edition of every top tier browser apart from Safari.
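For concreteness, a hypothetical openssl.cnf extension section for such a name-constrained intermediate. The section name is made up; per the practice described above, the nameConstraints line deliberately omits the `critical` flag that RFC 5280 would require:

```ini
[ v3_name_constrained_intermediate ]
basicConstraints = critical, CA:TRUE, pathlen:0
keyUsage = critical, keyCertSign, cRLSign
# Non-critical, per the CABForum/Mozilla practice described above; strict
# RFC 5280 compliance would instead demand
# "nameConstraints = critical, permitted;DNS:.example.com"
nameConstraints = permitted;DNS:.example.com
```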


The peculiar insistence that the extension be marked critical despite the
obvious fact that it breaks stuff is one of the areas where I suspect NSA
interference.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] PRISM-Proofing and PRISM-Hardening

2013-09-21 Thread Phillip Hallam-Baker
On Thu, Sep 19, 2013 at 5:11 PM, Max Kington  wrote:

>
> On 19 Sep 2013 19:11, "Bill Frantz"  wrote:
> >
> > On 9/19/13 at 5:26 AM, rs...@akamai.com (Salz, Rich) wrote:
> >
> >>> I know I would be a lot more comfortable with a way to check the mail
> against a piece of paper I
> >>
> >> received directly from my bank.
> >>
> >> I would say this puts you in the sub 1% of the populace.  Most people
> want to do things online because it is much easier and "gets rid of paper."
>  Those are the systems we need to secure.  Perhaps another way to look at
> it:  how can we make out-of-band verification simpler?
> >
> >
> > Do you have any evidence to support this contention? Remember we're
> talking about money, not just social networks.
> >
> > I can support mine. ;-)
> >
> > If organizations like Consumers Union say that you should take that
> number from the bank paperwork you got when you signed up for an account,
> or signed up for online banking, or got with your monthly statement, or got
> as a special security mailing and enter it into your email client, I
> suspect a reasonable percentage of people would do it. It is, after all a
> one time operation.
>
> As with other themes though, one size does not fit all. The funny thing
> being that banks are actually extremely adept at doing out of band paper
> verification. Secure printing is born out of financial transactions,
> everything from cheques to cash to PIN notification.
>
> I think it was Phillip who said that other trust models need to be
> developed. I'm not as down on the Web of trust as others are but I strongly
> believe that there has to be an ordered set of priorities. Usability has to
> be right up there as a near-peer with overall system security. Otherwise as
> we've seen a real attack in this context is simply to dissuade people to
> use it and developers, especially of security oriented systems can do that
> of their own accord.
>
> If you want to get your systems users to help with out of band
> verification get them 'talking' to each other. Perry said that our social
> networks are great for keeping spam out of our mailboxes yet were busy
> trying to cut out the technology that's driven all of this.
>
> Out of band for your banking might mean security printing techniques and
> securing your email, phoning your friends.
>

Bear in mind that securing financial transactions is exactly what we
designed the WebPKI to do and it works very well at that.

Criminals circumvent the WebPKI rather than trying to defeat it. If they
did start breaking the WebPKI then we can change it and do something
different.


But financial transactions are easier than protecting the privacy of
political speech because it is only money that is at stake. The criminals
are not interested in spending $X to steal $0.5X. We can do other stuff to
raise the cost of attack if it turns out we need to do that.

So I think what we are going to want is more than one trust model depending
on the context and an email security scheme has to support several.


If we want this to be a global infrastructure we have 2.4 billion users to
support. If we spend $0.01 per user on support, that is $24 million. It is
likely to be a lot more than that per user.

Enabling commercial applications of the security infrastructure is
essential if we are to achieve deployment. If the commercial users of email
can make a profit from it then we have at least a chance to co-opt them to
encourage their customers to get securely connected.

One of the reasons the Web took off like it did in 1995 was that Microsoft
and AOL were both spending hundreds of millions of dollars advertising the
benefits to potential users. Bank of America, PayPal, etc. are potential
allies here.




-- 
Website: http://hallambaker.com/

[Cryptography] Specification: Prism Proof Email

2013-09-21 Thread Phillip Hallam-Baker
We need an email security infrastructure and recent events demonstrate that
the infrastructure we develop needs to be proof against PRISM-class attacks.

By PRISM-class I mean an attack that attempts pervasive surveillance with
budgets in excess of $100 million rather than the PRISM program in
particular.

Neither OpenPGP nor S/MIME is capable of providing protection against this
class of attack because they are not widely enough used. We can only hope
for these to be useful if at least 5% of Internet users start sending mail
securely.

But while the legacy protocols are not sufficient, 95% of the existing work
is fine and does not need to be repeated although there may be some details
of execution that can be improved.

The part that is going to need new research is in the area of trust models.
As someone who has seen the documents said to me this week, given a choice
between A and B, the NSA does both. We have to do the same. Rather than
have a pointless argument about whether Web of Trust or PKIX is the way to
go, let everyone do both. Let people get a certificate from a CA and then
get it endorsed by their peers: belt and braces.

The idea in this draft is to split up the problem space so that people who
know email clients can write code to support any of the research ideas that
might be proposed and any of the research ideas can be used with any of the
mail clients that have been enabled.


The draft is to be found at:

http://www.ietf.org/id/draft-hallambaker-prismproof-dep-00.txt

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Cryptographic mailto: URI

2013-09-21 Thread Phillip Hallam-Baker
On Fri, Sep 20, 2013 at 4:36 AM, Dirk-Willem van Gulik  wrote:

>
> Op 19 sep. 2013, om 19:15 heeft Phillip Hallam-Baker 
> het volgende geschreven:
>
> > Let us say I want to send an email to al...@example.com securely.
> ...
> > ppid:al...@example.com:example.net:
> Syd6BMXje5DLqHhYSpQswhPcvDXj+8rK9LaonAfcNWM
> ...
> > example.net is a server which will resolve the reference by means of a
> simple HTTP query using the pattern http:///.well-known/ppid/
> > "Syd...NWM" is the Base64 hash of OID-SHA256 + SHA256(X)
> ..
> > So to use this as a mechanism for ghetto key distribution receivers
> would add the URI into their account. Or let their PKI discovery agent do
> it for them.
>
> We've been experimenting with much the same. With two twists. Basic
> principle is the same.
>
> We use:
>
> - <namespace>:<id>
>
> as to keep it short. ID is currently a <hash>; namespace is a 2-3 char
> identifier. We then construct with this a 'hardcoded' zone name:
>
> <namespace>.fqdn-in-some-tld.
>
> which is to have a (signed) entry for <id> in DNS:
>
> <id>.<namespace>.fqdn-in-some-tld.
>
> which is in fact a first-come, first-served secure dynamic dns updatable
> zone containing the public key.
>
> Which once created allows only updating to those (still) having the
> private key of the public key that signed the initial claim of that <id>.
>

Interesting, though I suspect this is attempting to meet different trust
requirements than I am.
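The fingerprint construction I quoted above ('the Base64 hash of OID-SHA256 +
SHA256(X)') can be sketched roughly as follows; the double-hash reading, the
unpadded Base64, and the example address are my assumptions, not the draft's
normative encoding:

```python
import base64
import hashlib

# DER prefix for the SHA-256 OID (2.16.840.1.101.3.4.2.1).
SHA256_OID = bytes.fromhex("0609608648016503040201")

def ppid_fingerprint(data: bytes) -> str:
    # One plausible reading of 'Base64 hash of OID-SHA256 + SHA256(X)':
    # hash the OID prefix together with the inner digest, then Base64
    # without padding (which matches the 43-character example string).
    digest = hashlib.sha256(SHA256_OID + hashlib.sha256(data).digest()).digest()
    return base64.b64encode(digest).rstrip(b"=").decode("ascii")

# Hypothetical address and key data, for illustration only.
uri = "ppid:alice@example.com:example.net:" + ppid_fingerprint(b"alice key data")
```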

A couple of days ago I spoke with someone well known here who has seen the
Snowden files. His take was that when the NSA has a choice of doing A or B
it always does both.

I think we need to adopt the same approach but in a way that lets all the
various approaches work together. It should not be necessary for me to
install five plug ins into Thunderbird to support five different flavors of
researchy trust infrastructure.

A better approach is to have one plug-in, or better still native support, for
a connector to a Web Service that can then perform all the researchy trust
infrastructure navigation on my behalf. The Web Service can be shared between
users, and when a new researchy trust infrastructure is proposed, all that is
necessary to add it into the mix is to extend the Web Service; everyone using
it can then try out the new scheme and see if it is practical.


This approach does introduce the risk that the web service might be
compromised, particularly if the client is unable to perform at least some
degree of local validation on the keys. But the long term expectation would
be that support for trust infrastructures that prove to be stable, widely
used, and effective will eventually migrate into the client.


At this point the experimental research question should be 'is this trust
infrastructure practical?'. We can get a very good idea of the security
properties of the system by looking at how people use it and doing the math.


-- 
Website: http://hallambaker.com/

[Cryptography] InfoRequest: How to configure email clients to accept encrypted S/MIME

2013-09-21 Thread Phillip Hallam-Baker
Working on Prism Proof email, I could use information on how to configure
various email clients to support S/MIME decryption using a previously
generated key package.

While descriptions of how the user can configure S/MIME would be nice, what
I am really after is information on the internals so that it would be
possible for a tool to do this configuration for the user automatically.

Info on where the account configuration data is stored would also be very
useful.


The end goal here is a tool that will generate and manage private keys and
configure users' email clients so that they can read mail encrypted under
them.

If we have the 'how to read encrypted mail well' side of things sorted using
this tool, that leaves only 'how to send encrypted mail well' as a research
problem.

-- 
Website: http://hallambaker.com/

[Cryptography] The hypothetical random number generator backdoor

2013-09-24 Thread Phillip Hallam-Baker
So we think there is 'some kind' of backdoor in a random number generator.
One question is how the EC math might make that possible. Another is how
might the door be opened.


I was thinking about this and it occurred to me that it is fairly easy to
get a public SSL server to provide a client with a session key - just ask
to start a session.

Which suggests that maybe the backdoor is of the form that if you know
nonce i, and the private key to the backdoor, that reduces the search space
for finding nonce i+1.

Or maybe there is some sort of scheme where you get a lot of nonces from
the random number generator, tens of thousands, and that allows the seed to
be unearthed.


Either way, the question is how to stop this side channel attack. One
simple way would be to encrypt the nonces from the RNG under a secret key
generated in some other fashion.

nonce = E (R, k)

Or hashing the RNG output and XORing with it

nonce = R XOR H(R)


Either way, there is an extra crypto system in the way that has to be
broken if a random number generator turns out to have some sort of
relationship between sequential outputs.
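A minimal sketch of both transforms follows; HMAC-SHA256 stands in for the
cipher E here because the Python standard library ships no block cipher, so
treat this as an illustration of the structure, not a vetted construction:

```python
import hashlib
import hmac

def nonce_xor(r: bytes) -> bytes:
    """nonce = R XOR H(R): hash the RNG output and XOR it back in, so the
    raw generator output never appears on the wire."""
    h = hashlib.sha256(r).digest()
    return bytes(a ^ b for a, b in zip(r, h))

def nonce_encrypted(r: bytes, k: bytes, counter: int) -> bytes:
    """nonce = E(R, k): CTR-style encryption of the RNG output under an
    independently generated key k, with HMAC-SHA256 standing in for the
    block cipher keystream."""
    keystream = hmac.new(k, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return bytes(a ^ b for a, b in zip(r, keystream))
```

XORing the same keystream in twice recovers R, so nothing is lost; the point
is that an observer of the nonces sees neither raw RNG output nor anything
correlated across outputs unless k leaks.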


-- 
Website: http://hallambaker.com/

Re: [Cryptography] The hypothetical random number generator backdoor

2013-09-25 Thread Phillip Hallam-Baker
On Tue, Sep 24, 2013 at 10:59 AM, Jerry Leichter  wrote:

> On Sep 22, 2013, at 8:09 PM, Phillip Hallam-Baker 
> wrote:
> > I was thinking about this and it occurred to me that it is fairly easy
> to get a public SSL server to provide a client with a session key - just
> ask to start a session.
> >
> > Which suggests that maybe the backdoor [for an NSA-spiked random number
> generator] is of the form that ... you get a lot of nonces [maybe just one]
> from the random number generator ... and that allows the [next one to be
> predicted more easily or even the] seed to be unearthed.  One simple way
> [to stop this] would be to encrypt the nonces from the RNG under a secret
> key generated in some other fashion.
> >
> > nonce = E (R, k)
> >
> > Or hashing the RNG output and XORing with it
> >
> > nonce = R  XOR H(R)
> You shifted from "random value" to "nonce".  Given the severe effects on
> security that using a "nonce" - a value that is simply never repeated in a
> given cryptographic context; it may be predictable, even fixed - to a
> "random value", one needs to be careful about the language.  (There's
> another layer as well, partly captured by "unpredictable value" but not
> really:  Is it a value that we must plan on the adversary learning at some
> point, even though he couldn't predict it up front, or must it remain
> secret?  The random values in PFS are only effective in providing forward
> security if they remain secret forever.)
>
> Anyway, everything you are talking about here is *supposed* to be a random
> value.  Using E(R,k) is a slightly complicated way of using a standard
> PRNG:  The output of a block cipher in counter mode.  Given (a) the
> security of the encryption under standard assumptions; (b) the secrecy and
> randomness of k; the result is a good PRNG.  (In fact, this is pretty much
> exactly one of the Indistinguishability assumptions.  There are subtly
> different forms of those around, but typically the randomness of input is
> irrelevant - these are semantic security assumptions so knowing something
> about the input can't help you.)  Putting R in there can't hurt, and if the
> way you got R really is random then even if k leaks or E turns out to be
> weak, you're still safe.  However ... where does k come from?  To be able
> to use any of the properties of E, k itself must be chosen at random.  If
> you use the same generator as you use to find R, it's not clear that this
> is much stronger than R itself.  If you have some assured way of getting a
> random k - why not use it for R itself?  (This might be worth it if you can
> generate a k you believe in but only at a much lower rate than you can
> generate an R directly.  Then you can "stretch" k over a number of R
> values.  But I'd really think long and hard about what you're assuming
> about the various components.)
>
> BTW, one thing you *must not* do is have k and the session key relate to
> each other in any simple way.
>
> For hash and XOR ... no standard property of any hash function tells you
> anything about the properties of R XOR H(R).  Granted, for the hash
> functions we generally use, it probably has about the same properties; but
> it won't have any more than that.  (If you look at the structure of classic
> iterated hashes, the last thing H did was compute S = S + R(S), where S was
> the internal state and R was the round function.  Since R is usually
> invertible, this is the only step that actually makes the whole thing
> non-invertible.  Your more-or-less repetition of the same operation
> probably neither helps nor hinders.)
>
> At least if we assume the standard properties, it's hard to get R from
> H(R) - but an attacker in a position to try a large but (to him) tractable
> number of guesses for R can readily check them all.  Using R XOR H(R) makes
> it no harder for him to try that brute force search.  I much prefer the
> encryption approach.
>


There are three ways a RNG can fail

1) Insufficient randomness in the input
2) Losing randomness as a result of the random transformation
3) Leaking bits through an intentional or unintentional side channel

What I was concerned about in the above was (3).

I prefer the hashing approaches. While it is possible that there is a
matched set of weaknesses, I find that implausible.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-25 Thread Phillip Hallam-Baker
On Sun, Sep 22, 2013 at 2:00 PM, Stephen Farrell
wrote:

>
>
> On 09/22/2013 01:07 AM, Patrick Pelletier wrote:
> > "1024 bits is enough for anyone"
>
> That's a mischaracterisation I think. Some folks (incl. me)
> have said that 1024 DHE is arguably better than no PFS and
> if current deployments mean we can't ubiquitously do better,
> then we should recommend that as an option, while at the same
> time recognising that 1024 is relatively short.
>

And the problem appears to be compounded by doofus legacy implementations
that don't support PFS greater than 1024 bits. This comes from the
misunderstanding that DH key sizes only need to be half the RSA length.
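For reference, the approximate NIST SP 800-57 equivalences make the point
that a finite-field DH modulus needs to be the same size as an RSA modulus
for comparable strength, not half of it:

```python
# Approximate NIST SP 800-57 security-strength equivalences, in bits.
# RSA and finite-field DH modulus sizes share one column: they match.
STRENGTH = {
    80:  {"rsa_or_ffdh_modulus": 1024, "ecc": 160, "symmetric": 80},
    112: {"rsa_or_ffdh_modulus": 2048, "ecc": 224, "symmetric": 112},
    128: {"rsa_or_ffdh_modulus": 3072, "ecc": 256, "symmetric": 128},
}
```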

So to go above 1024 bits PFS we have to either

1) Wait for all the servers to upgrade (i.e. never do it, because they won't
upgrade)

2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
or above'.


I suggest (2)

-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA recommends against use of its own products.

2013-09-28 Thread Phillip Hallam-Baker
On Wed, Sep 25, 2013 at 7:18 PM, Peter Gutmann wrote:

> =?iso-8859-1?Q?Kristian_Gj=F8steen?= 
> writes:
>
> >(For what it's worth, I discounted the press reports about a trapdoor in
> >Dual-EC-DRBG because I didn't think anyone would be daft enough to use
> it. I
> >was wrong.)
>
> +1.  It's the Vinny Gambini effect (from the film My Cousin Vinny):
>
>   Judge Haller: Mr. Gambini, didn't I tell you that the next time you
> appear
> in my court that you dress appropriately?
>   Vinny: You were serious about dat?
>
> And it's not just Dual-EC-DRBG that triggers the "You were serious about
> dat?"
> response, there are a number of bits of security protocols where I've
> been...
> distinctly surprised that anyone would actually do what the spec said.
>

Quite, who on earth thought DER encoding was necessary or anything other
than incredible stupidity?

I have yet to see an example of code in the wild that takes a binary data
structure, strips it apart and then attempts to reassemble it to pass to
another program to perform a signature check. Yet every time we go through
a signature format development exercise the folk who demand
canonicalization always seem to win.

DER is particularly evil as it requires either assembling the data
structures in reverse order, very complex tracking of the sizes of the data
objects, or horribly inefficient code. But XML Signature just ended up
broken.
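The inside-out assembly is easy to see in a minimal DER TLV sketch
(hypothetical helper names): the length field precedes the content, so
nothing can be emitted until everything inside it has been built.

```python
def der_len(n: int) -> bytes:
    """DER definite-length encoding: short form below 128, else minimal long form."""
    if n < 0x80:
        return bytes([n])
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    return bytes([0x80 | len(body)]) + body

def der_tlv(tag: int, content: bytes) -> bytes:
    # The header depends on len(content), so every nested structure must be
    # fully serialized before its enclosing header can be written.
    return bytes([tag]) + der_len(len(content)) + content

# A SEQUENCE of two INTEGERs: the innermost pieces are encoded first, then wrapped.
seq = der_tlv(0x30, der_tlv(0x02, b"\x01") + der_tlv(0x02, b"\x02"))
# seq == b"\x30\x06\x02\x01\x01\x02\x01\x02"
```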


[Just found your ASN.1 dump tool and using it to debug my C# ASN.1 encoder,
OK so maybe ASN.1 is not terrible if I can put together a compiler in four
days but I am not using the Assanine 1 schema syntax and I am using my
personal toolchain]



> (Having said that, I've also occasionally been pleasantly surprised when,
> by
> unanimous unspoken consensus among implementers, everyone ignored the spec
> and
> did the right thing).
>

I have a theory that the NSA stooges are not the technical folk. Why on
earth would a world class expert want to spend their time playing silly
games sabotaging specs when they could have much more fun working inside
the NSA at Fort Meade or building stuff?

What I would do is to take a person who is a technical wannabe and provide
him with technical support and tell him to try to wheedle positions as a
document editor. Extra points if they manage to discourage participation by
folk with solid technical chops.


We saw something of the sort during the anti-spam efforts. I was sure at
the time that the spammers had folk paid to make the discussions as
acrimonious as possible.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] RSA equivalent key length/strength

2013-09-28 Thread Phillip Hallam-Baker
On Fri, Sep 27, 2013 at 3:59 AM, John Gilmore  wrote:

> > And the problem appears to be compounded by dofus legacy implementations
> > that don't support PFS greater than 1024 bits. This comes from a
> > misunderstanding that DH keysizes only need to be half the RSA length.
> >
> > So to go above 1024 bits PFS we have to either
> >
> > 1) Wait for all the servers to upgrade (i.e. never do it because the
> won't
> > upgrade)
> >
> > 2) Introduce a new cipher suite ID for 'yes we really do PFS at 2048 bits
> > or above'.
>
> Can the client recover and do something useful when the server has a
> buggy (key length limited) implementation?  If so, a new cipher suite
> ID is not needed, and both clients and servers can upgrade asynchronously,
> getting better protection when both sides of a given connection are
> running the new code.
>

Actually, it turns out that the problem is that the client croaks if the
server tries to use a key size that is bigger than it can handle. Which
means that there is no practical way to address it server side within the
current specs.



> In the case of (2) I hope you mean "yes we really do PFS with an
> unlimited number of bits".  1025, 2048, as well as 16000 bits should work.
>

There is no reason to use DH longer than the key size in the certificate
and no reason to use a shorter DH size either.

Most crypto libraries have a hard-coded limit at 4096 bits, and there are
diminishing returns to going above 2048. Going from 4096 to 8192 bits only
increases the work factor by a very small amount, and 8192-bit operations
are really slow, which means we end up with DoS considerations.

We really need to move to EC above RSA. Only it is going to be a little
while before we work out which parts have been contaminated by NSA
interference and which parts are safe from patent litigation. RIM looks set
to collapse with or without the private equity move. The company will be
bought with borrowed money and the buyers will use the remaining cash to
pay themselves a dividend. Mitt Romney showed us how that works.

We might possibly get lucky and the patents get bought out by a white
knight. But all the mobile platform providers are in patent disputes right
now and I can't see it likely someone will plonk down $200 million for a
bunch of patents and then make the crown jewels open.


Problem with the NSA is that it's Jekyll and Hyde. There is the good side
trying to improve security and the dark side trying to break it. Which side
did the push for EC come from?




-- 
Website: http://hallambaker.com/

Re: [Cryptography] encoding formats should not be committee'ized

2013-10-02 Thread Phillip Hallam-Baker
Replying to James and John.

Yes, the early ARPANET protocols are much better than many that are in
binary formats. But the point where data encoding becomes an issue is where
you have nested structures. SMTP does not have nested structures or need
them. A lot of application protocols do.

I have seen a lot of alternatives to X.509 that don't use ASN.1 and are
better for it. But they all use nesting. And to get back on topic, the main
motive for adding binary to JSON is to support signed blobs and encrypted
blobs. Text encodings are easy to read but very difficult to specify
boundaries in without ambiguity.
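A small sketch of the boundary problem, and of the opaque-blob approach that
the JSON signing drafts take to fix it (hypothetical message content):

```python
import base64
import hashlib
import json

msg = {"to": "someone@example.net", "body": "hello"}

# Two equally valid JSON texts for the same object: a signature computed
# over one serialization will not verify over the other.
a = json.dumps(msg)
b = json.dumps(msg, separators=(",", ":"))
assert a != b and json.loads(a) == json.loads(b)

# Wrapping the payload as an opaque base64 blob pins down the exact bytes,
# so there is exactly one thing to sign and verify.
payload = base64.urlsafe_b64encode(json.dumps(msg).encode()).rstrip(b"=")
digest = hashlib.sha256(payload).hexdigest()  # the bytes a signature would cover
```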


Responding to James,

No, the reason for barring multiple inheritance is not that it is too
clever; it is that studies have shown that code using multiple inheritance
is much harder for other people to understand than code using single
inheritance.

The original reason multiple inheritance was added to C++ was to support
collections. So if you had a class A and a subclass B and wanted to have a
list of B, then the way you would do it in the early versions of C++ was to
inherit from the 'list' class.

I think that approach is completely stupid, broken and wrong. It should be
possible for people to make lists or sets or bags of any class without the
author of the class providing support. Which is why C# has functional
types, List&lt;B&gt;.

Not incidentally, C also has functional types (or at least the ability to
implement same easily). Which is why as a post doc, having studied program
language design (Tony Hoare was my college tutor), having written a thesis
on program language design, I came to the conclusion that C was a better
language base than C++ back in the early 1990s.

I can read C++ but it takes me far longer to work out how to do something
in C++ than to actually do it in C. So I can't see where C++ is helping. It
is reducing, not improving my productivity. I know that some features of
the language have been extended/fixed since but it is far too late.

At this point it is clear that C++ is a dead end and the future of
programming languages will be based on Java, C# (and to a lesser extent
Objective C) approaches. Direct multiple inheritance will go and be
replaced by interfaces. Though with functional types, use of interfaces is
very rarely necessary.


So no, I don't equate prohibiting multiple direct inheritance with 'too
clever code'. There are good reasons to avoid multiple inheritance, both
for code maintenance and to enable the code base to be ported to more
modern languages in the future.

Re: [Cryptography] encoding formats should not be committee'ised

2013-10-03 Thread Phillip Hallam-Baker
On Thu, Oct 3, 2013 at 5:19 AM, ianG  wrote:

> On 3/10/13 00:37 AM, Dave Horsfall wrote:
>
>> On Wed, 2 Oct 2013, Jerry Leichter wrote:
>>
>>  Always keep in mind - when you argue for "easy readability" - that one
>>> of COBOL's design goals was for programs to be readable and
>>> understandable by non-programmers.
>>>
>>
>> Managers, in particular.
>>
>
>
> SQL, too, had that goal.  4GLs (remember them?).  XML.  Has it ever worked?


XML was not intended to be easy to read, it was designed to be less painful
to work with than SGML, that is all.

There are actually good reasons why a document markup format needs to have
more features than a protocol data encoding format. People tend to edit
documents and need continuous syntax checks for a start.

XML is actually a good document format and a lousy RPC encoding. Although
that is exactly what SOAP is designed to turn XML into. The design of WSDL
and SOAP is entirely due to the need to impedance match COM to HTTP.


What does work in my experience is to design a language that is highly
targeted at a particular problem set. Like building FSRs or LR(1) parsers
or encoding X.509 certificates (this week's work).

And no, an ASN.1 compiler is not a particularly useful tool for encoding
X.509v3 certs, as it turns out.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread Phillip Hallam-Baker
On Thu, Oct 3, 2013 at 5:38 AM, Alan Braggins wrote:

> On 02/10/13 18:42, Arnold Reinhold wrote:
>
>> On 1 Oct 2013 23:48 Jerry Leichter wrote:
>>
>>  The larger the construction project, the tighter the limits on this
>>> stuff.  I used to work with a former structural engineer, and he repeated
>>> some of the "bad example" stories they are taught.  A famous case a number
>>> of years back involved a hotel in, I believe, Kansas City.  The hotel had a
>>> large, open atrium, with two levels of concrete "skyways" for walking
>>> above.  The "skyways" were hung from the roof.  As the structural engineer
>>> specified their attachment, a long threaded steel rod ran from the roof,
>>> through one skyway - with the skyway held on by a nut - and then down to
>>> the second skyway, also held on by a nut.  The builder, realizing that he
>>> would have to thread the nut for the upper skyway up many feet of rod, made
>>> a "minor" change:  He instead used two threaded rods, one from roof to
>>> upper skyway, one from upper skyway to lower skyway.  It's all the same,
>>> right?  Well, no:  In the original design, the upper nut holds the weight
>>> of just the upper skyway.  In the modified version, it holds the
>>> weight of *both* skyways.  The upper
>>> fastening failed, the structure collapsed, and as I recall several people
>>> on the skyways at the time were killed.  So ... not even a factor of two
>>> safety margin there.  (The take-away from the story as delivered to future
>>> structural engineers was *not* that there wasn't a large enough safety
>>> margin - the calculations were accurate and well within the margins used in
>>> building such structures.  The issue was that no one checked that the
>>> structure was actually built as designed.)
>>>
>>
>> This would be the 1981 Kansas City Hyatt Regency walkway collapse (
>> http://en.wikipedia.org/wiki/**Hyatt_Regency_walkway_collapse
>> **)
>>
>
> Which says of the original design: "Investigators determined eventually
> that this design supported only 60 percent of the minimum load required by
> Kansas City building codes.[19]", though the reference seems to be a dead
> link. (And as built it supported 30% of the required minimum.)
>
> So even if it had been built as designed, the safety margin would not
> have been "well within the margins used in building such structures".


The case is described in Why Buildings Fall Down.

The original design was sound structurally but could not be built as it
would have required the entire length of the connection rod to be threaded.
There was no way to connect one structure to the other.

The modified design could be built but had a subtle flaw: the upper skyway
was now holding the entire weight of both. The strength of the joint was
unaffected by the change but the load on the joint doubled.


We see very similar effects in cryptographic systems. But the main problem
is that our analysis apparatus focuses on the part of the problem we know
how to analyze rather than the part of the problem that fails most often.

Compare the treatment of coding errors in cryptographic software and the
treatment of CA mis-issue. Coding errors are much more likely to impact the
end user and much more likely to occur. But those get a free pass. Nobody
has ever suggested that the bugs in Sendmail in the early 1990s should have
stopped people using the product (OK apart from me). But seven mis-issued
certificates and there is a pitchfork wielding mob outside my house.

The fact that the Iranian Revolutionary Guard has a web site filled with
hijacked software that is larded up with backdoors completely escaped the
attention of most of the people worrying about the seven certificates, all
of which were revoked within minutes and would have been rejected by any
browser that implemented revocation checking as it should. But it is much
easier to flame on about the evils of CAs than to ask why the browser
providers prefer shaving a few milliseconds off the latency of their browser
response to making their customers secure.


Oh and it seems that someone has murdered the head of the IRG cyber effort.
I condemn it without qualification. There are many people who have a vested
interest in keeping wars and confrontations going. There are many beltway
contractors who stand to make a lot of money if they can persuade the US
people to fund a fourth branch of the military to fight cyber wars and fund
it as lavishly as they have foolishly funded the existing three.

A trillion dollars a year spent on bombs bullets and death is no cause for
pride. Nobody should ever carry a gun or wear a military uniform with
anything other than shame for the fact that our inability to solve our
political issues without threat of violence makes it necessary. We do not
need to spend hundreds of billions more on a new form of warfare. But there
are many who would get a lot richer if we did.

As Eisenhower 

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread Phillip Hallam-Baker
On Thu, Oct 3, 2013 at 5:17 AM, ianG  wrote:

> On 2/10/13 17:46 PM, John Kelsey wrote:
>
>> Has anyone tried to systematically look at what has led to previous
>> crypto failures?
>>
>
> This has been a favourite topic of mine, ever since I discovered that the
> entire foundation of SSL was built on theory, never confirmed in practice.
>  But my views are informal, never published nor systematic. Here's a
> history I started for risk management of CAs, informally:
>

I don't understand what you mean there. The actual history of SSL was that
SSL 1.0 was so bad that Alan Schiffman and myself broke it in ten minutes
when Marc Andreessen presented it at the MIT meeting.

SSL 2.0 was a little better, but none of the people who worked on it had any
formal background in security. During the design process Netscape finally
got a clue and hired some real security specialists, but one of Andreessen's
hiring criteria was not to hire anyone from CERN who might suggest that the
Web had been invented by Tim Berners-Lee rather than himself, so it took
them a lot longer than it needed to.

During that time I told them about their random number generator design
being barfed and they told me they would fix it but they didn't.

SSL 3.0 was designed by Paul Kocher as we all know and he did a pretty good
job. But they only gave him two weeks to work on it.

I don't think Paul's design was very theoretical and Netscape didn't give
him anywhere near enough time to do a full formal analysis of the protocol,
even were that possible with the tools available at the time.


It is far better to select a target such as 128 bit security, and then
> design each component to meet this target.  If you want "overdesign" then
> up the target to 160 bits, etc.  And make all the components achieve this.
>

I don't like that approach to hash function design.

Yes, I know that the strength of a 256 bit hash against a birthday attack
is 2^128 but that is irrelevant to me as a protocol designer as there are
almost no circumstances where a birthday attack results in a major
compromise of my system.

Dobbertin demonstrated a birthday attack on MD5 back in 1995 but it had no
impact on the security of certificates issued using MD5 until the attack
was dramatically improved and the second pre-image attack became feasible.

So I would rather that SHA3-256 provide a full 2^256 computational work
factor against pre-image attacks even if there is a birthday vulnerability.

> (3)  Don't accept anything without a proof reducing the security of the
>> whole thing down to something overdesigned in the sense of (1) or (2).
>>
>
>
> Proofs are ... good for cryptographers :)  As I'm not, I can't comment
> further (nor do I design to them).


Proofs are good for getting tenure. They produce papers that are very
citable.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-05 Thread Phillip Hallam-Baker
On Fri, Oct 4, 2013 at 10:23 AM, John Kelsey  wrote:

> On Oct 4, 2013, at 10:10 AM, Phillip Hallam-Baker 
> wrote:
> ...
> > Dobbertin demonstrated a birthday attack on MD5 back in 1995 but it had
> no impact on the security of certificates issued using MD5 until the attack
> was dramatically improved and the second pre-image attack became feasible.
>
> Just a couple nitpicks:
>
> a.  Dobbertin wasn't doing a birthday (brute force collision) attack, but
> rather a collision attack from a chosen IV.
>

Well, if we are going to get picky, yes, it was a collision attack, but the
paper he circulated in 1995 went beyond a collision from a known IV: he had
two messages that resulted in the same output when fed to a version of MD5
where one of the constants had been modified in one bit position.



> b.  Preimages with MD5 still are not practical.  What is practical is
> using the very efficient modern collision attacks to do a kind of herding
> attack, where you commit to one hash and later get some choice about which
> message gives that hash.
>

I find the preimage nomenclature unnecessarily confusing and have to look
up the distinction between first, second, and platform 9 3/4s each time I do
a paper.



> ...
> > Proofs are good for getting tenure. They produce papers that are very
> citable.
>
> There are certainly papers whose only practical importance is getting a
> smart cryptographer tenure somewhere, and many of those involve proofs.
>  But there's also a lot of value in being able to look at a moderately
> complicated thing, like a hash function construction or a block cipher
> chaining mode, and show that the only way anything can go wrong with that
> construction is if some underlying cryptographic object has a flaw.  Smart
> people have proposed chaining modes that could be broken even when used
> with a strong block cipher.  You can hope that security proofs will keep us
> from doing that.
>

Yes, that is what I would use them for. But I note that a very large
fraction of the field has studied formal methods, myself included, and few
of us find them to be quite as useful as the academics think them to be.

The oracle model is informative but does not necessarily need to be reduced
to symbolic logic to make a point.


> Now, sometimes the proofs are wrong, and almost always, they involve a lot
> of simplification of reality (like most proofs aren't going to take
> low-entropy RNG outputs into account).  But they still seem pretty valuable
> to me for real-world things.  Among other things, they give you a
> completely different way of looking at the security of a real-world thing,
> with different people looking over the proof and trying to attack things.
>

I think the main value of formal methods turns out to be pedagogical. When
you teach students formal methods they quickly discover that the best way
to deliver a proof is to refine out every bit of crud possible before
starting and arrive at an appropriate level of abstraction.

But oddly enough I am currently working on a paper that presents a
formalized approach.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Sha3

2013-10-05 Thread Phillip Hallam-Baker
On Fri, Oct 4, 2013 at 12:27 AM, David Johnston  wrote:

>  On 10/1/2013 2:34 AM, Ray Dillinger wrote:
>
> What I don't understand here is why the process of selecting a standard
> algorithm for cryptographic primitives is so highly focused on speed. ~
>
>
> What makes you think Keccak is faster than the alternatives that were not
> selected? My implementations suggest otherwise.
> I thought the main motivation for selecting Keccak was "Sponge good".
>

You mean Keccak is spongeworthy.


I do not accept the argument that the computational work factor should be
'balanced' in the way suggested.

The security of a system is almost always better measured by looking at the
work factor for breaking an individual message rather than the probability
that two messages might be generated in circumstances that cancel each
other out.

Given adequate cryptographic precautions (e.g. a random serial number), a
certificate authority can still use MD5 with an acceptable level of
security even with the current attacks. They would be blithering idiots to
do so of course, but Flame could have been prevented with certain
precautions.

If a hash has a 256 bit output I know that I cannot use it in a database if
the number of records approaches 2^128. But that isn't really a concern to
me. The reason I use a 256 bit hash is because I want a significant safety
margin on the pre-image work factor.

If I was really confident that the 2^128 work factor really is 2^128 then I
would be happy using a 128 bit hash for most designs. In fact in
PRISM-Proof Email I am currently using a 226 bit Subject Key Identifier
because I can encode that in BASE64 and the result is about the same length
as a PGP fingerprint. But I really do want that 2^256 work factor.

If Keccak was weakened in the manner proposed I would probably use the 512
bit version instead and truncate.
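For the record, truncating a longer digest is straightforward. A minimal sketch, using the standardized SHA-3-512 as a stand-in for the 512-bit Keccak variant (the padding rule differs from the original Keccak submission, so this is illustrative rather than exact):

```python
import hashlib

def truncated_sha3_512(data: bytes, out_bits: int = 256) -> bytes:
    """Take the leftmost out_bits of a SHA3-512 digest.

    Truncation preserves the preimage work factor up to the truncated
    output length; collisions cost roughly 2^(out_bits/2).
    """
    digest = hashlib.sha3_512(data).digest()   # 64 bytes
    return digest[: out_bits // 8]

tag = truncated_sha3_512(b"hello", 256)        # 32-byte tag
```

Truncation preserves the preimage work factor up to the output length, which is the safety-margin property argued for above.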

-- 
Website: http://hallambaker.com/

Re: [Cryptography] check-summed keys in secret ciphers?

2013-10-05 Thread Phillip Hallam-Baker
On Mon, Sep 30, 2013 at 7:44 PM, arxlight  wrote:
>
>
> Just to close the circle on this:
>
> The Iranians used hundreds of carpet weavers (mostly women) to
> reconstruct a good portion of the shredded documents which they
> published (and I think continue to publish) eventually reaching 77
> volumes of printed material in a series wonderfully named "Documents
> from the U.S. Espionage Den."
>
> They did a remarkably good job, considering:
>
> http://upload.wikimedia.org/wikipedia/commons/6/68/Espionage_den03_14.png


There is a back story to that. One of the reasons that Ayatollah Khomeini
knew about the CIA and embassy involvement in the 1953 coup was that he was
one of the hired thugs who raised the demonstrations that toppled Mossadegh.

So the invasion of the embassy was in part motivated by a desire to burn
any evidence of that perfidy on the regime's part. It was also used to
obtain, and likely forge, evidence against opponents inside the regime. The
files were used as a pretext for the murder of many of the leftists who
were more moderate and Western in their outlook.


On the cipher checksum operation, the construction that would immediately
occur to me would be the following:

k1 = R(s)

kv = k1 + E(k1, kd)   // the visible key sent over the wire; kd is a device key

This approach allows the device to verify that the key is intended for that
device. A captured device cannot be used to decrypt arbitrary traffic even
if the visible key is known. The attacker has to reverse engineer the
device to make use of it, a task that is likely to take months if not
years.
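The construction can be sketched as follows, with HMAC-SHA256 standing in for the block cipher E (any keyed PRF serves for the device-side check); the key sizes and tag length are illustrative choices, not from the post:

```python
import hashlib
import hmac
import secrets

def wrap_key(device_key: bytes) -> bytes:
    """Produce the visible key kv = k1 || F(k1, device_key).

    The original sketch uses a block cipher E; HMAC-SHA256 is used here
    as a stand-in PRF purely for illustration.
    """
    k1 = secrets.token_bytes(16)               # k1 = R(s): fresh random key
    tag = hmac.new(device_key, k1, hashlib.sha256).digest()[:8]
    return k1 + tag                            # sent over the wire

def unwrap_key(device_key: bytes, kv: bytes) -> bytes:
    """Device-side check that kv was generated for this device's key."""
    k1, tag = kv[:16], kv[16:]
    expect = hmac.new(device_key, k1, hashlib.sha256).digest()[:8]
    if not hmac.compare_digest(tag, expect):
        raise ValueError("key not intended for this device")
    return k1
```

A captured ciphertext and visible key are useless without kd, which only the physical device holds.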

NATO likely does an audit of every cryptographic device every few months
and destroys the entire set if a single one ever goes missing.

-- 
Website: http://hallambaker.com/

[Cryptography] A stealth redo on TLS with new encoding

2013-10-05 Thread Phillip Hallam-Baker
I think redoing TLS just to change the encoding format is to tilt at
windmills. Same for HTTP (not a fan of CORE over DTLS), same for PKIX.

But doing all three at once would actually make a lot of sense and I can
see something like that actually happen. But only if the incremental cost
of each change is negligible.


Web Services are moving towards JSON syntax. Other than legacy support I
can see no reason to use XML right now and the only reason to use
Assanine.1 other than legacy is to avoid Base64 encoding byte blobs and
escaping strings.

Adding these two features to JSON is very easy and does not require a whole
new encoding format, just add additional code points to the JSON encoding
for length encoded binary blobs. This approach means minimal changes to
JSON encoder code and allows a single decoder to be used for traditional
and binary forms:

https://datatracker.ietf.org/doc/draft-hallambaker-jsonbcd/
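The extension can be sketched in a few lines; the tag value and length encoding here are illustrative, not the actual code points defined in draft-hallambaker-jsonbcd:

```python
import json
import struct

BINARY_TAG = 0xB0  # illustrative code point, not the draft's actual value

def encode(obj) -> bytes:
    """Encode like JSON, but emit bytes values as length-prefixed binary
    blobs instead of Base64 strings (a sketch of the JSON-B idea)."""
    if isinstance(obj, bytes):
        return bytes([BINARY_TAG]) + struct.pack(">I", len(obj)) + obj
    if isinstance(obj, dict):
        items = b",".join(encode(k) + b":" + encode(v) for k, v in obj.items())
        return b"{" + items + b"}"
    if isinstance(obj, list):
        return b"[" + b",".join(encode(v) for v in obj) + b"]"
    return json.dumps(obj).encode()            # strings, numbers, bools

msg = encode({"alg": "S512", "val": b"\x00\x01\xff"})
```

A decoder that understands the extra tag can still parse plain JSON unmodified, which is the single-decoder property claimed above.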


Web services are typically layered over HTTP and there are a few facilities
that the HTTP layer provides that are useful in a Web Service. In
particular it is very convenient to allow multiple Web Services to share
the same IP address and port. Anyone who has used the Web Server in .NET
will know what I mean here.

Web Services use some features of HTTP but not very many. It would be very
convenient if we could replace the HTTP layer with something that provides
just the functionality we need but layers over UDP or TCP directly and uses
JSON-B encoding.


One of the features I use HTTP for is to carry authentication information
on the Web Service requests and responses. I have a Web Service to do a key
exchange using SSL for privacy (it's a pro-tem solution though; I will add
a PFS exchange at some point).

http://tools.ietf.org/html/draft-hallambaker-wsconnect-04

The connect protocol produces a Kerberos like ticket which is then used to
authenticate subsequent HTTP messages using a MAC.

http://tools.ietf.org/html/draft-hallambaker-httpsession-01


In my view, authentication at the transport layer is not a substitute for
authentication at the application layer. I want server authentication and
confidentiality at least at transport layer and in addition I want mutual
authentication at the application layer.

For efficiency, the authentication at the application layer uses symmetric
key (unless non-repudiation is required in which case digital signatures
would be indicated but in addition to MAC, not as a replacement).

Once a symmetric key is agreed for authentication, the use of the key for
application layer authentication is reasonably obvious.

http://tools.ietf.org/html/draft-hallambaker-wsconnect-04
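The application-layer MAC can be sketched as follows; the canonical request form here is an assumption for illustration, since draft-hallambaker-httpsession defines its own canonicalization:

```python
import hashlib
import hmac

def sign_request(ticket_key: bytes, method: str, body: bytes) -> bytes:
    """MAC one Web Service request under the key agreed at connect time.

    The canonical form (method + newline + body) is illustrative only.
    """
    canonical = method.encode() + b"\n" + body
    return hmac.new(ticket_key, canonical, hashlib.sha256).digest()

def verify_request(ticket_key: bytes, method: str, body: bytes,
                   mac: bytes) -> bool:
    """Server-side check; constant-time compare avoids timing leaks."""
    return hmac.compare_digest(sign_request(ticket_key, method, body), mac)
```

The MAC travels in an HTTP header alongside the ticket identifier, so the transport below can be swapped out without touching the authentication.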


OK, so far the scheme I describe is three independent schemes that are all
designed to work inside the existing HTTP-TLS-PKIX framework and they
provide value within that framework. But as I observed earlier, it is quite
possible to kick the framework away and replace HTTP with a JSON-B based
presentation layer framing.

This is what I do in the UDP transport for omnibroker as that is intended
to be a replacement for the DNS client-server interface.


So in summary, yes it is quite possible that TLS could be superseded by
something else, but that something else is not going to look like TLS and
it will be the result of a desire to build systems that use a single
consistent encoding at all layers in the stack (above the packet/session
layer).

Trying to reduce the complexity of TLS is plausible but all of that
complexity was added for a reason and those same reasons will dictate
similar features in TLS/2.0. The way to make a system simpler is not to
make each of the modules simpler but to make the modules fit together more
simply. Reducing the complexity of HTTP is hard, reducing the complexity of
TLS is hard. Reducing the complexity of HTTP+TLS is actually easier.


That said, I just wrote a spec for doing PGP key signing in Assanine.1.
Because even though it is the stupidest encoding imaginable, we need to
have a PKI that is capable of expressing every assertion type that people
have found a need for. That means either we add the functionality of PKIX
to the PGP world or vice versa.

The PKIX folk have a vast legacy code base and zero interest in compromise,
many are completely wedged on ASN.1. The PGP code base is much less
embedded than PKIX and PGP folk are highly ideologically motivated to bring
privacy to the masses rather than the specific PGP code formats.

So I have to write my key endorsement message format in Assanine.1. If I
can stomach that then so can everyone else.

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sat, Oct 5, 2013 at 7:36 PM, James A. Donald  wrote:

> On 2013-10-04 23:57, Phillip Hallam-Baker wrote:
>
>> Oh and it seems that someone has murdered the head of the IRG cyber
>> effort. I condemn it without qualification.
>>
>
> I endorse it without qualification.  The IRG are bad guys and need killing
> - all of them, every single one.
>
> War is an honorable profession, and is in our nature.  The lion does no
> wrong to kill the deer, and the warrior does no wrong to fight in a just
> war, for we are still killer apes.
>
> The problem with the NSA and NIST is not that they are doing warlike
> things, but that they are doing warlike things against their own people.
>
>
If people who purport to be on our side go round murdering people on
theirs, then they are going to go round murdering people on ours. We
already have Putin's group of thugs murdering folk with Polonium-laced
teapots, just so that there can be no doubt as to the identity of the
perpetrators.

We are not at war with Iran. I am aware that there are people who would
like to start a war with Iran, the same ones who wanted to start the war
with Iraq which caused a half million deaths but no war crimes trials to
date.

Iran used to have a democracy, remember what happened to it? It was people
like the brothers Dulles, who preferred a convenient dictator to a
democratic government, who overthrew it with the help of a rent-a-mob
supplied by one Ayatollah Khomeini.


I believe that it was Ultra-class signals intelligence that made that
operation possible, along with the string of CIA-inspired coups that
installed dictators or pre-empted the emergence of democratic regimes in
many other countries until the mid 1970s. Which, not coincidentally, is
the time that mechanical cipher machines were being replaced by electronic
ones.

I have had a rather closer view of your establishment than most. You have
retired four-star generals suggesting that in the case of a cyber-attack
against critical infrastructure, the government should declare martial law
within hours. It is not hard to see where that would lead; there are plenty
of US military types who would dishonor their uniforms with a coup at home.
I have met them.


My view is that we would all be rather safer if the NSA went completely
dark for a while, at least until there has been some accountability for the
crimes of the '00s and a full account of which coups the CIA backed, who
authorized them and why.

I have lived with terrorism all my life. My family was targeted by
terrorists that Rep King and Rudy Giuliani profess to wholeheartedly
support to this day. I am not concerned about the terrorists because they
obviously can't win. It is like the current idiocy in Congress, the
Democrats are bound to win because at the end of the day the effects of the
recession that the Republicans threaten to cause will be temporary while
universal health care will be permanent. The threatened harm is not great
enough to cause a change in policy. The only cases where terrorist tactics
have worked is where a small minority have been trying to suppress the
majority, as in Rhodesia or French occupied Spain during the Napoleonic
wars.

But when I see politicians passing laws to stop people voting, judges
deciding that the votes in a Presidential election cannot be counted and
all the other right wing antics taking place in the US at the moment, the
risk of a right wing fascist coup has to be taken seriously.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-07 Thread Phillip Hallam-Baker
On Thu, Oct 3, 2013 at 12:21 PM, Jerry Leichter  wrote:

> On Oct 3, 2013, at 10:09 AM, Brian Gladman  wrote:
> >> Leaving aside the question of whether anyone "weakened" it, is it
> >> true that AES-256 provides comparable security to AES-128?
> >
> > I may be wrong about this, but if you are talking about the theoretical
> > strength of AES-256, then I am not aware of any attacks against it that
> > come even remotely close to reducing its effective key length to 128
> > bits.  So my answer would be 'no'.
> There are *related key* attacks against full AES-192 and AES-256 with
> complexity  2^119.  http://eprint.iacr.org/2009/374 reports on improved
> versions of these attacks against *reduced round variants" of AES-256; for
> a 10-round variant of AES-256 (the same number of rounds as AES-128), the
> attacks have complexity 2^45 (under a "strong related sub-key" attack).
>
> None of these attacks gain any advantage when applied to AES-128.
>
> As *practical attacks today*, these are of no interest - related key
> attacks only apply in rather unrealistic scenarios, even a 2^119 strength
> is way beyond any realistic attack, and no one would use a reduced-round
> version of AES-256.
>
> As a *theoretical checkpoint on the strength of AES* ... the abstract says
> the results "raise[s] serious concern about the remaining safety margin
> offered by the AES family of cryptosystems".
>
> The contact author on this paper, BTW, is Adi Shamir.


Shamir said that he would like to see AES detuned for speed and extra
rounds added during the RSA conf cryptographers panel a couple of years
back.

That is the main incentive for using AES 256 over 128. Nobody is going to
be breaking AES 128 by brute force so key size above that is irrelevant but
you do get the extra rounds.


Saving symmetric key bits does not really bother me as pretty much any
mechanism I use to derive them is going to give me plenty. I am even
starting to think that maybe we should start using the NSA checksum
approach.

Incidentally, that checksum could be explained simply as padding applied
when prepping an EC-encrypted session key. PKCS#1 has similar stuff to
ensure that there is no known plaintext in there. Using the encryption
algorithm instead of the OAEP hash function makes much better sense.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Elliptic curve question

2013-10-07 Thread Phillip Hallam-Baker
On Mon, Oct 7, 2013 at 4:54 AM, Lay András  wrote:

> Hi!
>
> I made a simple elliptic curve utility in command line PHP:
>
> https://github.com/LaySoft/ecc_phgp
>
> I know in the RSA, the sign is inverse operation of encrypt, so two
> different keypairs needs for encrypt and sign. In elliptic curve
> cryptography, the sign is not the inverse operation of encrypt, so my
> application use same keypair for encrypt and sign.
>
> Is this correct?
>

Are you planning to publish your signing key or your decryption key?

Use of a key for one makes the other incompatible.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-07 Thread Phillip Hallam-Baker
On Sun, Oct 6, 2013 at 11:26 AM, John Kelsey  wrote:

> If we can't select ciphersuites that we are sure we will always be
> comfortable with (for at least some forseeable lifetime) then we urgently
> need the ability to *stop* using them at some point.  The examples of MD5
> and RC4 make that pretty clear.
>
> Ceasing to use one particular encryption algorithm in something like
> SSL/TLS should be the easiest case--we don't have to worry about old
> signatures/certificates using the outdated algorithm or anything.  And yet
> we can't reliably do even that.
>

I proposed a mechanism for that a long time back based on Rivest's notion
of a suicide note in SDSI.


The idea was that some group of cryptographers get together and create some
random numbers which they then keyshare amongst themselves so that there
are (say) 11 shares and a quorum of 5.

Let the key be k. If the algorithm being witnessed is AES then the value
AES(k) is published as the 'witness value' for AES.

A device that ever sees the witness value for AES presented knows to stop
using it. It is in effect a 'suicide note' for AES.


Similar witness functions can be specified easily enough for hashes etc. We
already have the RSA factoring competition for RSA public key. In fact I
suggested to Burt Kaliski that they expand the program.
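For a hash function the witness can be sketched directly (SHA-256 here is illustrative; the AES case is analogous, with the witness being an encryption under k):

```python
import hashlib

def make_witness(k: bytes) -> bytes:
    """Publish H(k); k itself is secret-shared among the custodians."""
    return hashlib.sha256(k).digest()

def is_suicide_note(candidate_k: bytes, witness: bytes) -> bool:
    """A device that ever sees a preimage of the published witness value
    knows to stop using the algorithm."""
    return hashlib.sha256(candidate_k).digest() == witness
```

Only two events release k: a quorum of the custodians deliberately reconstructs it, or someone finds a preimage, i.e. breaks the hash. Either way the device's response is correct.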

The cryptographic basis here is that there are only two cases where the
witness value will be released, either there is an expert consensus to stop
using AES (or whatever) or someone breaks AES.

The main downside is that there are many applications where you can't
tolerate fail-open. For example in the electricity and power system it is
more important to keep the system going than to preserve confidentiality.
An authenticity attack on the other hand might be cause...

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Iran and murder

2013-10-09 Thread Phillip Hallam-Baker
On Wed, Oct 9, 2013 at 12:44 AM, Tim Newsham  wrote:

> > We are more vulnerable to widespread acceptance of these bad principles
> than
> > almost anyone, ultimately,  But doing all these things has won larger
> budgets
> > and temporary successes for specific people and agencies today, whereas
> > the costs of all this will land on us all in the future.
>
> The same could be (and has been) said about offensive cyber warfare.
>

I said the same thing in the launch issue of cyber-defense. Unfortunately
the editor took it into his head to conflate inventing the HTTP referer
field etc. with rather more and so I can't point people at the article as
they refuse to correct it.


I see cyber-sabotage as being similar to use of chemical or biological
weapons: It is going to be banned because the military consequences fall
far short of being decisive, are unpredictable and the barriers to entry
are low.

STUXNET has been relaunched with different payloads countless times. So we
are throwing stones the other side can throw back with greater force.


We have a big problem in crypto because we cannot now be sure that the help
received from the US government in the past has been well intentioned or
not. And so a great deal of time is being wasted right now (though we will
waste orders of magnitude more of their time).

At the moment we have a bunch of generals and contractors telling us that
we must spend billions on the ability to attack China's power system in
case they attack ours. If we accept that project then we can't share
technology that might help them defend their power system which cripples
our ability to defend our own.

So a purely hypothetical attack promoted for the personal enrichment of a
few makes us less secure, not safer. And the power systems are open to
attack by sufficiently motivated individuals.


The sophistication of STUXNET lay in its ability to discriminate the
intended target from others. The opponents we face simply don't care about
collateral damage. So I am not impressed by people boasting about the
ability of some country (not an ally of my country, BTW) to perform
targeted murder; such boasting overlooks the fact that they can, and
likely will, retaliate with indiscriminate murder in return.

I bet people are less fond of drones when they start to realize other
countries have them as well.


Lets just stick to defense and make the NATO civilian infrastructure secure
against cyber attack regardless of what making that technology public might
do for what some people insist we should consider enemies.

-- 
Website: http://hallambaker.com/

[Cryptography] The cost of National Security Letters

2013-10-09 Thread Phillip Hallam-Baker
One of the biggest problems with the current situation is that US
technology companies have no ability to convince others that their
equipment has not been compromised by a government mandated backdoor.

This is imposing a significant and real cost on providers of outsourced Web
Services and is beginning to place costs on manufacturers. International
customers are learning to shop elsewhere for their IT needs.

While moving from the US to the UK might seem to leave the customer equally
vulnerable to warrant-less NSA/GCHQ snooping, there is a very important
difference. A US provider can be silenced using a National Security Letter
which is an administrative order issued by a government agency without any
court sanction. There is no equivalent capability in UK law.

A UK court can make an intercept order or authorize a search etc. but that
is by definition a Lawful Intercept and that capability exists regardless
of jurisdiction. What is unique in the US at the moment is the National
Security Letter.


-- 
Website: http://hallambaker.com/

[Cryptography] PGP Key Signing parties

2013-10-09 Thread Phillip Hallam-Baker
Does PGP have any particular support for key signing parties built in or is
this just something that has grown up as a practice of use?

I am looking at different options for building a PKI for securing personal
communications and it seems to me that the Key Party model could be
improved on if there were some tweaks so that key party signing events were
a distinct part of the model.


I am specifically thinking of ways that key signing parties might be made
scalable so that it was possible for hundreds of thousands of people to
participate in an event and there were specific controls to ensure that the
use of the key party key was strictly bounded in space and time.

So for example, it costs $2K to go to RSA. So if there is a key signing
event associated that requires someone to be physically present then that
is a $2K cost factor that we can leverage right there.

Now we can all imagine ways in which folk on this list could avoid or evade
such controls but they all have costs. I think it rather unlikely that any
of you would want to be attempting to impersonate me at multiple cons.

If there is a CT infrastructure then we can ensure that the use of the key
party key is strictly limited to that one event and that even if the key is
not somehow destroyed after use that it is not going to be trusted.
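A minimal sketch of the notarization idea, using a linear hash chain; real CT uses Merkle trees so that inclusion proofs stay logarithmic, and the genesis value here is an assumption:

```python
import hashlib

def chain_append(prev_head: bytes, entry: bytes) -> bytes:
    """Append-only log head: anyone holding a copy of the head can detect
    retroactive tampering with earlier key-party endorsements."""
    return hashlib.sha256(prev_head + entry).digest()

head = b"\x00" * 32                      # genesis value (assumption)
for endorsement in [b"alice signs bob @RSA2014",
                    b"bob signs carol @RSA2014"]:
    head = chain_append(head, endorsement)
```

Publishing the head after the event bounds the key-party key in time: endorsements inserted later cannot reproduce the published head.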


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Elliptic curve question

2013-10-09 Thread Phillip Hallam-Baker
On Tue, Oct 8, 2013 at 4:14 PM, James A. Donald  wrote:

>  On 2013-10-08 03:14, Phillip Hallam-Baker wrote:
>
>
> Are you planning to publish your signing key or your decryption key?
>
>  Use of a key for one makes the other incompatible.
>
>
> Incorrect.  One's public key is always an elliptic point, one's private
> key is always a number.
>
> Thus there is no reason in principle why one cannot use the same key (a
> number) for signing the messages you send, and decrypting the messages you
> receive.
>

The original author was proposing to use the same key for encryption and
signature, which is a rather bad idea.



-- 
Website: http://hallambaker.com/

[Cryptography] Other Backdoors?

2013-10-10 Thread Phillip Hallam-Baker
I sarcastically proposed the use of GOST as an alternative to NIST crypto.
Someone shot back a note saying the elliptic curves might be 'bent'.

Might be interesting for EC to take another look at GOST since it might be
the case that the GRU and the NSA both found a similar backdoor but one was
better at hiding it than the other.


On the NIST side, can anyone explain the reason for this mechanism for
truncating SHA512?

Denote H(0)′ to be the initial hash value of SHA-512 as specified in
Section 5.3.5 above.
Denote H(0)′′ to be the initial hash value computed below.
H(0) is the IV for SHA-512/t.
For i = 0 to 7
{
    Hi(0)′′ = Hi(0)′ ⊕ a5a5a5a5a5a5a5a5 (in hex).
}
H(0) = SHA-512("SHA-512/t") using H(0)′′ as the IV, where t is the
specific truncation value.
(end.)

[Can't link to FIPS 180-4 right now as it's down]
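The XOR step can at least be checked numerically. The second step cannot be done with stdlib hashlib, since it does not expose the IV, which is exactly the implementation complaint below:

```python
# SHA-512 initial hash value H(0)' (FIPS 180-4, Section 5.3.5)
SHA512_IV = [
    0x6A09E667F3BCC908, 0xBB67AE8584CAA73B,
    0x3C6EF372FE94F82B, 0xA54FF53A5F1D36F1,
    0x510E527FADE682D1, 0x9B05688C2B3E6C1F,
    0x1F83D9ABFB41BD6B, 0x5BE0CD19137E2179,
]

def sha512_t_iv_input():
    """Step 1 of the SHA-512/t IV derivation: XOR each 64-bit word of
    the SHA-512 IV with a5a5...a5.  The actual SHA-512/t IV is then
    SHA-512("SHA-512/t") computed with these words as the IV."""
    return [h ^ 0xA5A5A5A5A5A5A5A5 for h in SHA512_IV]
```

The net effect is domain separation: SHA-512/t of a message bears no relation to the first t bits of SHA-512 of the same message.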

I really don't like the futzing with the IV like that, not least because a
lot of implementations don't give access to the IV. Certainly the object
oriented ones I tend to use don't.

But does it make the scheme weaker?

Is there anything wrong with just truncating the output?

The only advantage I can see to the idea is to stop the truncated digest
being used as leverage to reveal the full digest in a scheme where one was
public and the other was not.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] PGP Key Signing parties

2013-10-11 Thread Phillip Hallam-Baker
Reply to various,

Yes, the value in a given key signing is weak, in fact every link in the
web of trust is terribly weak.

However, if you notarize and publish the links in CT fashion then I can
show that they actually become very strong. I might not have good evidence
of John Gilmore's key at RSA 2001, but I could get very strong evidence
that someone signed a JG key at RSA 2001.

Which is actually quite a high bar since the attacker would have to buy a
badge, which is $2,000. Even if they were going to go anyway and it is a
sunk cost, they are rate limited.


The other attacks John raised are valid but I think they can be dealt with
by adequate design of the ceremony to ensure that it is transparent.

Now stack that information alongside other endorsements and we can arrive
at a pretty strong authentication mechanism.

The various mechanisms used to evaluate the trust can also be expressed in
the endorsement links.


What I am trying to solve here is the distance problem in Web o' trust. At
the moment it is pretty well impossible for me to have confidence in keys
for people who are ten degrees out. Yet I am pretty confident of the
accuracy of histories of what happened 300 years ago (within certain
limits).

It is pretty easy to fake a web of trust, I can do it on one computer, no
trouble. But if the web is grounded at just a few points to actual events
then it becomes very difficult to spoof.

[Cryptography] Key stretching

2013-10-11 Thread Phillip Hallam-Baker
All,

Quick question, anyone got a good scheme for key stretching?

I have this scheme for managing private keys that involves storing them as
encrypted PKCS#8 blobs in the cloud.

AES128 seems a little on the weak side for this but there are (rare)
circumstances where a user is going to need to type in the key for recovery
purposes so I don't want more than 128 bits of key to type in (I am betting
that 128 bits is going to be sufficient to the end of Moore's law).


So the answer is to use AES 256 and stretch the key, but how? I could just
repeat the key:

K = k + k

Related key attacks make me a little nervous though. Maybe:

K = (k + 01234567) XOR SHA512 (k)
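One conventional answer, for what it is worth, is a standard KDF rather than an ad-hoc construction. A sketch with stdlib PBKDF2, where the salt label and iteration count are illustrative choices:

```python
import hashlib

def stretch_key(user_key: bytes, salt: bytes = b"pkcs8-wrap") -> bytes:
    """Derive a 256-bit AES key from a 128-bit typed-in key.

    PBKDF2 stands in for any standard KDF; the iteration count also
    slows brute force against the 128-bit input a little.  The salt
    label and count are illustrative, not from the post.
    """
    return hashlib.pbkdf2_hmac("sha256", user_key, salt, 100_000, dklen=32)
```

This sidesteps the related-key worry entirely: the derived 256-bit key has no algebraic relation to the typed-in key.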


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Snowden "fabricated digital keys" to get access to NSA servers?

2013-07-04 Thread Phillip Hallam-Baker
I think that fabricating a key here is more likely to mean fabricating an
authentication 'key' rather than an encryption key. Alexander is talking to
Congress and is deliberately being less than precise.

So I would think in terms of application level vulnerabilities in Web based
document servers.

One of the things that I have thought weak in our current approach to the
use of crypto is the way that we divide up access control into
authentication and authorization. So basically, if Bradley had a possible
need to see a file then he had an authorization letting him see it. Using
access control alone encourages permissions to be given out promiscuously.

The Snowden situation sounds like something slightly different. Alexander
says he was not authorized but he was able to get access. The common way
that happens on the Web is that Alice has account number 1234 and
authenticates herself to the server and gets back a URI ending something
like ?acct=1234& To get access to Bob's account she simply changes that
to ?acct=1235&...
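The broken pattern and its fix fit in a few lines; the names here are hypothetical:

```python
def handle_get_account(session_user: str, acct: str, owners: dict) -> str:
    """The ?acct=1234 pattern.  The broken version would simply trust
    the URL parameter (return records for any acct an authenticated
    user asks for); the fix checks ownership against the session."""
    if owners.get(acct) != session_user:
        raise PermissionError("not your account")
    return f"records for {acct}"
```

The point is that authentication alone ("Alice is logged in") says nothing about authorization ("Alice may read acct 1235"), and systems that conflate the two fail exactly this way.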

It should not work, but it works very often in the real world. Having
worked with contractors I have seen people hired out as 'programmers' at
$1500 per day whose only coding experience was hacking Delphi databases. No
C, C++, Java or C#. Not even a scripting language.

So it would not shock me to find out that their document security comes
undone in the same way that it does in commercial systems.

Heads should be rolling on this one. But they won't.

Re: [Cryptography] Snowden "fabricated digital keys" to get access to NSA servers?

2013-07-04 Thread Phillip Hallam-Baker
I read an article today that claims one and a half million people have a
Top Secret clearance.

That kind of demonstrates how little Top Secret now means.


On Sun, Jun 30, 2013 at 2:16 PM, Florian Weimer  wrote:

> * John Gilmore:
>
> > [John here.  Let's try some speculation about what this phrase,
> > "fabricating digital keys", might mean.]
>
> Most likely, as part of his job at the contractor, he had
> administrator access to a system which was used for key management,
> perhaps to apply security updates, manage backups or fix the
> occasional glitch.  This is precisely the kind of low-level grunt work
> that I expect is outsourced to contractors.
>
> It's also possible that he was directly charged with key management.
> I can image that someone thought that as long as some agency committee
> made the actual decisions, it was fine to hire an external data typist
> who entered the committee decision in to the key management system.
>
> It's really funny that "NSA-level security" has now turned pejorative.
> ___
> The cryptography mailing list
> cryptography@metzdowd.com
> http://www.metzdowd.com/mailman/listinfo/cryptography
>



-- 
Website: http://hallambaker.com/

Re: [Cryptography] What is the state of patents on elliptic curve cryptography?

2013-08-21 Thread Phillip Hallam-Baker
It is almost certain that most uses of EC would not infringe the remaining
patents.

But the patent holder can force anyone attempting to use them to spend
about $3-5 million to defend their right to use EC and so there is very
little incentive to do so given that RSA 2048 is sufficient for almost any
need.


The situation might change depending on who buys RIM.


On Tue, Aug 20, 2013 at 1:38 PM, Perry E. Metzger wrote:

> What is the current state of patents on elliptic curve cryptosystems?
> (It would also be useful to know when the patents on such patents as
> exist end.)
>
> Perry
> --
> Perry E. Metzgerpe...@piermont.com
> ___
> The cryptography mailing list
> cryptography@metzdowd.com
> http://www.metzdowd.com/mailman/listinfo/cryptography
>



-- 
Website: http://hallambaker.com/

Re: [Cryptography] What is the state of patents on elliptic curve cryptography?

2013-08-23 Thread Phillip Hallam-Baker
On Thu, Aug 22, 2013 at 1:20 AM, Daira Hopwood  wrote:

> On 20/08/13 19:26, Phillip Hallam-Baker wrote:
> > It is almost certain that most uses of EC would not infringe the
> remaining patents.
> >
> > But the patent holder can force anyone attempting to use them to spend
> about $3-5 million
> > to defend their right to use EC and so there is very little incentive to
> do so given that
> > RSA 2048 is sufficient for almost any need.
>
> In principle there's no way to be sure of anything being free from
> patents, so why treat
> EC as a special case? Seems like you're just doing Certicom's
> FUD-spreading for them :-(
>

Given that I am an expert witness specialising in finding prior art for
patent defences, my original post was a statement against interest as I
would be in with a good shot at getting a $100K gig if someone did decide
to test the patentability of EC.

There is no way to be sure that anything is free of patents, but in this
case we are pretty sure that there will be a suit.

This is not an exception to the usual approach either, quite a few of my
design proposals in IETF have been shot down as 'too clever', i.e. someone
might have filed a patent.

What worries me on the Certicom patents is whether the 20-years-from-filing
or 17-years-from-issue term applies, since they are continuations-in-part
on a filing made prior to 7 June 1995.



Re: [Cryptography] PRISM PROOF Email

2013-08-23 Thread Phillip Hallam-Baker
On Fri, Aug 23, 2013 at 6:02 PM, Philip Whitehouse  wrote:

> Let me just see if I get where you're going:
>


> So essentially you've increased the number of CAs to the number of
> companies without really solving the PRISM problem. The sheer number means
> it's impractical to do much more than a cursory check before approval.
>

The number of CAs would not need to be very large. I would expect it to be
in the hundreds in a global system, but that is pretty much a function of
there being hundreds of countries.

If example.com wanted to run their own CA for their own email certs then
the way to do it would be to issue them a cert signing cert that has name
constraints to limit its use to just n...@example.com.
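The rfc822Name constraint mechanism works roughly as follows. This is a sketch of the RFC 5280 matching rules (the function name and example addresses are illustrative, not from any particular library): a CA certificate carrying a constraint of `example.com` can only certify mailboxes in that exact domain.

```python
def rfc822_name_matches(email: str, constraint: str) -> bool:
    """RFC 5280 s4.2.1.10 rfc822Name constraint matching:
    - 'user@example.com' matches only that one mailbox,
    - 'example.com' matches any mailbox on that exact host,
    - '.example.com' matches any mailbox on any host in that subdomain."""
    email = email.lower()
    constraint = constraint.lower()
    if "@" in constraint:
        # Constraint names a specific mailbox.
        return email == constraint
    _, _, host = email.rpartition("@")
    if constraint.startswith("."):
        # Leading dot: any host within the domain.
        return host.endswith(constraint)
    # Bare host name: mailboxes on that host only.
    return host == constraint

print(rfc822_name_matches("alice@example.com", "example.com"))      # True
print(rfc822_name_matches("bob@mail.example.com", "example.com"))   # False
print(rfc822_name_matches("bob@mail.example.com", ".example.com"))  # True
```

A relying party that enforces this check on a name-constrained signing cert rejects any end-entity cert it issues outside its own domain, which is what makes per-company CAs safe to delegate.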


The idea is that there are multiple CAs but their actions are all vetted
for transparency and they all check up on each other.

Any one CA can be served with an NSL, but if they issue a coerced
certificate it will be immediately visible to the target. So a government
can perform a DoS attack but not get away with an impersonation attack.



> PRISM for email is bad because we don't even know who we can trust. I
> can't trust the provider because they could have been served an NSL. The
> provider has to see the metadata or they can't route the email. So I'm
> doomed. Best case is I can secure the contents and use an alternate name.
> At that point I need an organization I trust to act as my Omnibroker who
> for some reason I don't trust with the mail itself.
>
> One other question: PPE = Prism Proof Email?
>
> Nor do I think key chain length was the problem - initial key
> authentication and distribution is the first issue.
>
> Philip Whitehouse
>


Well, the way that was solved in practice for PGP was Brian LaMacchia's PGP
key server :-) Which turned into a node of very high degree...



Re: [Cryptography] PRISM PROOF Email

2013-08-23 Thread Phillip Hallam-Baker
On Fri, Aug 23, 2013 at 6:42 PM, Joe St Sauver wrote:
>
> I wouldn't take Snowden's alleged opsec practice, or lack thereof, as
> a demonstration proof that PGP and/or S/MIME are impossibly difficult for
> technical people (or even motivated NON-technical people) to use when
> necessary or appropriate.
>

That's what the IETF folk told us when I worked on HTTP 0.9 against Gopher
and FTP.

Usability matters. In fact it is all that matters for adoption. Angry Birds
has made a billion dollars because it is so nice to use that people will
pay to use it.

-- most email clients only integrate support for S/MIME; if you want
> to try to push anything else, your mission effectively devolves to
> advocating for native support for PGP in popular email clients (such
> as Thunderbird and Outlook), but when you do so, be prepared for
> pushback.
>

Yep, I see no particular value in pushing PGP over S/MIME, other than the
fact that it has mind share.

-- "PRISM-proofing" isn't just about encryption, since traffic analysis
> doesn't require full contents (and in fact, arguably, encryption ENHANCES
> traffic analysis in some ways, depending on how it ends up being used).
>

That's why message layer security is not a substitute for TLS. And the TLS
should be locked to the email service via a policy statement such as DANE.



> #Everything has to be transparent to the
> #end user who is not a crypto expert and may well be a bit of a doof.
>
> You simply cannot produce doof-proof message-level crypto (I'd be
> surprised if there isn't already a CafePress tee shirt with this meme,
> in fact), any more than you can keep doofs from driving their cars
> into other vehicles, etc.
>

I disagree. I think it is entirely tractable.

If I understand your architecture correctly, it isn't end-to-end, is it?
> If it isn't end-to-end, that just means that the attack point shifts,
> it doesn't get eliminated.
>

Depends on what you call the ends.

The messages are encrypted email client to email client. But the trust
relationships run from the CA to the Omnibroker. If you want to have full
control then you would run your own omnibroker and configure it with the
appropriate policy. If you are worried about foreign governments
intercepting your email but not your own then a Symantec or Comodo provided
Omnibroker service would be acceptable.

People who trust us sufficiently to run our anti-virus are already trusting
us to a far greater degree.


> And remember, end-to-end encryption isn't free. You may be reducing the
> risk of message eavesdropping, but the tradeoff may be that malicious
> content doesn't get scanned and blocked prior to delivery, just to
> mention one potential concern. (And of course, if your endpoint gets
> 0wn3d, your privacy expectations shouldn't be very high, right?)
>

Which is one reason people would run their own omnibroker in certain
situations (like enterprise) and encrypted mail is likely to be subject to
policy controls (no executables) and only accepted from known parties with
established reputations.



> #For spam control reasons, every email sent has to be authenticated which
> #means using digital signatures on the message (and likely DKIM + SSL
> client
> #auth).
>
> Auth doesn't prevent spam. Auth just enables the accumulation of
> reputation,
> which can then drive filtering decisions.
>

Which is what most spam filtering works off of these days; content filtering
is not a very successful anti-spam strategy.



Re: [Cryptography] PRISM PROOF Email

2013-08-23 Thread Phillip Hallam-Baker
On Fri, Aug 23, 2013 at 3:34 PM, Ben Laurie  wrote:

>
> On 22 August 2013 10:36, Phillip Hallam-Baker  wrote:
>
>> Preventing key substitution will require a combination of the CT ideas
>> proposed by Ben Laurie (so catenate proof notaries etc) and some form of
>> 'no key exists' demonstration.
>
>
> We have already outlined how to make verifiable maps as well as verifiable
> logs, which I think is all you need.
> http://www.links.org/files/RevocationTransparency.pdf.
>

Yeah, I think it is just a matter of being clear about the requirements and
making sure that we fully justify the requirements for email rather than
assume that email is the same.
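The verifiable-log construction behind these proposals can be sketched as Merkle audit-path checking: a relying party who holds only the log's root hash can verify that a particular certificate is included without trusting the log operator, and a coerced cert either appears in the log (visible to the target) or fails verification. A toy sketch with RFC 6962-style leaf/node hashing; the entries and helper functions are illustrative, not the paper's API, and it assumes a power-of-two number of entries for brevity:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def leaf_hash(entry: bytes) -> bytes:
    # RFC 6962-style domain separation between leaves and interior nodes
    return h(b"\x00" + entry)

def node_hash(left: bytes, right: bytes) -> bytes:
    return h(b"\x01" + left + right)

def build_tree(entries):
    """Return (root, audit_paths) for a power-of-two list of entries."""
    level = [leaf_hash(e) for e in entries]
    paths = [[] for _ in entries]
    idx = list(range(len(entries)))  # each leaf's position at current level
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            nxt.append(node_hash(level[i], level[i + 1]))
        for j, pos in enumerate(idx):
            sibling = pos ^ 1
            # Record (sibling hash, whether our node is the right child).
            paths[j].append((level[sibling], pos % 2))
            idx[j] = pos // 2
        level = nxt
    return level[0], paths

def verify(root, entry, path):
    cur = leaf_hash(entry)
    for sibling, is_right_child in path:
        cur = node_hash(sibling, cur) if is_right_child else node_hash(cur, sibling)
    return cur == root

certs = [b"cert-alice", b"cert-bob", b"cert-carol", b"cert-dave"]
root, paths = build_tree(certs)
print(verify(root, b"cert-bob", paths[1]))      # True
print(verify(root, b"cert-mallory", paths[1]))  # False
```

The audit path is logarithmic in the log size, so clients can check inclusion cheaply; the 'no key exists' demonstration mentioned above is the harder part and is what the verifiable-map construction adds.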



Re: [Cryptography] Traffic Analysis (was Re: PRISM PROOF Email)

2013-08-25 Thread Phillip Hallam-Baker
There has to be a layered approach.

Traffic analysis is probably going to demand steganography and that is
almost by definition outside standards work.


The part of Prism that I consider to be blatantly unconstitutional is that
they keep all the emails so that they can search them years later should
the need arise. Strikes me that is the type of sophistry that John Yoo used
when he wrote those memos claiming that torture isn't torture.

There will be a reckoning in the end. Takes about twenty to thirty years
before the point is reached that nobody in the establishment has a reason
to protect the war criminals of years past.


I have a little theory about the reason the CIA-engineered coups were so
successful from '53 to '73 and then suddenly stopped working. Seems to me
that the CIA would have been nuts to try Operation Ajax without some very
powerful intel, like being able to break the Persian codes. The CIA stopped
being able to mount those exercises after electronic ciphers were
introduced.

Given how the NSA used their powers last time round to topple democracies
and install dictators I don't think they deserve a second chance.




On Sun, Aug 25, 2013 at 3:34 PM, Perry E. Metzger wrote:

> On Fri, 23 Aug 2013 09:38:21 -0700 Carl Ellison  wrote:
> > Meanwhile PRISM was more about metadata than content, right? How
> > are we going to prevent traffic analysis worldwide?
>
> The best technology for that is mix networks.
>
> At one point, early in the cypherpunks era, mix networks were
> something of an expensive idea. Now, however, everyone in sight is
> connected 24x7 to the internet. Similarly, at one point, bandwidth was
> scarce, but now, most traffic is video, and even if instant messages
> and email equivalents took many hops through the network, the
> bandwidth used (except for mobiles, which need not be interior mix
> nodes per se) is negligible.
>
> Perry
> --
> Perry E. Metzger  pe...@piermont.com




Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-08-26 Thread Phillip Hallam-Baker
On Mon, Aug 26, 2013 at 1:47 AM, Richard Clayton wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> In message , Jerry Leichter
>  writes
>
> >On the flip side, mail systems like gMail or Yahoo mail are complex and
> >difficult to run *exactly because they are immense*.
>
> The mail systems part is really rather simple... and pretty much looks
> after itself. That's not where all the employees work.
>
> >  But what are they getting
> >for that size?  There are no economies of scale here - in fact, there are
> clear
> >*dis*economies.
>
> ... the economy of scale is in identifying and routing spam of various
> kinds. Some can be detected a priori -- the majority of the detection
> relies on feedback from users (the chances are that someone else got the
> bad mail before you did, so it can be arranged that you are not bothered)
>
> >Even without the recent uproar over email privacy, at some point, someone
> was
> >going to come up with a product along the following lines:  Buy a cheap,
> >preconfigured box with an absurd amount of space (relative to the "huge"
> amounts
> >of space, like 10GB, the current services give you); then sign up for a
> service
> >that provides your MX record and on-line, encrypted backup space for a
> small
> >monthly fee.  (Presumably free services to do the same would also appear,
> >perhaps from some of the dynamic DNS providers.)
>
> Just what the world needs, more free email sending provision!  sigh
>
> >What's the value add of one of the giant providers?
>
> If you run your own emails system then you'll rapidly find out what
> 2013's spam / malware problem looks like.
>
> Just as success in crypto deployment isn't about algorithms or file
> formats, success in mail handling isn't about MX records and MTAs.
>

Which is why I think Ted Lemon's idea about using Facebook-type friending
may be necessary.

I don't think we can rely on that for key distribution, but I think it
needs to be a part of the mix.


I have a protocol compiler. Just give it an abstract schema and out pops a
server and client API library. Just need to add the code to implement the
semantics. It is up on Sourceforge, will update later this week.



Re: [Cryptography] Implementations, attacks on DHTs, Mix Nets?

2013-08-26 Thread Phillip Hallam-Baker
On Sun, Aug 25, 2013 at 7:42 PM, Christian Huitema wrote:

> > My knowledge of the field is pretty spotty in general as I've never paid
> much
> > attention up until now -- mostly I know about how people have built DHTs
> in
> > non-hostile environments. I'm close enough to starting from scratch that
> I
> don't
> > know yet what I don't know.
>
> I studied such systems intensely, and designed some
> (http://en.wikipedia.org/wiki/Peer_Name_Resolution_Protocol). Using a
> distributed hash table securely is really hard. The basic idea of DHT is
> that information is spread on the network based on matches between the hash
> of a resource identifier and the hash of a node identifier. All nodes are
> effectively relying on every other node. In an open network, that is pretty
> much equivalent to "relying on the goodness of strangers." You can be sure
> that if our buddies at the NSA set up to watch the content of a DHT, they
> will succeed.
>

I am doing a history of the Web. I came to the conclusion that the clever
part is the problems it decides not to solve. Ted Nelson was absolutely
right on what was desirable, but what he considered 'essential' turned out
to be easily added as layers (search for example).

A confidentiality solution that tells the user 'you can't send mail right
now because you may be subject to an intercept' is more than acceptable.



[Cryptography] Using Raspberry Pis

2013-08-26 Thread Phillip Hallam-Baker
I really like RPis as a cryptographic tool. The only thing that would make
them better is a second Ethernet interface so they could be used as a
firewall type device.

However that said, the pros are:

* Small, cheap, reasonably fast, has ethernet and even a monitor output

* Boot from an SD card which can be preloaded with the OS and application
build. So it is really easy to use RPi as an embedded device controller.

The main con is that they are not so fast that you want to be routing
packets through them unnecessarily. So they are a great device to make use
of for connection brokering, not such a great idea to tunnel video packets
through them.


It is entirely reasonable to tell someone to get an RPi, download a config
onto an SD card, plug it into their network and apply power and ethernet.
And they take so little power that we could even tell them to install a
pair so that they had a fault tolerant setup (although they are low enough
power, low enough complexity that this may not be necessary or helpful).


In the home of the future there will be hundreds of devices on the network
rather than just the dozens I have today. So trying to configure security
at every point is a non-starter. Peer-to-peer network configurations tend
to end up being unnecessarily chatty and are hard to debug because you
can't tell who is meant to be in command.

The approach that makes most sense to me is to have one or two network
controller devices built on something like RPis and vest all the trust
decisions in those. So rather than trying to configure PKI at hundreds of
devices, concentrate those decisions in just one logical point.


So I would like at minimum such a device to be my DNS + DHCP + PKI + NTP
configuration service and talk a consistent API to the rest of the network.
Which is the work I am doing on Omnibroker.

Putting a mail server on the system as well would be logical, though it
would increase complexity and more moving parts on a trusted system makes
me a little nervous.





Re: [Cryptography] Using Raspberry Pis

2013-08-26 Thread Phillip Hallam-Baker
On Mon, Aug 26, 2013 at 5:43 PM, Perry E. Metzger wrote:

> On Mon, 26 Aug 2013 16:12:22 -0400 Phillip Hallam-Baker
>  wrote:
> > I really like RPis as a cryptographic tool. The only thing that
> > would make them better is a second Ethernet interface so they could
> > be used as a firewall type device.
>
> You can of course use a USB ethernet with them, but to me, they're
> more a proof of what you can do with a very small bill of materials.
>
> If you're designing your own, adding another ethernet (and getting
> rid of unneeded things like the video adapter) is easy.
>
> Custom built hardware will probably be the smartest way to go for an
> entrepreneur trying to sell these in bulk to people as home gateways
> anyway -- you want the nice injection molded case, blinkylights and
> package as well. :)


I don't think the video adds much to the cost.

I do have a USB ethernet adapter... but that cost me as much as the Pi.

Problem with all these things is that the Pi is cheap because they have the
volume. Change the spec and the price shoots up :(




Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-08-27 Thread Phillip Hallam-Baker
On Tue, Aug 27, 2013 at 5:04 PM, Wendy M. Grossman <
wen...@pelicancrossing.net> wrote:

> On 08/27/2013 18:34, ianG wrote:
> > Why do we need the 1980s assumption of being able to send freely to
> > everyone, anyway?
>
> It's clear you're not a journalist or working in any other profession
> where you actually need to be able to communicate spontaneously with
> strangers.
>

True, but you are probably willing to tolerate a higher level of spam
getting through in that case.

One hypothesis that I would like to throw out is that there is no point in
accepting encrypted email from someone who does not have a key to encrypt
the response.




Re: [Cryptography] Implementations, attacks on DHTs, Mix Nets?

2013-08-27 Thread Phillip Hallam-Baker
On Tue, Aug 27, 2013 at 10:18 PM, Perry E. Metzger wrote:

> On Tue, 27 Aug 2013 19:57:30 -0600 Peter Saint-Andre
>  wrote:
> > On 8/27/13 7:47 PM, Jonathan Thornburg wrote:
> > > On Tue, 27 Aug 2013, Perry E. Metzger wrote:
> > >> Say that you want to distribute a database table consisting of
> > >> human readable IDs, cryptographic keys and network endpoints for
> > >> some reason. Say you want it to scale to hundreds of millions of
> > >> users.
> > >
> > > This sounds remarkably like a description of DNSSEC.
> > >
> > > Assuming it were widely deployed, would
> > > DNSSEC-for-key-distribution be a reasonable way to store
> > >   email_address --> public_key
> > > mappings?
> >
> > You mean something like this (email address --> OTR key)?
> >
> > https://datatracker.ietf.org/doc/draft-wouters-dane-otrfp/
>
> My problem with the use of DNSSEC for such things is the barrier to
> entry. It requires that a systems administrator for the domain your
> email address is in cooperate with you. This has even slowed DNSSEC
> deployment itself.
>

How about the fact that the US govt de facto controls the organization
controlling the root key, and that it is a single-rooted hierarchy of trust?

But in general, the DNS is an infrastructure for making assertions about
hosts and services. It is not a good place for assertions about users or
accounts. So it is a good place to dump DANE records for your STARTTLS
certs but not for S/MIME certs.


> It is, of course, clearly the "correct" way to do such things, but
> trying to do things architecturally correctly sometimes results in
> solutions that don't deploy.
>
> I prefer solutions that require little or no buy in from anyone other
> than yourself. One reason SSH deployed so quickly was it needed no
> infrastructure -- if you controlled a single server, you could log in
> to it with SSH and no one needed to give you permission.
>
> This is a guiding principle in the architectures I'm now considering.


 I very much agree that deployment is all.

One thing I would like to do is to separate the email client from the
crypto decision making even if this is just a temporary measure for testbed
purposes. I don't want to hack plugs into a dozen email clients for a dozen
experiments and have to re-hack them for every architectural tweak.


[Cryptography] Source for protocol compiler

2013-08-28 Thread Phillip Hallam-Baker
The source is up on sourceforge now. It does need some spring cleaning and
documenting which I hope to get to next week.

The documentation is in the following directory
https://sourceforge.net/p/jsonschema/code/ci/master/tree/Web/

The origin of this work is that about 70% of the effort in working groups
goes into debating 'bikeshed' issues (as in what color to paint it) that
really don't matter: things like choice of encoding (just use JSON) or
binding (any reason not to use HTTP + SSL?) and so on.

And 70% of the effort of the editor would go into making changes to the
spec which would need to be reflected accurately in six different parts of
the document and the reference code and then conformant examples generated
and inserted at the right place and then other folk would have to check it
was all done right.


So JSONSchema converts an abstract schema definition (in a programming
language syntax, not JSON encoding!) and produces a stub client API and a
stub server with the appropriate holes to plug in your semantics. You can
then write documentation and insert examples from running code (provided
you like documentation in either HTML or Internet Draft format at the
moment).

It is all written in C# and has been run on OSX and Linux under Mono
(binary distributions to follow). The synth currently only generates code
in C# but I plan to add C and probably Objective C down the line. The
meta-synthesiser is also on sourceforge and open source:

https://sourceforge.net/projects/goedel/


The compiler only supports RPC like interactions at the moment, i.e.
query/response. But I am planning to expand the generator to support an
additional interaction pattern in which the client opens a transaction and
receives a series of async callbacks. That would be suited to supporting
chat like protocols.

One of the things I realized as I was doing all this is that all Web
Services really consist of is glorified RPC calls in a different syntax.


The code generated right now is targeted at being reference code but there
is no reason why the synth should not generate production code.


Re: [Cryptography] Separating concerns

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 7:15 AM, Jerry Leichter  wrote:

> On Aug 28, 2013, at 2:04 PM, Faré wrote:
> >> My target audience, like Perry's is people who simply can't cope with
> anything more complex than an email address. For me secure mail has to look
> feel and smell exactly the same as current mail. The only difference being
> that sometime the secure mailer will say 'I can't contact that person
> securely right now because…'
> >>
> > I agree with Perry and Phill that email experience should be
> > essentially undisturbed in the normal case, though it's OK to add an
> > additional authorization step.
> >
> > One thing that irks me, though, is the problem of the robust, secure
> > terminal: if everything is encrypted, how does one survive the
> > loss/theft/destruction of a computer or harddrive? I'm no ignoramus,
> > yet I have, several times, lost data I cared about due to hardware
> > failure or theft combined with improper backup. How is a total newbie
> > to do?
> This is a broader problem, actually.  If you've ever had to take care of
> someone's estate, you'll know that one of the problems is contacting all
> the banks, other financial institutions, service providers, and other such
> parties they dealt with in life.  My experience dealing with my father's
> estate - a fairly simple one - was that having the *paper* statements was
> the essential starting point.  (Even so, finding his safe deposit box - I
> had the unlabeled keys - could have been a real pain if my sister didn't
> remember which bank it was at.)  Had he been getting email statements, just
> finding his mail accounts - and getting access to them - could have been a
> major undertaking.  Which is one reason I refuse to sign up for email
> statements ... just send me the paper, thank you.  (This is getting harder
> all the time.  I expect to start getting charged for paper statements any
> time now.)
>
> Today, at least, my executor can, in principle, work with the mail provider to
> get access.  But for truly secure mail, my keys presumably die with me, and
> it's all gone.
>
> You don't even have to consider the ultimate loss situation.  If I'm
> temporarily disabled and can't provide my keys - how can someone take care
> of my bills for me?
>
> We can't design a system that can handle every variation and eventuality,
> but if we're going to design one that we intend to be broadly used, we have
> to include a way to handle the perfectly predictable, if unpleasant to
> think about, aspects of day to day life.  Absolute security *creates* new
> problems as it solves old ones.  There may well be aspects to my life I
> *don't* want revealed after I'm gone.  But there are many things I *do*
> want to be easily revealed; my heirs will have enough to do to clean up
> after me and move on as it is.
>
> So, yes, we have to make sure we have backup mechanisms - as well as key
> escrow systems, much as the term "key escrow" was tainted by the Clipper
> experience.
>

Systems do need to be usable in practice and too much security can be a bad
thing. I am thinking about 'PRISM Proof' as a hierarchy of needs:

0 No confidentiality requirement
1 Content confidentiality against passive intercept (met by STARTTLS)
2 Content confidentiality against active intercept (met by STARTTLS + a
validated recipient server cert)
3 Content confidentiality against coercion or compromise of the mail
service provider
4 Content confidentiality against coercion or compromise of a trusted
third party
5 Metadata confidentiality
6 Traffic-analysis confidentiality

At present we only have a widely deployed solution for level 1.

The constituency that has a requirement for level 6 is probably very small.
Certainly none of us would benefit. Is it a hard goal or a stretch goal?

It is certainly a desirable goal for people like journalists but the cost
of meeting the requirement may not be acceptable.

At any rate, I think that starting by trying to build something to level 4
would be a good start and provide an essential basis for getting through to
levels 5 and 6.

It might be that to get from level 4 to level 6 the solution is as simple
as 'use a German ISP'.


Since we are talking about Snowden and Greenwald, folk might be amused to
learn that I was the other party who contacted "Baghdad" Boylan, General
Petraeus's spokesman, who sent Greenwald a bizarre email which he then
lied about having sent (to me, Greenwald and Petraeus), apparently unaware
that while an email message can indeed be faked, it is improbable that
these particular message headers were faked.

Further, had any such attempted impersonation of Boylan taken place it
would have been a very serious matter requiring urgent investigation. Since
I was never contacted, it is clear that no investigation took place, which
can only mean that Boylan did send the emails and then lied about sending
them.

http://www.salon.com/2007/10/28/boylan/

If a UK military officer had sent a similar email he would be cashiered.
But then again, in the British army Colonels 

Re: [Cryptography] Separating concerns

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 4:27 AM, ianG  wrote:

> Hi Phill,
>
>
> On 28/08/13 21:31 PM, Phill wrote:
>
>> And for a company it is almost certain that 'secure against intercept by
>> any government other than the US' is an acceptable solution.
>>
>
>
> I think that was acceptable in general up until recently.  But, I believe
> the threat scenario has changed, and for the worse.
>
> The firewall between national intelligence and all-of-government has been
> breached.  It is way beyond leaks, it is now a documented firehose with
> pipelines so well laid that the downstream departments have promulgated
> their deception plans.
>

Quite. I had a conversation with a government type this morning. His
question: 'what if the intercepts are shared with the IRS?'

Moreover Snowden has proved that the internal controls in the NSA are lax.
If a low level grunt working for a contractor has such access to the NSA's
own crown jewels it is idiotic to imagine that they keep the confidential
secrets of IBM or Microsoft or GE with greater care.


And, they told us so.  In the comments made by the NSA, they have very
> clearly stated that if there is evidence of a crime, they will keep the
> data.  The statement they made is a seismic shift;  the NSA is now a
> domestic & criminal intelligence agency.  I suspect the penny has not
> dropped on this shift as yet, but they have said it is so.
>

They will keep the data anyway. They will query it if there is evidence of
a crime but otherwise they are keeping everything.

And worse, they are creating fake stories to explain how the data was
collected. So they have perjured themselves in numerous criminal
prosecutions that are likely to be found unsafe when the full extent of the
scheme emerges.


This is not a stable situation. It is easy to see why Obama was infatuated
with the intelligence community and thus willing to give them carte
blanche. He came into office with the US losing two wars and a military in
which every staff officer who had had the courage to tell Rumsfeld his
plans were insane was dismissed. The intelligence services were the only
part of the military Obama could trust to provide an exit strategy.

But the next President is not going to be beholden to the intel services in
quite the same way. Even Obama appears to be starting to ask questions
about how the intelligence results are being achieved.




> In threat & risk terms, it is now reasonable to consider that the USA
> government will provide national intelligence to back up a criminal
> investigation against a large company.  And, it is not unreasonable to
> assume that they will launch a criminal investigation in order to force
> some other result, nor is it unreasonable for a competitor to USA
> commercial interests to be facing a USA supplier backed by leaks.
>
> E.g., Airbus or Huawei or Samsung ...  Or any company that is engaged in a
> lawsuit against the US government.  Or any wall street bank being
> investigated by the DoJ for mortgage fraud, or any international bank with
> ops in the USA.  Or any company in Iran, Iraq, Syria, Afghanistan,
> Pakistan, India, Palestine,   or gambling companies in the Caribbean,
> Gibraltar, Australia, Britain.  Or any arms deal or energy deal.
>
> (Yes, that makes the task harder.)


Not necessarily.

We have lots of technology. This is not a technology problem, it is a
deployment problem. The greater the level of concern, the easier deployment
becomes.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] IPv6 and IPSEC

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 1:59 PM, Taral  wrote:

> On Wed, Aug 28, 2013 at 12:08 PM, Lucky Green 
> wrote:
> > "Additional guidelines for IPv6
> >
> > The sending IP must have a PTR record (i.e., a reverse DNS of the
> sending IP) and it should match the IP obtained via the forward DNS
> resolution of the hostname specified in the PTR record. Otherwise, mail
> will be marked as spam or possibly rejected."
>
> Because under ipv6 your prefix is supposed to be stable (customer
> identifier) and the namespace delegated to you on request. Have you
> asked your provider for an ipv6 namespace delegation?


It is a stupid and incorrect requirement.

The DNS has always allowed multiple A records to point to the same IP
address. In the general case a mail server will support hundreds, possibly
tens of thousands of receiving domains.

A PTR record can only point to one domain.

The reason that an MX record has a domain name as the target rather than an
IP address is to facilitate administration. Forcing the PTR and AAAA record
to match means that there has to be a one to one mapping and thus defeats
many commonly used load balancing strategies.

Google is attempting to impose a criterion that is simply wrong.
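The mismatch the guideline creates can be seen in a few lines. Below is a sketch of the forward-confirmed reverse DNS check, with the resolver's answers passed in as plain dicts so the logic stands alone; real code would look up PTR and AAAA records with a DNS library, and all names and addresses here are made up.

```python
# Sketch of the forward-confirmed reverse DNS (FCrDNS) test: some PTR name
# for the sending IP must resolve back to that same IP. Resolver results are
# passed in as dicts so this runs offline; a real check would do DNS queries.

def fcrdns_ok(ip, ptr_records, forward_records):
    """Return True if any PTR name for `ip` resolves back to `ip`."""
    for name in ptr_records.get(ip, []):
        if ip in forward_records.get(name, []):
            return True
    return False

# One mail-server IP hosting many domains: the single PTR can only name one.
ptr = {"2001:db8::25": ["mx.example.net"]}
fwd = {
    "mx.example.net": ["2001:db8::25"],
    "mail.customer-a.com": ["2001:db8::25"],  # also points at the same IP
}

print(fcrdns_ok("2001:db8::25", ptr, fwd))  # True: mx.example.net round-trips
print(fcrdns_ok("2001:db8::99", ptr, fwd))  # False: no PTR record at all
```

Note that mail.customer-a.com passes no check of its own: only the one name in the PTR record can ever round-trip, which is the one-to-one mapping objection above.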



-- 
Website: http://hallambaker.com/

Re: [Cryptography] Keeping backups (was Re: Separating concerns

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 1:30 PM, Perry E. Metzger wrote:

> On Wed, 28 Aug 2013 20:04:34 +0200 Faré  wrote:
> > One thing that irks me, though, is the problem of the robust, secure
> > terminal: if everything is encrypted, how does one survive the
> > loss/theft/destruction of a computer or harddrive?
>
> So, as has been discussed, I envision people having small cheap
> machines at home that act as their "cloud", and the system prompting
> them to pick a friend to share encrypted backups with.
>
> Inevitably this means that said backups are going to either be
> protected by a fairly weak password or that the user is going to have
> to print the key out and put it in their desk drawer and risk having
> it lost or stolen or destroyed in a fire.
>
> I think I can live with either problem. Right now, most people
> have very little protection at all. I think making the perfect the
> enemy of the good is a mistake. If doing bad things to me requires
> breaking in to my individual home, that's fine. If it is merely much
> less likely that I lose my data rather than certain that I have no
> backup at all, that's fine.
>
> BTW, automation *does* do a good job of making such things invisible.
> I haven't lost any real data since I started using Time Machine from
> Apple, and I have non-technical friends who use it and are totally
> happy with the results. I wish there was an automated thing in Time
> Machine to let me trade backups with an offsite friend as well.
>

Now this is an area where QR codes might be useful

First point of simplification is that we only ever need to worry about
symmetric key backup since we can always add a private key to the rest of
our encrypted archive. We can encrypt the key and back up the encryption key
or we can use a deterministic keygen algorithm and escrow the seed. Either
way we only need to escrow 256 bits.


Second point is that we can reduce exposure to risk by using some sort of
key splitting scheme. We can also use this to effect key transport between
devices. The problem with 'end to end' encryption these days is that most
of us have multiple devices we receive email on, which is why WebMail has
become so attractive.

I have to be able to read my email on any of my 5 principal machines
(Windows, 2 MacBooks, iPhone, iPad). Any email scheme that does not support
all of them is useless.


Third point of simplification: ditch key rollover. Don't expire a key
unless it is necessary because of a suspected or known compromise. Use a
sufficiently strong key to make cryptanalysis infeasible.

I know that key rollover is part of the ideology but it introduces more
vulnerability than it eliminates. Any encryption key you have that ends up
compromised is likely a maximum damage situation. So using ten keys in ten
years gives the attacker ten opportunities to compromise you if you muck up
the distribution or they manage to compromise a CA.


Fourth point of simplification: Just duplicate the key into every end point
rather than attempting a key service with split control and key shares.

A better way to manage crypto in a mobile device like a phone would be to
split the private key into two (or more) parts for each mobile device to be
enabled. To decrypt data the device would have to ask the service(s) holding
the other part(s) of the key to do their work and then combine the result
with the local one.
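For RSA the split can be done on the private exponent: the phone holds one part, a service holds the other, and the full decryption is the product of the two partial exponentiations. A toy-sized sketch with tiny primes, for illustration only and not a hardened protocol:

```python
# Split-key RSA decryption sketch: d = d1 + d2, so
# c^d mod n == (c^d1 * c^d2) mod n. Toy primes, illustration only.
import secrets

p, q, e = 1000003, 1000033, 65537
n, phi = p * q, (p - 1) * (q - 1)
d = pow(e, -1, phi)             # private exponent (Python 3.8+ modular inverse)

d1 = secrets.randbelow(d)       # device share
d2 = d - d1                     # service share

m = 424242
c = pow(m, e, n)                # encrypt

part_device = pow(c, d1, n)     # computed on the phone
part_service = pow(c, d2, n)    # computed by the service
recovered = (part_device * part_service) % n  # c^(d1+d2) == c^d mod n

assert recovered == m
```

Neither share alone suffices to decrypt, and revoking a lost phone is just deleting the service's matching share.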


So let's imagine the full key establishment sequence from the user's point
of view.

Key Generation.

To generate my first key, I tell my MUA my email address and the CA domain
name[1]. It checks to see if the email address already has a key
registered. If so it will ask if I am really, really sure I want to replace
it, etc. Otherwise it generates a new keypair for me.

The CA is hopefully going to do validation of my key before issuing the
certificate. At minimum an email callback. We might push the encrypted
escrowed key out to the CA at the same time. But that is orthogonal to the
private key backup and distribution.


To backup the key we tell the device to print out the escrow data on paper.
Let us imagine that there is a single sheet of paper which is cut
into six parts as follows:

1) Three copies of the encrypted private key, either as raw data or a link
to the raw data.

2) Three key shares allowing the key to be reconstructed from 2 of them.
For a 256 bit key that would be no more than 512 bits doing it the simple
way and there is probably a cleverer encoding.
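One construction that matches the 512-bits-per-share bound, offered as a sketch rather than as the exact scheme intended above, is 2-of-3 replicated secret sharing: split the key into three random pieces whose XOR is the key, and give each paper share two of the three pieces.

```python
# 2-of-3 replicated secret sharing "the simple way": r1 ^ r2 ^ r3 == key,
# and share i holds the two pieces other than r_i (512 bits per share).
# Any two shares cover all three pieces; one share alone reveals nothing,
# since its missing piece is uniformly random.
import secrets

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def make_shares(key: bytes):
    r1, r2 = secrets.token_bytes(len(key)), secrets.token_bytes(len(key))
    r3 = xor(xor(key, r1), r2)              # r1 ^ r2 ^ r3 == key
    return [(r2, r3), (r1, r3), (r1, r2)]   # share i omits r_i

def recover(share_a, share_b):
    pieces = set(share_a) | set(share_b)    # two shares give all three pieces
    out = bytes(32)
    for p in pieces:
        out = xor(out, p)
    return out

key = secrets.token_bytes(32)
s1, s2, s3 = make_shares(key)
assert recover(s1, s2) == recover(s2, s3) == recover(s1, s3) == key
```

A cleverer encoding (e.g. Shamir's scheme over a 256-bit field) gets each share down to 256 bits plus a share index, at the cost of polynomial arithmetic.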

The data for each would be presented as both a QR code (for installing in a
phone) and a BASE32 alphanumeric code (for installing on a machine without
a camera).
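The BASE32 rendering might look like the following sketch: serial-number-style groups plus a short hash-based check so a typo is caught when the code is typed back in. The group size and the 4-character checksum are arbitrary illustrative choices, not from any standard.

```python
# Render a share as grouped Base32 with a short SHA-256-based check code,
# and verify the check on re-entry so mistyped shares are rejected.
import base64, hashlib

def encode_share(data: bytes, group: int = 4) -> str:
    b32 = base64.b32encode(data).decode().rstrip("=")
    check = base64.b32encode(hashlib.sha256(data).digest()).decode()[:4]
    s = b32 + check
    return "-".join(s[i:i + group] for i in range(0, len(s), group))

def decode_share(text: str) -> bytes:
    s = text.replace("-", "")
    b32, check = s[:-4], s[-4:]
    data = base64.b32decode(b32 + "=" * (-len(b32) % 8))  # restore padding
    if base64.b32encode(hashlib.sha256(data).digest()).decode()[:4] != check:
        raise ValueError("checksum mismatch: share was mistyped")
    return data

share = bytes(range(16))        # one 128-bit piece, for illustration
printed = encode_share(share)   # grouped like a software serial number
assert decode_share(printed) == share
```

The same string fed to a QR encoder gives the camera path; the grouping only matters for the hand-typed one.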


The user can easily escrow the keys by cutting the paper into 3 parts and
storing them in an acceptably safe location.

In my case that would probably mean mailing the shares to my parents and
family for offsite backup. Or I might give them to my broker or a bank or...

Banks are quite possibly going to be interested in helping this type of
scheme because it helps them meet t

Re: [Cryptography] IPv6 and IPSEC

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 4:53 PM, Taral  wrote:

> Oh, wait. I misread the requirement. This is a pretty normal
> requirement -- your reverse DNS has to be valid. So if you are
> 3ffe::2, and that reverses to abc.example.com, then abc.example.com
> better resolve to 3ffe::2.
>
> On Thu, Aug 29, 2013 at 1:38 PM, Phillip Hallam-Baker 
> wrote:
> >
> >
> >
> > On Thu, Aug 29, 2013 at 1:59 PM, Taral  wrote:
> >>
> >> On Wed, Aug 28, 2013 at 12:08 PM, Lucky Green 
> >> wrote:
> >> > "Additional guidelines for IPv6
> >> >
> >> > The sending IP must have a PTR record (i.e., a reverse DNS of the
> >> > sending IP) and it should match the IP obtained via the forward DNS
> >> > resolution of the hostname specified in the PTR record. Otherwise,
> mail will
> >> > be marked as spam or possibly rejected."
> >>
> >> Because under ipv6 your prefix is supposed to be stable (customer
> >> identifier) and the namespace delegated to you on request. Have you
> >> asked your provider for an ipv6 namespace delegation?
> >
> >
> > It is a stupid and incorrect requirement.
> >
> > The DNS has always allowed multiple A records to point to the same IP
> > address. In the general case a mail server will support hundreds,
> possibly
> > tens of thousands of receiving domains.
> >
> > A PTR record can only point to one domain.
> >
> > The reason that an MX record has a domain name as the target rather than
> > an IP address is to facilitate administration. Forcing the PTR and AAAA
> > record to match means that there has to be a one to one mapping and thus
> > defeats many commonly used load balancing strategies.
> >
> > Google is attempting to impose a criteria that is simply wrong.
>
>
So Lucky's problem seems to be that the ISPs providing IPv6 have decided on
a convention that they identify residential IPv6 ranges by not filling in
the reverse PTR info.

And the problem he has is that Google won't take email from a residential
IPv6 address.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] The Case for Formal Verification

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 4:46 PM, Perry E. Metzger wrote:

> Taking a break from our discussion of new privacy enhancing protocols,
> I thought I'd share something I've been mumbling about in various
> private groups for a while. This is almost 100% on the security side
> of things, and almost 0% on the cryptography side of things. It is
> long, but I promise that I think it is interesting to people doing
> security work.
>

Whit Diffie was meant to be working on formal methods when he came up with
public key crypto...

My D.Phil thesis was on applying formal methods to a large, non-trivial
real-time system (raw input bandwidth was 6Tb/sec, the immediate
forerunner to the LHC data acquisition scheme). My college tutor was Tony
Hoare but I was in the nuclear physics dept because they had the money to
build the machine.

The problem I saw with formal methods was that the skills required were
already at the limit of what Oxford University grad students were capable
of, and building systems large enough to matter looked like it would take
tools like category theory which are even more demanding.

The code synthesis scheme I developed was an attempt to address the scaling
problem from the other end. The idea being that to build a large system you
create a very specific programming language that is targeted at precisely
that class of problems. Then you write a back end that converts the
specification into code for that very restricted domain. If you want a
formal proof you have the synthesizer generate it from the specification as
well. That approach finesses the problem of having to validate the
synthesizer (which would likely take category theory) because only the
final target code need be validated.


That is the code I re-implemented in C after leaving VeriSign and released
onto SourceForge earlier this year and the tool I used to build the JSON
Schema tool.

I would probably have released it earlier only I met this guy at CERN who
had some crazy idea about hypertext.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Email and IM are ideal candidates for mix networks

2013-08-29 Thread Phillip Hallam-Baker
On Thu, Aug 29, 2013 at 3:31 PM, Callme Whatiwant wrote:

> Hello, I'm new here, so I apologize if I'm repeating past arguments or
> asking old questions.
>
>
> On Tue, Aug 27, 2013 at 8:52 PM, Jerry Leichter  wrote:
> >
> > On Aug 27, 2013, at 9:48 PM, Perry E. Metzger wrote:
> >
> >> On Tue, 27 Aug 2013 22:04:22 +0100 "Wendy M. Grossman"
> >>  wrote:
> >>> On 08/27/2013 18:34, ianG wrote:
>  Why do we need the 1980s assumption of being able to send freely
>  to everyone, anyway?
> >>>
> >>> It's clear you're not a journalist or working in any other
> >>> profession where you actually need to be able to communicate
> >>> spontaneously with strangers.
> >>
> >> Of course, as a reporter, you are probably getting email addresses of
> >> people to talk to via referral, and that could be used to get past the
> >> barrier. The problem of people spontaneously contacting a published
> >> address is harder.
> > Actually, it isn't, or shouldn't be.  Email addresses were originally
> things you typed into a terminal.  They had to be short, memorable, and
> easy to type.  "Published" meant "printed on paper", which implied typing
> the thing back in.
> >
> > But none of that matters much any more.
>
> This is (anecdotally) completely untrue.
>
> A great way to experience this personally is to start using a
> "strange" email address, like mine.  You quickly realize how often you
> *say* or *write on paper* your email address.  Because my email
> address is "odd", almost every time I say it, the listener asks me to
> spell it.  I suspect if I could just say "bob at gmail" I wouldn't
> notice how often this occurs.
>

I have enough problems with mine, hal...@gmail.com; someone else registered
hal...@gmail.com.


But more generally, I want to make it easy for people to send me email. If
they already have my address then it does not matter how easy it would be
to add an encryption key, the opportunity to do so has passed.

What I did realize would be useful is some sort of verification code. So
this morning I was arranging a delivery of a screw for the shower. I gave
them the email address but they were going to use hallambaker@gmail.com
instead.

So it would be nice if there were a code that someone could read back to
tell you that they got the address right. It does not need to be
particularly long: two, maybe three, letters. Just enough to provide a
confirmation.
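A sketch of such a confirmation code: hash the normalized address and map a couple of bytes to letters. Two letters give 676 possible codes, so a mistyped address almost always produces a visibly different code; the alphabet and length here are arbitrary choices.

```python
# Derive a short spoken confirmation code from an email address, so both
# parties can catch a transcription error by comparing codes aloud.
import hashlib

def confirm_code(address: str, letters: int = 2) -> str:
    norm = address.strip().lower()              # tolerate case/whitespace
    digest = hashlib.sha256(norm.encode()).digest()
    return "".join(chr(ord("A") + digest[i] % 26) for i in range(letters))

# The owner reads out their code; the person writing the address down
# computes it from what they typed and reads it back.
print(confirm_code("hallam@gmail.com"))
print(confirm_code("hallambaker@gmail.com"))   # almost certainly different
```

The mapping of bytes to letters slightly favors A through F (256 is not a multiple of 26), which is harmless for a confirmation check though not for anything cryptographic.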


And extending the concept. Let us imagine that I have a separate email
address that I am only going to use for online purchases and that I have
filled out a delivery address form somewhere for it and that agent will
only give out the address to a party that presents an EV certificate to
show that they are accountable and keep a record of everyone who asks.

This does not really raise particular confidentiality concerns to me
because it is simply a form of compression. My delivery addresses appear
many times in my email inbox, I have a new entry every time I buy something
online. If the mails travel through my ISP's server they will get that info
soon enough (unless the sender encrypts). But it would make filling in
online forms a lot easier and less error prone.



-- 
Website: http://hallambaker.com/

Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Phillip Hallam-Baker
On Sun, Sep 1, 2013 at 10:35 PM, James A. Donald  wrote:

> On 2013-09-01 9:11 PM, Jerry Leichter wrote:
>
>> Meanwhile, on the authentication side, Stuxnet provided evidence that the
>> secret community *does* have capabilities (to conduct a collision attacks)
>> beyond those known to the public - capabilities sufficient to produce fake
>> Windows updates.
>>
>
> Do we know they produced fake windows updates without assistance from
> Microsoft?


Given the reaction from Microsoft, yes.

The Microsoft public affairs people have been demonstrating real anger at
the Flame attack in many forums.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] NSA and cryptanalysis

2013-09-02 Thread Phillip Hallam-Baker
You know, if there was a completely ironclad legal opinion that made use of
ECC possible without the risk of a lawsuit costing over $2 million from
Certicom then I would be happy to endorse a switch to ECC like the NSA is
pushing for as well.

I would not, therefore, draw the conclusion that NSA advice to move to ECC
is motivated by knowledge of a crack of RSA; if anything, that would argue
against the move. It is merely a consequence of the US government having a
license which we don't have.

Re: [Cryptography] NSA and cryptanalysis

2013-09-03 Thread Phillip Hallam-Baker
On Tue, Sep 3, 2013 at 12:49 AM, Jon Callas  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
>
> On Sep 2, 2013, at 3:06 PM, "Jack Lloyd"  wrote:
>
> > On Mon, Sep 02, 2013 at 03:09:31PM -0400, Jerry Leichter wrote:
> >
> >> a) The very reference you give says that to be equivalent to 128
> >> bits symmetric, you'd need a 3072 bit RSA key - but they require a
> >> 2048 bit key.  And the same reference says that to be equivalent to
> >> 256 bits symmetric, you need a 521 bit ECC key - and yet they
> >> recommend 384 bits.  So, no, even by that page, they are not
> >> recommending "equivalent" key sizes - and in fact the page says just
> >> that.
> >
> > Suite B is specified for 128 and 192 bit security levels, with the 192
> > bit level using ECC-384, SHA-384, and AES-256. So it seems like if
> > there is a hint to be drawn from the Suite B params, it's about
> > AES-192.
> >
>
> The real issue is that the P-521 curve has IP against it, so if you want
> to use freely usable curves, you're stuck with P-256 and P-384 until some
> more patents expire. That's more of it than 192 bit security. We can hold
> our noses and use P-384 and AES-256 for a while.
>
> Jon
>

What is the state of prior art for the P-384? When was it first published?

Given that RIM is trying to sell itself right now and the patents are the
only asset worth having, I don't have good feelings about this. Well, apart
from the business opportunities for expert witnesses specializing in crypto.

The problem is that to make the market move we need everyone to decide to
go in the same direction. So even though my employer can afford a license,
there is no commercial value to that license unless everyone else has
access.


Do we have an ECC curve that is (1) secure and (2) has a written
description prior to 1 Sept 1993?

Due to submarine patent potential, even that is not necessarily enough but
it would be a start.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Keeping backups (was Re: Separating concerns

2013-09-03 Thread Phillip Hallam-Baker
Want to collaborate on an Internet Draft?

This is obviously useful but it can only be made useful if everyone does it
in the same way.


On Tue, Sep 3, 2013 at 10:14 AM, Peter Gutmann wrote:

> Phillip Hallam-Baker  writes:
>
> >To backup the key we tell the device to print out the escrow data on
> paper.
> >Let us imagine that there there is a single sheet of paper which is cut
> into
> >six parts as follows:
>
> You read my mind :-).  I suggested more or less this to a commercial
> provider
> a month or so back when they were trying to solve the same problem.
> Specifically it was "if you lose your key/password/whatever, you can't call
> the helpdesk to get your data back, it's really gone", which was causing
> them
> significant headaches because users just weren't expecting this sort of
> thing.
> My suggestion was to generate a web page in printable format with the key
> shares in standard software-serial-number form (X-X-X etc) and
> tell people to keep one part at home and one at work, or something similar,
> and to treat it like they'd treat their passport or insurance
> documentation.
>
> Peter.
>



-- 
Website: http://hallambaker.com/

Re: [Cryptography] Backup is completely separate

2013-09-03 Thread Phillip Hallam-Baker
On Mon, Sep 2, 2013 at 11:03 PM, John Kelsey  wrote:

> The backup access problem isn't just a crypto problem, it's a social/legal
> problem.  There ultimately needs to be some outside mechanism for using
> social or legal means to ensure that, say, my kids can get access to at
> least some of my encrypted files after I drop dead or land in the hospital
> in a coma.  Or that I can somehow convince someone that it's really me and
> I'd like access to the safe deposit box whose password I forgot and lost my
> backup copy of.  Or whatever.
>
> This is complicated by the certainty that if someone has the power to get
> access to my encrypted data, they will inevitably be forced to do so by
> courts or national security letters, and will also be subject to extralegal
> pressures or attacks to make them turn over some keys.  I suspect the best
> that can be workably done now is to make any key escrow service's key
> accesses transparent and impossible to hide from the owner of the key, and
> then let users decide what should and shoudn't be escrowed.  But this isn't
> all that great an answer.
>

To avoid mandated/coerced release, replace 'keep at bank' with 'bury at an
undisclosed location'.

There is really no 100% reliable way to make things available to your heirs
while avoiding government coercion. Particularly since the government
issues the documents saying that you are dead.




-- 
Website: http://hallambaker.com/

Re: [Cryptography] Three kinds of hash: Two are still under ITAR.

2013-09-04 Thread Phillip Hallam-Baker
While doing some research on the history of hashing for a client I
discovered that it is described in the very first edition of the ACM
journal and the paper is a translation of a Russian paper.

One of the many problems with the ITAR mindset is the assumption that all
real ideas are invented inside the US by white men wearing white lab coats
and that the rest of the undeserving world is stealing them.

Anyone with any grasp of history recognizes that the industrial-scale
espionage practiced by China on the industrial powers is merely DIY
reparations for the 19th century and the first half of the 20th.

Re: [Cryptography] Hashes into Ciphers

2013-09-04 Thread Phillip Hallam-Baker
On a more theoretical basis, Phil Rogaway gave a presentation at MIT many
years ago where he showed the use of a one-way function as the construction
primitive for every other type of symmetric algorithm.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Google's Public Key Size (was Re: NSA and cryptanalysis)

2013-09-05 Thread Phillip Hallam-Baker
On Wed, Sep 4, 2013 at 6:58 PM, Andy Steingruebl  wrote:

> On Wed, Sep 4, 2013 at 3:54 PM, Paul Hoffman wrote:
>
>> On Sep 4, 2013, at 2:15 PM, Andy Steingruebl  wrote:
>>
>> > As of Jan-2014 CAs are forbidden from issuing/signing anything less
>> than 2048 certs.
>>
>> For some value of "forbidden". :-)
>>
>
> This is why you're seeing Mozilla and Google implementing these checks for
> compliance with the CABF Basic Requirements in  code
>
> - Andy
>

Which is rather easier to effect since the browser providers have no
longstanding contractual agreements made prior to the BRs being adopted.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-05 Thread Phillip Hallam-Baker
OK how about this:

If a person at Snowden's level in the NSA had any access to information
that indicated the existence of any program which involved the successful
cryptanalysis of any cipher regarded as 'strong' by this community then the
Director of National Intelligence, the Director of the NSA and everyone
involved in those decisions should be fired immediately and lose their
pensions.

What was important in Ultra was the fact that the Germans never discovered
they were being intercepted and decrypted. They would have strengthened
their cipher immediately if they had known it was broken.


So either the NSA has committed an unpardonable act of carelessness (beyond
the stupidity of giving 50,000 people like Snowden access to information
that should not have been shared beyond 500) or the program involves lower
strength ciphers that we would not recommend the use of but are still there
in the cipher suites.

I keep telling people that you do not make a system more secure by adding
the choice of a stronger cipher into the application. You make the system
more secure by REMOVING the choice of the weak ciphers.

I would bet that there is more than enough DES traffic to be worth
attacking, and probably quite a bit of IDEA as well. There is probably even
some 40- and 64-bit crypto in use.


Before we assume that the NSA is robbing banks by using an invisibility
cloak, let's consider the likelihood that they are beating up old ladies
and taking their handbags.


On Thu, Sep 5, 2013 at 3:58 PM, Perry E. Metzger  wrote:

> I would like to open the floor to *informed speculation* about
> BULLRUN.
>
> Informed speculation means intelligent, technical ideas about what
> has been done. It does not mean wild conspiracy theories and the
> like. I will be instructing the moderators (yes, I have help these
> days) to ruthlessly prune inappropriate material.
>
> At the same time, I will repeat that reasonably informed
> technical speculation is appropriate, as is any solid information
> available.
>
>
> Perry
> --
> Perry E. Metzger  pe...@piermont.com
>



-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-05 Thread Phillip Hallam-Baker
On Thu, Sep 5, 2013 at 3:58 PM, Perry E. Metzger  wrote:

> I would like to open the floor to *informed speculation* about
> BULLRUN.
>
> Informed speculation means intelligent, technical ideas about what
> has been done. It does not mean wild conspiracy theories and the
> like. I will be instructing the moderators (yes, I have help these
> days) to ruthlessly prune inappropriate material.
>
> At the same time, I will repeat that reasonably informed
> technical speculation is appropriate, as is any solid information
> available.


http://www.theguardian.com/world/2013/sep/05/nsa-gchq-encryption-codes-security
• The NSA spends $250m a year on a program which, among other goals, works
with technology companies to "covertly influence" their product designs.

I believe this confirms my theory that the NSA has plants in the IETF to
discourage moves to strong crypto.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-05 Thread Phillip Hallam-Baker
On Thu, Sep 5, 2013 at 4:41 PM, Perry E. Metzger  wrote:

> On Thu, 5 Sep 2013 15:58:04 -0400 "Perry E. Metzger"
>  wrote:
> > I would like to open the floor to *informed speculation* about
> > BULLRUN.
>
> Here are a few guesses from me:
>
> 1) I would not be surprised if it turned out that some people working
> for some vendors have made code and hardware changes at the NSA's
> behest without the knowledge of their managers or their firm. If I
> were running such a program, paying off a couple of key people here
> and there would seem only rational, doubly so if the disclosure of
> their involvement could be made into a crime by giving them a
> clearance or some such.
>

Or they contacted the NSA alumni working in the industry.



> 2) I would not be surprised if some of the slow speed at which
> improved/fixed hashes, algorithms, protocols, etc. have been adopted
> might be because of pressure or people who had been paid off.
>


> At the very least, anyone whining at a standards meeting from now on
> that they don't want to implement a security fix because "it isn't
> important to the user experience" or adds minuscule delays to an
> initial connection or whatever should be viewed with enormous
> suspicion. Whether I am correct or not, such behavior clearly serves
> the interest of those who would do bad things.
>

I think it is subtler than that. Trying to block a strong cipher is too
obvious. Much better to push for something that is overly complicated or
too difficult for end users to make use of.

* The bizarre complexity of IPSEC.

* Allowing deployment of DNSSEC to be blocked in 2002 by blocking a
technical change that made it possible to deploy in .com.

* Proposals to deploy security policy information (always send me data
encrypted) have been consistently filibustered by people making nonsensical
objections.

> 3) I would not be surprised if random number generator problems in a
> variety of equipment and software were not a very obvious target,
> whether those problems were intentionally added or not.
>

Agreed, the PRNG is the easiest thing to futz with.

It would not surprise me if we discovered kleptography at work as well.


> 4) Choices not to use things like Diffie-Hellman in TLS connections
> on the basis that it damages user experience and the like should be
> viewed with enormous suspicion.
>
> 5) Choices not to make add-ons available in things like chat clients
> or mail programs that could be used for cryptography should be viewed
> with suspicion.


I think the thing that discouraged all that was the decision to make end
user certificates hard to obtain (still no automatic spec) and expire after
a year.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] tamper-evident crypto? (was: BULLRUN)

2013-09-05 Thread Phillip Hallam-Baker
Sent from my difference engine


On Sep 5, 2013, at 9:22 PM, Peter Gutmann  wrote:

> John Denker  writes:
>
>> To say the same thing the other way, I was always amazed that the Nazis were
>> unable to figure out that their crypto was broken during WWII.  There were
>> experiments they could have done, such as sending out a few U-boats under
>> strict radio silence and comparing their longevity to others.
>
> Cognitive dissonance.  "We have been...", sorry "Ve haff been reassured zat
> our cipher is unbreakable, so it must be traitors, bad luck, technical issues,
> ...".

Not necessarily

Anyone who raised a suspicion was risking their life.



> Peter.


Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-07 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 3:13 PM, Gregory Perry wrote:

> >If so, then the domain owner can deliver a public key with authenticity
> >using the DNS.  This strikes a deathblow to the CA industry.  This
> >threat is enough for CAs to spend a significant amount of money slowing
> >down its development [0].
> >
> >How much more obvious does it get [1] ?
>
> The PKI industry has been a sham since day one, and several root certs
> have been compromised by the proverbial "bad guys" over the years (for
> example, the "Flame" malware incident used to sign emergency Windows
> Update packages which mysteriously only affected users in Iran and the
> Middle East, or the Diginotar debacle, or the Tunisian "Ammar" MITM
> attacks etc).  This of course is assuming that the FBI doesn't already
> have access to all of the root CAs so that on domestic soil they can
> sign updates and perform silent MITM interception of SSL and
> IPSEC-encrypted traffic using transparent inline layer-2 bridging
> devices that are at every major Internet peering point and interconnect,
> because that would be crazy talk.
>

Before you make silly accusations, go read the VeriSign Certification
Practice Statement and then work out how many people it takes to gain
access to one of the roots.

The Key Ceremonies are all videotaped from start to finish, and the auditors
have reviewed at least some of the ceremonies. So while it is not beyond
the realms of possibility that such a large number of people were suborned,
I think it highly unlikely.

Add to which, Jim Bidzos is not exactly known for being well disposed toward
the NSA or key escrow.


Hacking CAs is a poor approach because it is a very visible attack.
Certificate Transparency is merely automating and generalizing controls
that already exist.

But we can certainly add them to S/MIME, why not.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Protecting Private Keys

2013-09-07 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 10:20 AM, Jeffrey I. Schiller  wrote:

>
> If I was the NSA, I would be scavenging broken hardware from
> “interesting” venues and purchasing computers for sale in interesting
> locations. I would be particularly interested in stolen computers, as
> they have likely not been wiped.
>

+1

And this is why I have been so peeved at the chorus of attacks against
trustworthy computing.

All I have ever really wanted from Trustworthy computing is to be sure that
my private keys can't be copied off a server.


And private keys should never be in more than one place unless they are
either an offline Certificate Signing Key for a PKI system or a decryption
key for stored data.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-07 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 5:19 AM, ianG  wrote:

> On 7/09/13 10:15 AM, Gregory Perry wrote:
>
>  Correct me if I am wrong, but in my humble opinion the original intent
>> of the DNSSEC framework was to provide for cryptographic authenticity
>> of the Domain Name Service, not for confidentiality (although that
>> would have been a bonus).
>>
>
>
> If so, then the domain owner can deliver a public key with authenticity
> using the DNS.  This strikes a deathblow to the CA industry.  This threat
> is enough for CAs to spend a significant amount of money slowing down its
> development [0].
>
> How much more obvious does it get [1] ?
>

Good theory, except that the CA industry tried very hard to deploy and was
prevented from doing so because Randy Bush abused his position as DNSEXT
chair to prevent modification of the spec to meet the deployment
requirements in .com.

DNSSEC would have deployed in 2003 with the DNS ATLAS upgrade had the IETF
followed the clear consensus of the DNSEXT working group and approved the
OPT-IN proposal. The code was written and ready to deploy.

I told the IESG and the IAB that the VeriSign position was no bluff and
that if OPT-IN did not get approved there would be no deployment in .com. A
business is not going to spend $100 million on deployment of a feature that
has no proven market demand when the same job can be done for $5 million
with only minor changes.


CAs do not make their money in the ways you imagine. If there were any
business case for DNSSEC I would have no problem at all finding people
willing to pay $50-100 to have a CA run their DNSSEC for them, because that
is going to be a lot cheaper than finding a geek with the skills needed to
do the configuration, let alone do the work.

One reason that PGP has not spread very far is that there is no group that
has a commercial interest in marketing it.

At the moment revenues from S/MIME are insignificant for all the CAs.
Comodo gives away S/MIME certs for free. It's just not worth enough to try
to charge for right now.

If we can get people using secure email or DNSSEC on a large scale then CAs
will figure out how to make money from it. But right now nobody is making a
profit from either.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 8:53 PM, Gregory Perry wrote:

> On 09/07/2013 07:52 PM, Jeffrey I. Schiller wrote:
> > Security fails on the Internet for three important reasons, that have
> > nothing to do with the IETF or the technology per-se (except for point
> > 3).
> >  1.  There is little market for “the good stuff”. When people see that
> >  they have to provide a password to login, they figure they are
> >  safe... In general the consuming public cannot tell the
> >  difference between “good stuff” and snake oil. So when presented
> >  with a $100 “good” solution or a $10 bunch of snake oil, guess
> >  what gets bought.
> The IETF mandates the majority of the standards used on the Internet
> today.


No, they do not. There are the W3C and OASIS, both of which are larger now.
And there has always been the IEEE.

And they have no power to mandate anything. In fact, one of the things I
have been trying to do is persuade people that playing Canute and commanding
the tide to turn back is futile. People need to understand that the IETF does
not have any power to mandate anything and that stakeholders will only
follow standards proposals if they see a value in doing so.




>  If the IETF were truly serious about authenticity and integrity
> and confidentiality of communications on the Internet, then there would
> have been interim ad-hoc link layer encryption built into SMTP
> communications since the end of U.S. encryption export regulations.
>

Like STARTTLS, which has been in the standards and deployed for a decade now?



> There would have been an IETF-mandated requirement for Voice over IP
> transport encryption, to provide a comparable set of confidentiality
> with VoIP communications that are inherent to traditional copper-based
> landline telephones.  There would at the very least be ad-hoc (read
> non-PKI integrated) DNSSEC.
>

What on earth is that? DNS is a directory so anything that authenticates
directory attributes is going to be capable of being used as a PKI.



> And then there is this Bitcoin thing.  I say this as an individual that
> doesn't even like Bitcoin.  For the record and clearly off topic, I hate
> Bitcoin with a passion and I believe that the global economic crisis
> could be easily averted by returning to a precious metal standard with
> disparate local economies and currencies, all in direct competition with
> each other for the best possible GDP.
>

The value of all the gold in the world ever mined is $8.2 trillion. The
NASDAQ alone traded $46 trillion last Friday.

There are problems with bitcoin but I would worry rather more about the
fact that the Feds have had no trouble at all shutting down every prior
attempt at establishing a currency of that type and the fact that there is
no anonymity whatsoever.





> So how does Bitcoin exist without the IETF?  In its infancy, millions of
> dollars of transactions are being conducted daily via Bitcoin, and there
> is no IETF involved and no central public key infrastructure to validate
> the papers of the people trading money with each other.  How do you
> counter this Bitcoin thing, especially given your tenure and experience
> at the IETF?


Umm, I would suggest that it has more to do with supply and demand and the
fact that there is a large amount of economic activity that is locked out
of the formal banking system (including the entire nation of Iran) and is
willing to pay a significant premium for access to a secondary.


> Nonsense.  Port 25 connects to another port 25 and exchanges a public
> key.  Then a symmetrically keyed tunnel is established.  This is not a
> complex thing, and could have been written into the SMTP RFC decades ago.


RFC 3207, published in 2002.
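For reference, RFC 3207 specifies STARTTLS: the client opens a plain SMTP session and upgrades it to TLS in place with the STARTTLS verb. A minimal sketch using Python's standard smtplib (the host name is a placeholder and the submission port 587 is an assumption; this is an illustration, not a hardened mail client):

```python
import smtplib
import ssl

def send_over_starttls(host: str, port: int = 587) -> str:
    """Open a plain SMTP session, then upgrade it in place per RFC 3207.
    Returns the server's EHLO response issued inside the TLS channel."""
    context = ssl.create_default_context()  # verifies the server certificate
    with smtplib.SMTP(host, port) as smtp:
        smtp.ehlo()                          # plain-text greeting
        smtp.starttls(context=context)       # STARTTLS verb + TLS handshake
        smtp.ehlo()                          # re-greet over the encrypted channel
        return smtp.ehlo_resp.decode()
```

Note that opportunistic STARTTLS of this kind protects the hop, not the message: without certificate pinning or DANE, a man-in-the-middle can still strip the STARTTLS capability from the EHLO response.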


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 10:35 PM, Gregory Perry
wrote:

> >On 09/07/2013 09:59 PM, Phillip Hallam-Baker wrote:
> >
> >Anyone who thinks Jeff was an NSA mole when he was one of the main people
> >behind the MIT version of PGP and the distribution of Kerberos is talking
> >daft.
> >
> >I think that the influence was rather more subtle and was more directed
> >at encouraging choices that would make the crypto hopelessly impractical
> >so people would not use it than in adding backdoors.
> >
> >One of the lessons of PRISM is that metadata is very valuable. In
> >particular social network analysis. If I know who is talking to whom then I
> >have pretty much 90% of the data needed to wrap up any conspiracy against
> >the government. So let's make sure we all use PGP and sign each other's
> >keys...
>
> 1) At the core of the initial PGP distribution authored by Philip R.
> Zimmermann, Jr. was the RSA public key encryption method
>
> 2) At that time, the Clinton administration and his FBI was advocating
> widespread public key escrow mechanisms, in addition to the inclusion of
> the Clipper chip to all telecommunication devices to be used for remote
> "lawful intercepts"
>
> 3) Shortly after the token indictment of Zimmerman (thus prompting
> widespread use and promotion of the RSA public key encryption algorithm),
> the Clinton administration's FBI then advocated a relaxation of encryption
> export regulations in addition to dropping all plans for the Clipper chip
>
> 4) On September 21, 2000, the patent for the RSA public key encryption
> algorithm expired, yet RSA released their open source version of the RSA
> encryption algorithm two weeks prior to their patent's expiry for use
> within the public domain
>
> 5) Based upon the widespread use and public adoption of the RSA public key
> encryption method via the original PGP debacle, RSA (now EMC) could have
> easily adjusted the initial RSA patent term under the auspice of national
> security, which would have guaranteed untold millions (if not billions) of
> additional dollars in revenue to the corporate RSA patent holder
>
> You do the math
>

This is seriously off topic here, but the idea that the indictment of Phil
Zimmermann was a token effort is nonsense. I was not accusing Phil Z. of
being a plant.

Not only was Louis Freeh going after Zimmermann for real, he went against
Clinton in revenge for the Clipper chip program being junked. He spent much
of Clinton's second term conspiring with Republicans in Congress to get
Clinton impeached.

Clipper was an NSA initiative that began under Bush or probably even
earlier. They got the incoming administration to endorse it as a fait
accompli.


Snowden and Manning on the other hand... Well I do wonder if this is all
some mind game to get people to secure the Internet against cyberattacks.
But the reason I discount that as a possibility is that what has been
revealed has completely destroyed trust. We can't work with the Federal
Government on information security the way that we did in the past any more.

I think the administration needs to make a downpayment on restoring trust.
They could begin by closing the gulag in Guantanamo.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] MITM source patching [was Schneier got spooked]

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 1:42 AM, Tim Newsham  wrote:

> Jumping in to this a little late, but:
>
> >  Q: "Could the NSA be intercepting downloads of open-source
> > encryption software and silently replacing these with their own
> versions?"
> >  A: (Schneier) Yes, I believe so.
>
> perhaps, but they would risk being noticed. Some people check file hashes
> when downloading code. FreeBSD's port system even does it for you and
> I'm sure other package systems do, too.   If this was going on en masse,
> it would get picked up pretty quickly...  If targeted, on the other hand,
> it
> would work well enough...
>

But is the source compromised in the archive?


I think we need a different approach to source code management. Get rid of
user authentication completely; passwords and SSH are both a fragile
approach. Instead, every code update to the repository should be signed and
recorded in an append-only log, and the log should be public and enable any
party to audit the set of updates at any time.

This would be 'Code Transparency'.

The problem is that we would need to modify Git to implement it.
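As a sketch of what such a 'Code Transparency' log might look like (a hypothetical illustration: a real system would use public-key signatures where this toy uses HMAC, and would replicate the log to independent auditors):

```python
import hashlib
import hmac
import json

class UpdateLog:
    """Append-only log of signed code updates.  Each entry binds the
    update to the hash of the previous entry, so any retroactive edit
    breaks the chain and is visible to every auditor."""

    def __init__(self):
        self.entries = []
        self.head = b"\x00" * 32  # hash of the (empty) genesis state

    def append(self, author: str, patch: bytes, key: bytes):
        body = {
            "prev": self.head.hex(),
            "author": author,
            "patch": hashlib.sha256(patch).hexdigest(),
        }
        blob = json.dumps(body, sort_keys=True).encode()
        # HMAC stands in for a real public-key signature here
        sig = hmac.new(key, blob, hashlib.sha256).hexdigest()
        self.entries.append({"body": body, "sig": sig})
        self.head = hashlib.sha256(blob).digest()

    def audit(self, keys: dict) -> bool:
        """Re-walk the whole chain; any tampered, removed, or reordered
        entry fails verification."""
        prev = b"\x00" * 32
        for e in self.entries:
            if e["body"]["prev"] != prev.hex():
                return False
            blob = json.dumps(e["body"], sort_keys=True).encode()
            sig = hmac.new(keys[e["body"]["author"]], blob,
                           hashlib.sha256).hexdigest()
            if not hmac.compare_digest(sig, e["sig"]):
                return False
            prev = hashlib.sha256(blob).digest()
        return prev == self.head
```

The point of the chained `prev` hashes is that an attacker who compromises the archive cannot silently rewrite history: the first altered entry invalidates every later one for any party that re-runs the audit.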

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Opening Discussion: Speculation on "BULLRUN"

2013-09-08 Thread Phillip Hallam-Baker
On Sat, Sep 7, 2013 at 9:50 PM, John Gilmore  wrote:

> > >> First, DNSSEC does not provide confidentiality.  Given that, it's not
> > >> clear to me why the NSA would try to stop or slow its deployment.
>
> DNSSEC authenticates keys that can be used to bootstrap
> confidentiality.  And it does so in a globally distributed, high
> performance, high reliability database that is still without peer in
> the world.
>
> It was never clear to me why DNSSEC took so long to deploy, though
> there was one major moment at an IETF in which a member of the IESG
> told me point blank that Jim Bidzos had made himself so hated that the
> IETF would never approve a standard that required the use of the RSA
> algorithm -- even despite a signed blanket license for use of RSA for
> DNSSEC, and despite the expiration of the patent.  I


No, that part is untrue. I sat at the table with Jeff Schiller and Burt
Kaliski when Burt pitched S/MIME at the IETF. He was Chief Scientist of RSA
Labs at the time.

Jim did go after Phil Z. over PGP initially. But Phil Z. was violating the
patent at the time. That led to RSAREF and the MIT version of PGP.


DNSSEC was (and is) a mess as a standard because it is an attempt to
retrofit a PKI onto a directory that was designed around some very tight
network constraints and with a very poor architecture.

> PS: My long-standing domain registrar (enom.com) STILL doesn't support
> DNSSEC records -- which is why toad.com doesn't have DNSSEC
> protection.  Can anybody recommend a good, cheap, reliable domain
> registrar who DOES update their software to support standards from ten
> years ago?


The Registrars are pure marketing operations. Other than GoDaddy, which
implemented DNSSEC because they are trying to sell the business and more
tech looks kewl during due diligence, there is no market demand for
DNSSEC.

One problem is that the Registrars almost invariably sell DNS registrations
at cost or at a loss and make the money up on value-added products, in
particular SSL certificates.


-- 
Website: http://hallambaker.com/

[Cryptography] Trapdoor symmetric key

2013-09-08 Thread Phillip Hallam-Baker
Two caveats on the commentary about a symmetric key algorithm with a
trapdoor being a public key algorithm.

1) The trapdoor need not be a good public key algorithm; it can be flawed
in ways that would make it unsuited for use as a public key algorithm, for
instance if it is possible to compute the private key from the public key
or to deduce the private key from multiple messages.

2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced the
search space for a brute-force attack from 128 bits to 64, or that only
worked on some messages, would be enough leverage for intercept purposes
but would make the algorithm useless as a public key system.
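A toy illustration of point 2 (a deliberately weak 16-bit construction, not a real cipher): a trapdoor holder who knows the top half of every key searches 2^8 candidates where everyone else must search up to 2^16.

```python
import hashlib

# Toy "cipher" with a 16-bit key: the key seeds a SHA-256 pad which is
# XORed with the message.  Illustration only -- trivially weak by design.
def toy_encrypt(key: int, msg: bytes) -> bytes:
    pad = hashlib.sha256(key.to_bytes(2, "big")).digest()
    return bytes(m ^ p for m, p in zip(msg, pad))

def brute_force(ct: bytes, known_pt: bytes,
                fixed_bits: int = 0, fixed_val: int = 0):
    """Search for the key.  A trapdoor holder who knows the top
    `fixed_bits` bits of every key (fixed_val) searches a space of
    2**(16 - fixed_bits); everyone else faces the full 2**16."""
    tries = 0
    for low in range(2 ** (16 - fixed_bits)):
        key = (fixed_val << (16 - fixed_bits)) | low
        tries += 1
        if toy_encrypt(key, ct) == known_pt:  # XOR cipher: decrypt == encrypt
            return key, tries
    return None, tries
```

Scaled up, this is exactly the asymmetry described above: a 128-bit cipher that leaks 64 key bits to a trapdoor holder is still unbreakable to everyone else, but falls to the holder's 2^64 search.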

-- 
Website: http://hallambaker.com/

Re: [Cryptography] Trapdoor symmetric key

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 12:19 PM, Faré  wrote:

> On Sun, Sep 8, 2013 at 9:42 AM, Phillip Hallam-Baker 
> wrote:
> > Two caveats on the commentary about a symmetric key algorithm with a
> > trapdoor being a public key algorithm.
> >
> > 1) The trapdoor need not be a good public key algorithm, it can be
> flawed in
> > ways that would make it unsuited for use as a public key algorithm. For
> > instance being able to compute the private key from the public or deduce
> the
> > private key from multiple messages.
> >
> Then it's not a symmetric key algorithm with a trapdoor, it's just a
> broken algorithm.


But the compromise may only be visible if you have access to some
cryptographic technique which we don't currently have.

The point I am making is that a backdoor in a symmetric function need not
be a secure public key system, it could be a breakable one. And that is a
much wider class of function than public key cryptosystems. There are many
approaches that were tried before RSA and ECC were settled on.




> > 2) The trapdoor need not be a perfect decrypt. A trapdoor that reduced
> the
> > search space for brute force search from 128 bits to 64 or only worked on
> > some messages would be enough leverage for intercept purposes but make it
> > useless as a public key system.
> >
> I suppose the idea is that by using the same trapdoor algorithm or
> algorithm family
> and doubling the key size (e.g. 3DES style), you get a 256-bit
> symmetric key system
> that can be broken in 2^128 attempts by someone with the system's private
> key
> but 2^256 by someone without. If in your message you then communicate 128
> bits
> of information about your symmetric key, the guy with the private key
> can easily crack your symmetric key, whereas others just can't.
> Therefore that's a great public key cryptography system.
>

2^128 is still beyond the reach of brute force.

2^64 against the 128-bit key we usually use, on the other hand...



Perhaps we should do a test, move to 256 bits on a specific date across the
net and see if the power consumption rises near the NSA data centers.

-- 
Website: http://hallambaker.com/

[Cryptography] Points of compromise

2013-09-08 Thread Phillip Hallam-Baker
I was asked to provide a list of potential points of compromise by a
concerned party. I list the following so far as possible/likely:


1) Certificate Authorities

Traditionally the major concern (perhaps to the point of distraction from
other, more serious ones). Main caveat: CA compromises leave permanent,
visible traces, as recent experience shows, and there are many eyes looking.
Even if Google were compromised I can't believe Ben Laurie and Adam Langley
are proposing CT in bad faith.


2) Covert channel in Cryptographic accelerator hardware.

It is possible that cryptographic accelerators have covert channels leaking
the private key through TLS (packet alignment, field ordering, timing,
etc.) or in key generation (kleptography of the RSA modulus à la Moti
Yung).


3) Cryptanalytic attack on one or more symmetric algorithms.

I can well believe that RC4 is bust and that there is enough RC4 activity
going on to make cryptanalysis worthwhile. The idea that AES is
compromised seems much less likely to me.


4) Protocol vulnerability introduced intentionally through IETF

I find this rather unlikely to be a direct action: there are few places
where the spec could be changed to advantage an attacker, only the editors
would have the control necessary to introduce text, and there are many
eyes.


5) Protocol vulnerability that IETF might have fixed but was discouraged
from fixing.

Oh, more times than I can count. And I would not discount the possibility
of strategies based on exploiting the natural suspicion surrounding
security matters. It would have been easy for a faction to derail DNSSEC by
feeding the WG chair's existing hostility to CAs, telling him to stand
firm.


One concern here is that this will fuel the attempt to bring IETF under
control of the ITU and Russia, China, etc.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] Market demands for security (was Re: Opening Discussion: Speculation on "BULLRUN")

2013-09-08 Thread Phillip Hallam-Baker
On Sun, Sep 8, 2013 at 3:08 PM, Perry E. Metzger  wrote:

> On Sun, 8 Sep 2013 08:40:38 -0400 Phillip Hallam-Baker
>  wrote:
> > The Registrars are pure marketing operations. Other than GoDaddy
> > which implemented DNSSEC because they are trying to sell the
> > business and more tech looks kewl during due diligence, there is
> > not a market demand for DNSSEC.
>
> Not to discuss this particular case, but I often see claims to the
> effect that "there is no market demand for security".
>
> I'd like to note two things about such claims.
>
> 1) Although I don't think P H-B is an NSA plant here, I do
> wonder about how often we've heard that in the last decade from
> someone trying to reduce security.
>

There is a market demand for security. But it is always item #3 on the list
of priorities, and only the top two get done.

I have sold seven-figure crypto installations that have remained shelfware.

The moral is that we have to find other market reasons to use security, for
example simplifying administration of endpoints. I do not argue, like some
do, that there is no market for security so we should give up; I argue that
there is little market for something that only provides security, and so to
sell security we have to attach it to something customers want.




> 2) I doubt that safety is, per se, anything the market demands from
> cars, food, houses, etc. When people buy such products, they don't
> spend much time asking "so, this house, did you make sure it won't
> fall down while we're in it and kill my family?" or "this coffee mug,
> it doesn't leach arsenic into the coffee does it?"
>

People buy guns despite statistics showing that they are orders of
magnitude more likely to be shot with the gun themselves than by an
attacker.


> However, if you told consumers "did you know that food manufacturer
> X does not test its food for deadly bacteria on the basis that ``there
> is no market demand for safety''", they would form a lynch mob.
> Consumers *presume* their smart phones will not leak their bank
> account data and the like given that there is a banking app for it,
> just as they *presume* that their toaster will not electrocute them.
>

Yes, but in most cases the telco will only buy a fix after they have been
burned.

To sell DNSSEC we should provide a benefit to the people who need to do the
deployment. The problem is that the perceived benefit goes to the people
visiting the site, which is a different group...


It is fixable, people just need to understand that the stuff does not sell
itself.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] The One True Cipher Suite

2013-09-09 Thread Phillip Hallam-Baker
On Mon, Sep 9, 2013 at 3:58 AM, ianG  wrote:

> On 9/09/13 02:16 AM, james hughes wrote:
>
>  I am honestly curious about the motivation not to choose more secure
>> modes that are already in the suites?
>>
>
> Something I wrote a bunch of years ago seems apropos, perhaps minimally as
> a thought experiment:
>
>
>
> Hypothesis #1 -- The One True Cipher Suite
>
>
> In cryptoplumbing, the gravest choices are apparently on the nature of the
> cipher suite. To include latest fad algo or not? Instead, I offer you a
> simple solution. Don't.
>
> There is one cipher suite, and it is numbered Number 1.
>
> Ciphersuite #1 is always negotiated as Number 1 in the very first message.
> It is your choice, your ultimate choice, and your destiny. Pick well.
>
> If your users are nice to you, promise them Number 2 in two years. If they
> are not, don't. Either way, do not deliver any more cipher suites for at
> least 7 years, one for each hypothesis.
>
>And then it all went to pot...
>
> We see this with PGP. Version 2 was quite simple and therefore stable --
> there was RSA, IDEA, MD5, and some weird padding scheme. That was it.
> Compatibility arguments were few and far between. Grumbles were limited to
> the padding scheme and a few other quirks.
>
> Then came Versions 3-8, and it could be said that the explosion of options
> and features and variants caused more incompatibility than any standards
> committee could have done on its own.
>
>Avoid the Champagne Hangover
>
> Do your homework up front.
>
> Pick a good suite of ciphers, ones that are Pareto-Secure, and do your
> best to make the combination strong [1]. Document the short falls and do
> not worry about them after that. Cut off any idle fingers that can't keep
> from tweaking. Do not permit people to sell you on the marginal merits of
> some crazy public key variant or some experimental MAC thing that a
> cryptographer knocked up over a weekend or some minor foible that allows an
> attacker to learn your aunty's birth date after asking a million times.
>
> Resist the temptation. Stick with The One.
>


Steve Bellovin has made the same argument and I agree with it.
Proliferation of cipher suites is not helpful.

The point I make is that adding a strong cipher does not make you more
secure. Only removing the option of using weak ciphers makes you more
secure.

There are good reasons to avoid MD5 and IDEA but at this point we are very
confident of AES and SHA3 and reasonably confident of RSA.

We will need to move away from RSA at some point in the future. But ECC is
a mess right now. We can't trust the NIST curves any more and the IPR
status is prohibitively expensive to clarify.

-- 
Website: http://hallambaker.com/

Re: [Cryptography] The One True Cipher Suite

2013-09-10 Thread Phillip Hallam-Baker
On Tue, Sep 10, 2013 at 7:42 AM, Jerry Leichter  wrote:

> On Sep 9, 2013, at 12:00 PM, Phillip Hallam-Baker wrote:
> > Steve Bellovin has made the same argument and I agree with it.
> Proliferation of cipher suites is not helpful.
> >
> > The point I make is that adding a strong cipher does not make you more
> secure. Only removing the option of using weak ciphers makes you more
> secure.
> I'm not so sure I agree.  You have to consider the monoculture problem,
> combined with the threat you are defending against.
>

I really hate the monoculture argument. It misses the fact that Internet
applications and attack strategies do not evolve according to Darwinian
evolution.

Diversity is only a successful strategy against Darwinian evolution. It
does not work against intelligent design, and malware is a product of
intelligent design.


Whether it is better to put all your eggs in one basket or in many baskets
depends on the consequences of compromise.

If the loss of one egg is acceptable then many baskets is the way to go. If
on the other hand they are dragon eggs and the loss of just one is a
catastrophe then putting them all in one basket is the lowest risk strategy.
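Whether diversification helps thus reduces to simple arithmetic. Assuming, purely for illustration, that each basket is independently compromised with probability p:

```python
def p_any_lost(p: float, n: int) -> float:
    """Chance that at least one of n independent baskets is compromised,
    each with probability p.  When any single loss is catastrophic, this
    is the number that matters -- and it grows with n."""
    return 1.0 - (1.0 - p) ** n
```

With p = 0.01, one basket carries a 1% risk while ten baskets carry roughly 9.6%: spreading the eggs raises the chance of losing *some* egg, which is exactly the wrong strategy for dragon eggs.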

1.  If everyone uses the same cipher, the attacker need only attack that
> one cipher.
> 2.  If there are thousands of ciphers in use, the attacker needs to attack
> some large fraction of them.
>

But on the flip side the cost of developing ciphers is large and the
vulnerabilities introduced into a protocol through support for algorithm
negotiation are significant.

Moreover, as Newt Gingrich discovered, it only takes one party to your
conversation to be using an old AMPS analog line for your conspiracy to be
revealed.


I would rather choose one algorithm, plus one additional strong algorithm
as a backup, than have hundreds of algorithms.


-- 
Website: http://hallambaker.com/

Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Phillip Hallam-Baker
On Wed, Sep 11, 2013 at 2:40 PM, Bill Stewart wrote:

> At 10:39 AM 9/11/2013, Phillip Hallam-Baker wrote:
>
>> Perfect Forward Secrecy is not perfect. In fact it is no better than
>> regular public key. The only difference is that if the public key system is
>> cracked then with PFS the attacker has to break every single key exchange
>> and not just the keys in the certificates and if you use an RSA outer with
>> an ECC inner then you double the cryptanalytic cost of the attack (theory
>> as well as computation).
>>
>
> I wouldn't mind if it had been called Pretty Good Forward Secrecy instead,
> but it really is a lot better than regular public key.
>

My point was that the name is misleading and causes people to look for more
than is there. It took me a long time to work out how PFS worked until I
suddenly realized that it does not deliver what is advertised.



> The main difference is that cracking PFS requires breaking every single
> key exchange before the attack using cryptanalysis, while cracking the RSA
> or ECC outer layer can be done by compromising the stored private key,
> which is far easier to do using subpoenas or malware or rubber hoses than
> cryptanalysis.
>

That is my point precisely.

Though the way you put it, I have to ask whether PFS deserves higher
priority than Certificate Transparency, as in something we can deploy in
weeks rather than years.

I have no problem with Certificate Transparency. What I do have trouble
with is Ben L.'s notion of Certificate Transparency plus Automatic Audit in
the End Client, which imposes a lot more in the way of costs than just
transparency; moreover, he wants to push the costs out to the CAs so he can
hyper-tune the performance of his browser.


-- 
Website: http://hallambaker.com/

[Cryptography] Defenses against pervasive versus targeted intercept

2013-09-11 Thread Phillip Hallam-Baker
I have spent most of yesterday writing up much of the traffic on the list
so far in the form of an Internet Draft.

I am now at the section on controls and it occurs to me that the controls
relevant to preventing PRISM-like pervasive intercept capabilities are not
necessarily restricted to controls that protect against targeted intercept.

The problem I have with PRISM is that it is a group of people whose
politics I probably find repellent performing a dragnet search that may
later be used for McCarthyite/Hooverite inquisitions. So I am much more
concerned about the pervasive part than about the ability to perform
targeted attacks on a few individuals who have come to notice. If the NSA
wanted my help intercepting Al Zawahiri's private emails, then sign me up.
My problem is that they are intercepting far too much and lying about what
they are doing.


Let us imagine for the sake of argument that the NSA has cracked 1024 bit
RSA using some behemoth computer at a cost of roughly $1 million per key
and taking a day to do so. Given such a capability it would be logical for
them to attack high-traffic/high-priority 1024-bit keys. I have not looked
into the dates when the 2048-bit rollout began (it seems to me we have been
talking about it for ten years), but that might be consistent with that
2010 date.

If people are using plain TLS without perfect forward secrecy, that crack
gives the NSA access to potentially millions of messages an hour. If the
web browsers are all using PFS then the best they can do is one message a
day.

PFS provides security even when the public keys used in the conversation
are compromised before the conversation takes place. It does not prevent
attack but it reduces the capacity of the attacker.
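The arithmetic behind that capacity argument can be sketched as follows. The traffic figure is a hypothetical stand-in for a high-volume server; the one-key-per-day rate is the assumption made above:

```python
# Toy capacity model using the assumptions above: cracking one 1024-bit RSA
# key costs roughly $1M and takes one day of behemoth-computer time.
keys_cracked_per_day = 1

# Without PFS, one cracked long-lived server key exposes every session
# protected by it.
sessions_per_day_high_traffic_server = 10_000_000  # hypothetical figure

# With PFS, every session has its own ephemeral exchange, so each day of
# cryptanalytic effort recovers exactly one session.
exposed_without_pfs = sessions_per_day_high_traffic_server
exposed_with_pfs = keys_cracked_per_day

print(exposed_without_pfs // exposed_with_pfs)  # capacity reduced by 10000000x
```

The point is not the particular numbers but the shape of the curve: the attacker's yield per unit of cryptanalytic effort drops from "all traffic under the key" to "one session."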


Similar arguments can be made for other less-than-perfect key exchange
schemes. It is not necessary for a key exchange scheme to be absolutely
secure against all possible attack for it to be considered PRISM-Proof.

So the key distribution scheme I am looking at does have potential points
of compromise because I want it to be something millions could use rather
than just a few thousand geeks who will install but never use. But the
objective is to make those points of compromise uneconomic to exploit on
the scale of PRISM.


The NSA should have accepted court oversight of their activities. If they
had strictly limited their use of the cryptanalytic capabilities then their
existence would not have been known to low-level grunts like Snowden and we
probably would not have found out.

Use of techniques like PFS restores balance.



[Cryptography] Summary of the discussion so far

2013-09-11 Thread Phillip Hallam-Baker
I have attempted to produce a summary of the discussion so far for use as a
requirements document for the PRISM-PROOF email scheme. This is now
available as an Internet draft.

http://www.ietf.org/id/draft-hallambaker-prismproof-req-00.txt

I have left out acknowledgements and references at the moment. That is
likely to take a whole day going back through the list and I wanted to get
this out.

If anyone wants to claim responsibility for any part of the doc then drop
me a line and I will have the black helicopter sent round.



Re: [Cryptography] People should turn on PFS in TLS (was Re: Fwd: NYTimes.com: N.S.A. Foils Much Internet Encryption)

2013-09-11 Thread Phillip Hallam-Baker
On Tue, Sep 10, 2013 at 3:56 PM, Bill Stewart wrote:

> At 11:33 AM 9/6/2013, Peter Fairbrother wrote:
>
>> However, while the case for forward secrecy is easy to make, implementing
>> it may be a little dangerous - if NSA have broken ECDH then
>> using it only gives them plaintext they maybe didn't have before.
>>
>
> I thought the normal operating mode for PFS is that there's an initial
> session key exchange (typically RSA) and authentication,
> which is used to set up an encrypted session, and within that session
> there's a DH or ECDH key exchange to set up an ephemeral session key,
> and then that session key is used for the rest of the session.
> If so, even if the NSA has broken ECDH, they presumably need to see both
> Alice and Bob's keyparts to use their break,
> which they can only do if they've cracked the outer session (possibly
> after the fact.)
> So you're not going to leak any additional plaintext by doing ECDH
> compared to sending the same plaintext without it.



One advantage of this approach is that we could use RSA for one layer and
ECC for the other, and thus avoid most consequences of an RSA-2048 break
(if such a break is possible).

The problem I see reviewing the list is that ECC has suddenly become
suspect and we still have doubts about the long term use of RSA.


It also has the effect of pushing the ECC IPR concerns off the CA and onto
the browser/server providers. I understand that many of them have already
got licenses that allow them to do what they need in that respect.

Perfect Forward Secrecy is not perfect. In fact it is no better than
regular public key cryptography. The only difference is that if the public
key system is cracked, then with PFS the attacker has to break every single
key exchange and not just the keys in the certificates. And if you use an
RSA outer with an ECC inner then you double the cryptanalytic cost of the
attack (in theory as well as computation).
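One way to realize the RSA-outer/ECC-inner combination is to derive the session key from both shared secrets, so that recovering it requires breaking both exchanges. A minimal sketch, with placeholder byte strings standing in for the real exchange outputs:

```python
import hashlib

# Placeholder secrets standing in for the outputs of two independent
# exchanges: one protected by the RSA outer layer, one by the ECC inner.
secret_from_rsa_exchange = b"\x01" * 32  # hypothetical value
secret_from_ecc_exchange = b"\x02" * 32  # hypothetical value

# Hash the concatenation of both secrets. An attacker who breaks only one
# of the two algorithms learns nothing about the session key, because the
# other input to the hash remains unknown.
session_key = hashlib.sha256(
    secret_from_rsa_exchange + secret_from_ecc_exchange
).digest()

print(len(session_key))  # 32-byte combined session key
```

The design choice here is that the combiner is only as weak as the stronger of the two inputs: both RSA and ECC would have to fall before the session key does.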


I think this is the way forward.
