Re: [cryptography] John Gilmore: Cryptography list is censoring my emails

2015-01-01 Thread Adam Back
nah, what am I thinking - probably 1988 if not earlier, 27 years! :)

The point is block lists suck: they're always blocking false positives,
and the vigilante, abusive ones take 3x longer to take you off than they
should when you complain, or are simply unresponsive, etc.

They'll also falsely block you, not because your config is insecure but
because it doesn't match their preferred configuration.  Quite
irritating if you've ever tried running your own mail server.

Adam

On 1 January 2015 at 19:12, Adam Back a...@cypherspace.org wrote:
 He's been running an open relay since like 2000 or something... why
 not, it's his relay.

 Adam


 On 1 January 2015 at 18:40, Sadiq Saif li...@sadiqs.com wrote:
 On 12/31/2014 07:16, John Young wrote:
 http://cryptome.org/2014/12/gilmore-crypto-censored.htm

 Don't run an open mail relay and your IP will be off the blacklist.

 Why are you running an open relay in 2014?
 --
 Sadiq Saif
 https://staticsafe.ca
 ___
 cryptography mailing list
 cryptography@randombit.net
 http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] John Gilmore: Cryptography list is censoring my emails

2015-01-01 Thread Adam Back
He's been running an open relay since like 2000 or something... why
not, it's his relay.

Adam


On 1 January 2015 at 18:40, Sadiq Saif li...@sadiqs.com wrote:
 On 12/31/2014 07:16, John Young wrote:
 http://cryptome.org/2014/12/gilmore-crypto-censored.htm

 Don't run an open mail relay and your IP will be off the blacklist.

 Why are you running an open relay in 2014?
 --
 Sadiq Saif
 https://staticsafe.ca


[cryptography] NSA, FBI creep rule of law, democracy itself (Re: To Protect and Infect Slides)

2014-01-07 Thread Adam Back

This is indeed an interesting and scary question:

On Sun, Jan 05, 2014 at 08:31:42PM +0300, ianG wrote:
What is a game changer is the relationship between the NSA and the 
other USA civilian agencies.  The breach of the civil/military line 
is the one thing that has sent the fear level rocketing sky high, as 
there is a widespread suspicion that the civil agencies cannot be 
trusted to keep their fingers out of the pie.  AKA systemic 
corruption.  If allied to national sigint capabilities, we're in a 
world of pain.


Question:  Is there anything that can put some real metrics on how 
developed and advanced this relationship is, how far the poison has 
spread?  How afraid should people in America be?


maybe the most interesting and portentous shift in power towards
Orwellianism and totalitarianism in a century, as it affects the
effectiveness of the rule of law, and the already weak separation of
politics from law enforcement and the justice system, in the (current
though slipping) super-power with unfortunate aspirations of
extra-territorialism and international bullying.  We're still a few
decades from the crossover of financial dominance to Asia and the BRICs,
and most of those places are probably worse than the US by aspiration, if
that's possible, though with less internet spying budget and capability.
Unless something shapes up towards democracy in the super-power
competitors, we're in for a dismal century, seemingly.

The NSA, and now seemingly the FBI, see it this way; I think this FBI
mission creep suggests the national security / law enforcement separation
is slipping badly:

http://news.slashdot.org/story/14/01/07/0015255/fbi-edits-mission-statement-removes-law-enforcement-as-primary-purpose

| Following the 9/11 attacks, the FBI picked up scores of new
| responsibilities related to terrorism and counterintelligence while
| maintaining a finite amount of resources.  What's not in question is that
| government agencies tend to benefit in numerous ways when considered
| critical to national security as opposed to law enforcement.  'If you tie
| yourself to national security, you get funding and you get exemptions on
| disclosure cases,' said McClanahan.  'You get all the wonderful arguments
| about how if you don't get your way, buildings will blow up and the
| country will be less safe.'

So even the FBI are getting their nose into the tent of unfettered access
to historical data on everyone, plus informal channels and tip-offs on
dirt on politically unpopular people - eg say effective security
researchers like Appelbaum, or effective journalists like Greenwald.  (No,
foreigners don't feel very comforted; the explicit acknowledgment of
tip-offs and information channels to US domestic and international law
enforcement basically puts the entire planet at risk of politically
motivated interference.)

With retroactive search of your entire life's electronic footprint,
including every encrypted IM, Skype voip channel, contacts, emails,
attorney-client privileged and not, with no warrant or evidence presented
to a judge for a subpoena, the Orwell 2.0 system can probably fabricate or
concoct trouble for 99% of the adult population of the planet.  George
Orwell, 30 years late.

We're pretty close to fucked as a civilization unless something pretty
radical shifts in the political thinking and authorizations.  And
realistically it's not even clear the NSA can be politically controlled
anymore by the political system.  It's very hard to influence something
with that much skulduggery built into its DNA, that many 10s of billions
in outsourced defense contractor lobbying power, that much inertia and
will to survive as an org, with military PSYOPs to turn on its own
populace and political system, black bag covert ops ties to dirty tricks
in the CIA, and virtual immunity from the judiciary and the law.  They
probably realistically went full speed ahead on such things since 11 Sep
2001, if not earlier, and the scrapping of TIA was nominal only.  TIA
wiki:
http://en.wikipedia.org/wiki/Total_Information_Awareness

| Although the program was formally suspended [as of late 2003], its data
| mining software was later adopted by other government agencies, with only
| superficial changes being made.

Probably even before, since we nominally won the export regulation
debacle, and democratic countries were forced to admit it was inconsistent
with their self-perception as open democratic countries to be controlling
and banning encryption software.  The 21st century equivalent of book
burning.

Can we rectify this with cypherpunks write code?  Maybe, as Schneier said
in a discussion on this topic with Eben Moglen (at Moglen's respective
university): maybe we can make it more expensive by deploying more crypto
that is end-to-end secure, secure by default.  ie more TOFU, more cert
pinning, more certificate transparency distributed cert validation.  Even
the cert validation may be behind the game; perhaps the NSA really do
already have a lot of actual SSL private keys via hardware, 

[cryptography] NSA co-chair claimed sabotage on CFRG list/group (was Re: ECC patent FUD revisited)

2014-01-06 Thread Adam Back

On Sun, Jan 05, 2014 at 09:36:29AM -, D. J. Bernstein wrote:

NSA's Kevin Igoe writes, on the semi-moderated c...@irtf.org list:

[...] impact the Certicom patents have on the use of newer families of
curves, such as Edwards curves.


[...] patent FUD. [...] used to argue against switching to curves that
improve ECC security.  Notice also the complete failure to specify any
patent numbers---so the FUD doesn't have any built-in expiration date, and
there's no easy way for the reader to investigate further.


I am not sure people are aware, and I suppose I am going to stick my neck
out and make it my problem to draw the list's attention to it, but the
co-chair of IRTF CFRG (where Dan Bernstein forwarded the above quote from)
is an NSA employee, and there was a call to remove him from that role on
the basis that the NSA is now openly known to be sabotaging internet
security standards.  And also on the basis of several other specific
complaints of claimed likely sabotage, looking back with this new
information (with implications like the above observation by DJB, but in
relation to proposing insecure changes, misrepresenting the group's
opinion, etc).  The claims are all spelled out if you want to read below.

Lars is the person who, through IRTF process, was to review the question;
he concluded he would leave things as they are, with various
justifications, quoted below by Trevor.

I support whole-heartedly what Trevor said in response (below) and I
encourage people to read it.  A bit of sunlight might help, if the IAB
gets involved perhaps.  Whether or not there is anything provable is not
the point.

The comments on this relatively long thread on CFRG got a little weird,
and the motives for participants' comments were hard to follow in places,
to my reading.  Maybe several parties with different slants and motives
countervailing the public interest.  Or just rude pragmatists (an
exceedingly dangerous species of engineer in crypto or privacy areas, in
my experience).

Adam

==
Date: Mon, 6 Jan 2014 17:48:51 -0800
From: Trevor Perrin tr...@trevp.net
To: c...@irtf.org c...@irtf.org
Cc: IAB IAB i...@iab.org
Subject: Re: [Cfrg] Response to the request to remove CFRG co-chair

Hi Lars,

Thanks for considering this request.

Of course, I'm disappointed with the response.

--

I brought to your attention Kevin's record of technical mistakes and
mismanagement over a two year period, on the major issue he has
handled as CFRG co-chair.  You counted this as a single occurrence,
and considered only the narrow question whether it is of a severity
that would warrant an immediate dismissal.

I appreciate your desire to be fair to Kevin and give him the benefit
of the doubt.  But it would be better to consider what's best for
CFRG.  CFRG needs a competent and diligent chair who could lead review
of something like Dragonfly to a successful outcome, instead of the
debacle it has become.

--

I also raised a conflict-of-interest concern regarding Kevin's NSA
employment.  You considered this from the perspectives of:
 (A) Kevin's ability to subvert the group's work, and
 (B) the impact on RG participation.

Regarding (A), you assessed that IRTF chairs are little more than
group secretaries who do not wield more power over the content of
the ongoing work than other research group participants.

That's a noble ideal, but in practice it's untrue.  Chairs are
responsible for creating agendas, running meetings, deciding when and
how to call for consensus, interpreting the consensus, and liaising
with other parties.  All this gives them a great deal of power in
steering a group's work.

You also assessed that the IETF/IRTF's open processes are an
adequate safeguard against NSA subversion, even by a group chair.  I'm
not sure of that.  I worry about soft forms of sabotage like making
Internet crypto hard to implement securely, and hard to deploy widely;
or tipping groups towards dysfunction and ineffectiveness.  Since
these are common failure modes for IETF/IRTF crypto activities, I'm
not convinced IETF/IRTF process would adequately detect this.


Regarding (B), you judged this a tradeoff between those who would
not participate in an NSA-chaired CFRG (like myself), and those
affiliated with NSA whom you presume we would eliminate from
participating.

Of course, that's a bogeyman.  No-one wants to prevent anyone else
from participating.

But the chair role is not a right given to every participant, it's a
responsibility given to those we trust.  The IETF/IRTF should not
support a chair for any activity X that has a strong interest in
sabotaging X.  This isn't a slippery slope, it's common sense.

--

Finally, I think Kevin's NSA affiliation, and the recent revelations
of NSA sabotage of a crypto standard, raises issues you did not
consider.

You did not consider the cloud of distrust which will hang over an
NSA-chaired CFRG, and over the ideas it endorses.

You also did not 

Re: [cryptography] [Cryptography] Mail Lists In the Post-Snowden Era

2013-10-21 Thread Adam Back

On Sun, Oct 20, 2013 at 06:55:52PM -0400, Peter Todd wrote:

Note that you can use broadcast encryption to efficiently encrypt the
messages to multiple recipients. (a deployed example is in the AACS
video encryption) Or more simply keep people's PGP keys on file and have
the mail server encrypt each email.



(Oh yeah, I top-posted by habit; better copy some text above to preclude
an excuse for censorship - there, done!)

In the context of crypto lists I prefer open, unmoderated/uncensored.

For example, the paranoid might note that it's desirable for the forces of
darkness to control the medium by which the open community communicates:
delete the odd message with plausible deniability, use moderation as a
platform to squelch traffic with a little hidden bias - who's going to
know?  Viz, this crypto list went dark for a year or so (ostensibly
because - actually we're not sure - anyway, no traffic flowed; and finally
the list was reopened, temporarily, when randombit opened up as an
unmoderated list and threatened to take over as a continuously flowing
open medium).  Then again another long hiatus on this list, followed by
reopening only when the world was exploding with Snowden revelations and
recriminations.

Paranoid or not?  If Snowden's episode showed one thing, it's that people
were too naive, and not paranoid enough.  It's easy pickings to step up
for admin positions in organizations, because momentum and laziness
dictate that others will not, and then some regime of sabotage, discussion
shaping, and control can ensue.  Anyone who's done any standardization
work will have found defense research people holding unlikely chair
positions - medical health care message security - UK defense research
agency.  Really?  Why?  Probably to avoid use of forward-secrecy or
suchlike soft-sabotage.  People should re-read the declassified old
sabotage manual and dwell on what could be done with $250m/year against
open discussion forums, protocols, open source software,
chairman/organizational positions, etc.  Because it's probably being
actively done right now.  Accident that Android is using crap
ciphersuites - or plausibly deniable sabotage?

Adam

(copied to the unmoderated list)


[cryptography] was this FIPS 186-1 (first DSA) an attempted NSA backdoor?

2013-10-10 Thread Adam Back

Some may remember Bleichenbacher found a random number generator bias in
the original DSA spec, that could leak the key after some number of
signatures, depending on the circumstances.

It's described in this summary of DSA issues by Vaudenay, Evaluation
Report on DSA:

http://www.ipa.go.jp/security/enc/CRYPTREC/fy15/doc/1002_reportDSA.pdf

Bleichenbacher's attack is described in section 5.


The conclusion is: "Bleichenbacher estimates that the attack would be
practical for a non-negligible fraction of qs with a time complexity of
2^63, a space complexity of 2^40, and a collection of 2^22 signatures.  We
believe the attack can still be made more efficient."

NIST reacted by issuing special publication SP 800-xx to address it, and I
presume that was folded into FIPS 186-3.  Of course NIST is down due to
the USG political-level stupidity (why they took the extra work to switch
off the web server on the way out, I don't know).

That means 186-1 and 186-2 were vulnerable.
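The shape of the bias is easy to see with toy numbers: reducing a uniform
t-bit value mod q is only uniform when q divides 2^t.  A minimal sketch
(toy sizes, nothing from the actual FIPS spec):

```python
# Toy illustration of modulo bias: reducing a uniform t-bit value mod q
# is only uniform when q divides 2^t.  Here 2^4 = 16 is not a multiple
# of 11, so residues 0..4 are twice as likely as residues 5..10; the
# original DSA k generation had the same shape of defect at 160 bits.
q, t = 11, 4
counts = [0] * q
for v in range(2 ** t):   # enumerate all t-bit values
    counts[v % q] += 1
assert counts == [2] * 5 + [1] * 6
```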

An even older NSA sabotage spotted by Bleichenbacher?

Anyway, it highlights the significant design fragility in DSA/ECDSA: not
just in the entropy of the secret key, but in the generation of each and
every k value.  Which leads to the better (but non-NIST-recommended) idea,
adopted by various libraries and applied crypto people, to use k=H(m,d),
so that the signature is in fact deterministic, and the same k value will
only be used with the same message (which is harmless, as that's just
reissuing the bitwise-same signature).


What happens if a VM is rolled back, including the RNG, and it outputs the
same k value for a different network-dependent m value?  etc.  It's just
unnecessarily fragile in its NIST/NSA-mandated form.

Adam


[cryptography] ciphersuite revocation model? (Re: the spell is broken)

2013-10-05 Thread Adam Back
You know, part of this problem is the inability to disable dud
ciphersuites.  Maybe it's time to get pre-emptive on that issue: pair a
protocol revocation cert with a new ciphersuite.

I am reminded of the Mondex security model: it was an offline respendable
smart-card based ecash system in the UK, with backing from a few
large-name UK banks and card issuers, with little calculator-like wallets
to check a card's balance.  Secure up to tamper resistance or a
ciphersuite cryptographic break.  Not sure Mondex got very far in market
penetration beyond a few trial runs, but the ciphersuite update model is
interesting.

So their plan was to deploy ciphersuites A and B in the first card issue.
Now when someone finds a defect in ciphersuite A, issue a revocation cert
for ciphersuite A, and deploy it together with a signed update for
ciphersuite C, which you work on polishing in the background during the
life-cycle of A.  Then the cards run on ciphersuite B, with C as the
backup.  At all times there is a backup, and at no time do you run on
known defective ciphersuites.

Now the ciphersuite revocation certs are actually distributed p2p, because
Mondex is offline respendable.  If a card encounters another card that has
heard the news that ciphersuite A is dead, it refuses to use it, and
passes on the signed news.

Maybe to get the update they actually have to go online proper, after a
grace period of running only on ciphersuite B, but that's ok: it'll only
happen once in a few years.  Ciphersuite A is pretty much instantly
disabled, as the news spreads with each payment.
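The gossip mechanic can be sketched as follows, with the obvious caveats
that the real Mondex protocol details are not public, all names here are
invented, and signature checking on the revocation certs is elided:

```python
# Hedged sketch of p2p ciphersuite revocation gossip, Mondex-style.
class Card:
    def __init__(self, suites):
        self.suites = list(suites)   # ciphersuites in preference order
        self.revoked = set()         # revocation news heard so far

    def negotiate(self, other):
        # first, both cards exchange any revocation certs they carry
        merged = self.revoked | other.revoked
        self.revoked, other.revoked = merged, merged
        # then pick the most-preferred suite neither side has seen revoked
        for s in self.suites:
            if s in other.suites and s not in merged:
                return s
        return None          # no live common suite: go online for the update

a = Card(["A", "B"])
b = Card(["A", "B"])
b.revoked.add("A")                   # b has heard ciphersuite A is dead
assert a.negotiate(b) == "B"         # A is refused, B is used instead
assert "A" in a.revoked              # and a has now heard the news too
```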

Maybe something like that could work for browser ciphersuites.  


It's something related to vendor security updates, except there is a
prompting whereby each site and client you interact with starts warning
you that the clock is ticking and you have to disable a ciphersuite.  If
you persist in ignoring it, your browser or server stops working after 6
months.


Adam

On Sat, Oct 05, 2013 at 02:03:38PM +0300, ianG wrote:

On 4/10/13 10:52 AM, Peter Gutmann wrote:

Jon Callas j...@callas.org writes:


In Silent Text, we went far more to the one true ciphersuite philosophy. I
think that Iang's writings on that are brilliant.


Absolutely.  The one downside is that you then need to decide what the OTS is
going to be.  For example Mozilla (at least via Firefox) seems to think it
involves Camellia (!!!?!!?).



Thanks for those kind words, all.  Perhaps some deeper background.

When I was writing those hypotheses, I was very conscious that there 
was *no silver bullet*.  I was trying to extrapolate what we should 
do in a messy world?


We all know that too many ciphersuites is a mess.  We also know that 
only one suite is vulnerable to catastrophic failure, and two or 
three suites is vulnerable to downgrade attacks, bugs in the 
switching, and expansion attacks in committee.


A conundrum!

Perhaps worse, we know that /our work is probably good/ but we are 
too few!  We need ways to make cryptoplumbing safe for general 
software engineers, not just us.  Not just hand out received wisdom 
like use TLS or follow NIST.  If we've learnt anything recently, it 
is that standards and NISTs and similar are not always or necessarily 
the answer.


There are many bad paths.  I was trying to figure out what the best 
path among those bad paths was.  From theory, I heard no clarity, I 
saw noise.


But in history I found clues, and that is what informs those hypotheses.



If one looks at the lifecycle of suites (or algorithms, or protocols, 
or products) then one sees that typically, stuff sticks around much 
longer than we want.  Suites live way past their sell-by date.  Once 
a cryptosystem is in there, it is there to stay until way past 
embarrassing.  Old algorithms, old suites are like senile 
great-aunts: they hang around, part of the family, we can't quite see 
how to push them off, and we justify keeping them for all sorts of 
inane reasons.


Alternatively, if one looks at the history of failures, as John 
Kelsey pointed to a few days ago, one sees something surprising:  
rarely is a well-designed, state of the art cryptosuite broken.  
E.g., AES/CBC/HMAC as a suite is now a decade old, and still strong.


Where things go wrong is typically outside the closely designed 
envelope.  More, the failures are like an onion:  the outside skin is 
the UI, it's tatty before it hits the store.  Take the outer layer 
off, and the inner is quite good, but occasionally broken too.  If we 
keep peeling off the layers, our design looks better and better


Those blemished outer onion layers, those breaks, wherever they are, 
provide the next clue in the puzzle.  Not only security issues, but 
we also have many business issues, features, compliance ... all sorts 
of stuff we'd rather ignore.


E.g., I'm now adding photos to a secure datagram protocol -- oops!  
SSL took over a decade for SNI, coz it was a feature-not-bug.  
Examples abound where we've ignored wider issues because it's SOPs, 

Re: [cryptography] A question about public keys

2013-10-03 Thread Adam Back

Well, I think there are two issues:

1. if the public key is derived from a password (as in a bitcoin
brainwallet, or in EC-based PAKE systems), then if the point derived from
your password isn't on the curve, you know that is not a candidate
password, and hence you can narrow the password search for free.  (Which,
particularly for PAKE systems, weakens their security.)

2. if the software doesn't properly validate that the point is on the
curve, and tries to do an operation involving a private key or secret
anyway, it may leak some of the secret.  DJB has some slides saying he
found some software does not check.

This is what Elligator is about: a more deterministic way to hash keys to
curves.  Which is an improvement, with a more friendly curve, over the
approach used by eg http://tools.ietf.org/html/draft-harkins-tls-pwd-03,
where the way you do it is x = hash2curve(password, counter), test (x,y)
on the curve, starting the counter at 0 and repeating until the point is
on the curve.  That's bad because you have to do it lots of times to be
fairly sure it'll work; it's timing dependent, so that leaks password
entropy, etc.  It's much easier to aim for constant time with the
Elligator technique and curves.
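For concreteness, the hunt-and-peck loop looks roughly like this (toy
curve parameters, invented for illustration; the password-dependent number
of loop iterations is the timing leak):

```python
import hashlib

# Toy sketch of the hunt-and-peck hash-to-curve loop on an invented
# curve y^2 = x^3 + ax + b over F_p.  Parameters are illustrative only.
p, a, b = 97, 2, 3

def is_square(v):
    # Euler's criterion: v has a square root mod p iff v^((p-1)/2) == 1
    return v % p == 0 or pow(v, (p - 1) // 2, p) == 1

def hash_to_curve(password: bytes):
    counter = 0
    while True:   # iteration count depends on the password: timing leak
        h = hashlib.sha256(password + bytes([counter])).digest()
        x = int.from_bytes(h, "big") % p
        if is_square(x**3 + a*x + b):
            return x, counter
        counter += 1

x, tries = hash_to_curve(b"correct horse")
assert is_square(x**3 + a*x + b)   # the returned x is on the curve
```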

Adam

On Thu, Oct 03, 2013 at 02:41:30PM +0100, Michael Rogers wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 29/09/13 20:24, Nico Williams wrote:

Just because curve25519 accepts every 32-byte value as a public key
doesn't mean that every 32-byte value is a valid public key (one
resulting from applying the curve25519 operation).  The Elligator
paper discusses several methods for distinguishing valid public
keys from random.


On 30/09/13 05:55, Trevor Perrin wrote:

Phrasing this better: check that x^3 + 486662x^2 + x is a square
modulo 2^255-19


Thanks Nico and Trevor for your replies. If I understand right, you're
both pointing to the most severe distinguisher in section 1.1 of the
Elligator 2 paper.

I'm afraid I still don't understand what it means for curve25519 to
accept a string as a public key if that string isn't a valid public
key. Does it just mean that the function has a defined output for that
input, even though that output isn't cryptographically useful?

Silently accepting invalid input and producing useless output seems
like a bug rather than a feature, so I feel like I must still be
misunderstanding the real meaning of "accepts".

Cheers,
Michael

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJSTXQKAAoJEBEET9GfxSfMIJkH/jmClrIJ6kD3D/h5MMf7cvIp
BVLMmGROGwIFhIrfFZwfqEFGQzBZNpMP06BYJsyPbMRf1uLxFixIYHhSYXCcA+IJ
ZvcLMkMptNVb2xPr9jkdC3tXd47udo23Pxo8pP3uo0i265TMkdNOyY4WwJlrnCGQ
B7FDXeNXRAtNxdbfrFR2hpCd6yyVk+rqDl3AxNCQ01Slf8HmfOKtcZu7WHHwxQFZ
4ECVtlQmdcAaO8JiNdhWzyzbFW7GEEzvCdzYl3hZTqyXfXM+asGFw90K4qXKAoZS
l3S7Q5Pl7tg0KxDL6iHz0XVUMpxH31Mac09DM+dZWT9hp7PEFWiF79XzD0AGi+4=
=qqWu
-----END PGP SIGNATURE-----


Re: [cryptography] A question about public keys

2013-10-03 Thread Adam Back

On Thu, Oct 03, 2013 at 04:53:09PM +0100, Michael Rogers wrote:

Presumably if you ensure that the private key is valid, the public key
derived from it must be a point on the curve.  So it's a matter of
validating private rather than public keys.

I understand what you're saying about a timing side-channel in the
draft-harkins-tls-pwd approach, but here I'm not concerned with validating
a public key after generating it, but rather with the puzzling statement
that there's no need to validate a public key after receiving it:

How do I validate Curve25519 public keys?

Don't. The Curve25519 function was carefully designed to allow all
32-byte strings as Diffie-Hellman public keys.

http://cr.yp.to/ecdh.html#validate


Well, consider that an EC point is an (x,y) coordinate pair, both
coordinates modulo p, some 256-bit prime.  There are a number of points on
the curve, l, and for a given base point, a number of points generated by
that point, n.  For prime order curves, typically the co-factor h = 1, so
l = n.  (This is analogous to the difference between p = 2q+1 in normal
Diffie-Hellman over prime fields: there are p values, but the group
generated by g will hold q possible values before wrapping around, where q
divides p-1.)

So now back to EC, to note that n and p are almost the same size (both
256-bit, with n < p).  However, the number of potential point encodings is
in fact 2p (because X=(x,y) and -X=(x,-y) are both points on the curve),
and 2p is nearly 2x n (n the number of points on the curve).  So therefore
around half of the x values you could start from are not on the curve:
there is no solution when you try to solve for y from the curve equation.
As you see in the Harkins draft, in that type of curve it is because
x^3+ax+b has no square root.
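The "around half" claim is easy to check empirically on a toy curve
(parameters invented; Euler's criterion tests whether the right-hand side
has a square root):

```python
# Empirical check: roughly half of all x values have a corresponding y
# on an invented toy curve y^2 = x^3 + ax + b over F_p.
p, a, b = 10007, 5, 7

def has_root(x):
    # is there any y with y^2 = x^3 + ax + b, i.e. is the RHS a square?
    rhs = (x**3 + a*x + b) % p
    return rhs == 0 or pow(rhs, (p - 1) // 2, p) == 1

valid = sum(has_root(x) for x in range(p))
assert 0.45 < valid / p < 0.55   # about half the x values lie on the curve
```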

So, you know about point compression?  Basically you can almost recover a
point from the x-coord only, because each point is (x,f(x)), but with one
caveat: because it's a symmetric curve about the x-axis, you also need to
say whether that's f(x) or -f(x).  So people would send the x-coord and
one bit for the sign.

It's getting kind of complicated, but it seems to me DJB ensures all
32-byte encoded points are points by a kind of definitional trick: a point
is not the x-coord and sign bit, but the number x multiplied by a base
point (9,f(9)), which of course is a valid point because (9,f(9)) is a
generator.

It's an equally valid encoding method, and is probably faster.

However, if you call (9,f(9)) = G, you can't use DJB's point encoding when
you're doing PAKE, because, well, for password attempt pwd1 you get
pwd1.G, and then you do some DH thing with it and send it to the other
side; they compute Y=x.pwd1.G and send it back.  That's bad, because you
can revise pwd1 to pwd2 by division:
pwd2.pwd1^-1.Y = pwd2.pwd1^-1.x.pwd1.G = x.pwd2.G, and then you can
offline grind the PAKE key exchange, which is bad.
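The algebra of that offline grind can be checked in a toy scalar model
(integers mod a prime standing in for scalar multiples of the fixed
generator G; all numbers invented):

```python
# Toy scalar model of the PAKE grinding attack: scalar multiples of the
# fixed generator G compose, modeled here as multiplication mod a prime
# n standing in for the group order.
n = 101                        # toy group order
pwd1, pwd2, x = 17, 23, 55     # first guess, next guess, peer's secret
Y = (x * pwd1) % n             # peer's reply to the pwd1 attempt: x.pwd1.G
# attacker rewrites the pwd1 transcript into a pwd2 transcript offline:
Y2 = (pwd2 * pow(pwd1, -1, n) * Y) % n
assert Y2 == (x * pwd2) % n    # pwd2.pwd1^-1.Y = x.pwd2.G: grindable
```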

So for that you really need to start from x = H(password) and point
G=(x,f(x)), and that is what the hunt-and-peck algorithm in Harkins is
about.  DJB has a solution for that too, in his Elligator paper, which is
constant time (as I recall, worst case two tries), because he can use a
different method relating to the more flexible and more efficient curves
he chose.

Is it just me, or could we do better replacing NIST by DJB? ;)  He can do
the EC crypto, and constant-time coding (NaCl), and non-hackable mail
servers (qmail), and worst-case-time databases (cdb).  Most people in the
world look like rank amateurs, or niche-bound math geeks with no real
programming understanding, compared to DJB!

Adam


-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

On 03/10/13 15:14, Adam Back wrote:

Well I think there are two issues:

1. if the public key is derived from a password (like a bitcoin
brainwallet), or as in EC based PAKE systems) then if the point
derived from your password isnt on the curve, then you know that
is not a candidate password, hence you can for free narrow the
password search.  (Which particularly for PAKE systems weakens
their security).



2. if the software doesnt properly validate that the point is on
the curve, and tries to do an operation involving a private key or
secret, anyway, it may leak some of the secret.  DJB has some
slides saying he found some software does not check.


Hmm, so perhaps the statement quoted above simply means Curve25519
contains its own key validation code, and will complain if the string
you give it isn't a valid public key?

Cheers,
Michael
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.10 (GNU/Linux)

iQEcBAEBAgAGBQJSTZLlAAoJEBEET9GfxSfM4dMH/jo83Jse5V6DqnZwaIkNesLY
AufH8+amkMALbO8Db7r/sG+cGXMy8sSRWqPTJ0jXd3z4ZAgKbx3aW2eBEmIU9i3Y
K0jkABVJty3XyvPAspoCUwZ+Fh7brUSCRHQJt0MWMQADPdoXJUY+iobmCGgO4Qbk
+npDlo3pTNeEofsvkEM3uSPofR88JXvMC1sYhGr4+GMsBt330vG2Zd278AlVTlOb
fVpwEtlad5Fb58RfGidMb4n7BUKKmkPI3KJewpJEXfc8CMP1ITsmX8hTzIz0wakz
ubjwDu7ENUMkZhfkt4qNpTLeWQBOFrrfUDe9qrlTY5GpbNfy295K/aWMvi65c6g=
=sxPV
-----END PGP SIGNATURE-----


[cryptography] replacing passwords with keys is not so hard (Re: PBKDF2 + current GPU or ASIC farms = game over for passwords)

2013-10-01 Thread Adam Back

On Mon, Sep 30, 2013 at 10:00:12PM -0400, d...@geer.org wrote:

Dr Adam Back wrote:
 PBKDF2 + current GPU or ASIC farms = game over for passwords.

Before discarding passwords as yesterday's fish, glance at this:

http://www.wired.com/opinion/2013/09/the-unexpected-result-of-fingerprint-authentication-that-you-cant-take-the-fifth


Well, OK: switching to physical fingerprints (fingerprint reader, iPhone,
etc) is actually a step backwards, or only usable as a second factor.  I
imagine people have seen the gummi bear attacks, and someone already
cracked the iPhone fingerprint reader using a photograph of a print and
some post-processing; and fingerprints can be stolen.  And Lucky has some
gruesome, alternatively low-tech version also, which doesn't bear thinking
about.  Fingerprints are a bad idea for those multiple reasons (stealable,
non-secret (the 5th amendment argument in the article), no secure
challenge-response possibility, left around via latent prints, lead to
gruesome risks where you'd sooner give up the password if rubber-hosed
than have...)

The point is rather to switch to keys.  I was resisting referencing it (as
it's impolite to point at your own designs with commercial backing (*)) but I
guess it needs spelling out that yes you can do this, and yes it can be easy
to use and secure.  Check out oneid.com.  The federation server stores
password verifiers that are not grindable via theft, needing simultaneous
compromise of the account holder's smartphone/laptop (split keys).  The
smartphone/laptop has encrypted keys, with encryption that is also not
offline grindable without simultaneously compromising the server verifier
(more split keys).  Devices have unique keys and so can be revoked offline
if stolen.  Security is end to end between the client and the relying party
(oneid or any other party running the federation server can't even tell
which relying parties users are enrolled with, nor logging into).
Stolen/broken devices can be replaced via secure pairing with remaining
devices.  Simultaneous theft of all devices is coped with via a recovery
code, or re-enrollment with a new identity (and new relying-party account
re-association via the respective relying party's enrollment process) if the
user ignores the recovery code setup.
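To illustrate the split-key idea above, here is a minimal hypothetical
sketch (NOT oneid's actual protocol; the function names and parameters are
invented): the key protecting the device's private key is derived from both
the local password and a per-user server secret, so neither a stolen device
nor a stolen server database alone permits offline password grinding.

```python
import hashlib, hmac, os

# Hypothetical illustration of the split-key idea (not oneid's real
# protocol): the unlock key requires BOTH a local password and a
# per-user secret held by the server, so grinding candidate passwords
# offline requires compromising both halves simultaneously.

def derive_unlock_key(password: bytes, server_secret: bytes) -> bytes:
    # mix the server contribution in as an HMAC key to form the salt;
    # without server_secret an attacker cannot even test a password guess
    salt = hmac.new(server_secret, b"unlock-salt", hashlib.sha256).digest()
    return hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)

server_secret = os.urandom(32)          # held server-side, per user
k1 = derive_unlock_key(b"hunter2", server_secret)
k2 = derive_unlock_key(b"hunter2", server_secret)
k3 = derive_unlock_key(b"hunter2", os.urandom(32))  # wrong server half

assert k1 == k2   # password + correct server half reproduces the key
assert k1 != k3   # without the server half the key is unreachable
```

An attacker holding only the device's encrypted key store must recover the
256-bit server secret as well as the password, which is what defeats
offline grinding.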


There is still login & transaction security in the system if the PC has
malware, the attacker has root on the federation server, the attacker has
all of your PINs and passwords (that protect device private keys), and the
attacker has remote compromise but not code modification ability on the
relying party, just so long as you have your smartphone, without targeted
malware, in your control.  That could and should be extended with a key
contribution from the smartphone SIM or TPM trusted-agent once hardware
catches up.

It's easy to use: just read the transaction confirmation on your smartphone
and click a button, that's the user experience.  Even if the laptop is
compromised by malware targeting your transaction (eg say online bitcoin
wallet auth) the worst it can do is block your transaction, presuming you
actually carefully read the transaction before approving it on your
smartphone screen.

Adam

(*) historically I designed their crypto protocols as a consultant but I
have no financial stake.  oneid is Khosla Ventures funded; the CEO is Steve
Kirsch, a serial entrepreneur with over $1b of previous company exits to his
name.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [Cryptography] are ECDSA curves provably not cooked? (Re: RSA equivalent key length/strength)

2013-10-01 Thread Adam Back

On Tue, Oct 01, 2013 at 08:47:49AM -0700, Tony Arcieri wrote:

  On Tue, Oct 1, 2013 at 3:08 AM, Adam Back [1]a...@cypherspace.org
  wrote:

But I do think it is a very interesting and pressing research question
as to whether there are ways to plausibly deniably symmetrically
weaken or even trapdoor weaken DL curve parameters, when the seeds are
allowed to look random as the DSA FIPS 186-3 ones do.



  See slide #28 in this djb deck:
  If e.g. the NSA knew of an entire class of weak curves, they could
  perform a brute force search with random looking seeds, continuing
  until the curve parameters, after the seed is run through SHA1, fall
  into the class that's known to be weak to them.


Right, but weak-parameter arguments are very dangerous: the US national
infrastructure they're supposed to be protecting could be weakened when
someone else finds the weakness.  Algorithmic weaknesses can't be hidden
with confidence; how do they know other countries' defense research agencies
aren't also sitting on the same weakness, even before they found it?  That's
a strong disincentive.  Though if it's a well-defined partial weakening they
might go with it; eg historically they explicitly had a go, in public, at
requiring use of eg differential cryptography where some of the key bits
of Lotus Notes were encrypted to the NSA public key (which I have as a
reverse-engineering trophy here [1]).  Like for example they don't really
want foreign infrastructure to have more than 80 bits, or something close to
the edge of strength, and they're willing to tolerate that on US
infrastructure also.  Somewhat plausible.

But the more interesting question I was referring to is a trapdoor weakness
with a weak proof of fairness (ie a fairness that looks like the one in FIPS
186-3/ECDSA, where we don't know how much grinding, if any, went into the
magic seed values).  For illustration (though not applicable to ECDSA and
probably outright defective): eg can they start with some large number of
candidate G values where G=xH (ie knowing the EC discrete log of some value
H they pass off as a random, fairly chosen point) and then do a birthday
collision between the selection of G values and different seed values to a
PRNG, to find a G value for which they have both a discrete log wrt H and a
PRNG seed?  Bearing in mind they may be willing to throw custom ASIC or FPGA
supercomputer hardware and a $1bil budget at the problem as a one-off cost.
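The simpler version of the concern (the djb slide referenced above) can be
sketched in a few lines: grind random seeds until the derived parameters
land in a class the attacker privately knows to be weak.  The `is_weak`
predicate below is a made-up stand-in, and real curve derivation from a seed
is far more involved; the point is only that a published seed can "look
random" while still having been searched for.

```python
import hashlib, os

def is_weak(params: bytes) -> bool:
    # Hypothetical weak-class test; a real attacker's predicate would
    # encode secret cryptanalytic knowledge.  This is just a placeholder.
    return params[-1] == 0

def grind_seed():
    # Keep drawing random seeds until SHA-1(seed) falls in the "weak"
    # class.  The winning seed is indistinguishable from an honest one.
    while True:
        seed = os.urandom(20)
        params = hashlib.sha1(seed).digest()  # stand-in for parameter derivation
        if is_weak(params):
            return seed, params

seed, params = grind_seed()
assert is_weak(params)  # seed "verifies" publicly, yet was ground for
```

With a weak class of density 1/256 as here, the search takes a few hundred
hashes; even a 2^-40 density class is trivial for dedicated hardware, which
is the force of the $1bil-budget remark.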

Adam

[1] http://www.cypherspace.org/adam/hacks/lotus-nsa-key.html


Re: [cryptography] replacing passwords with keys is not so hard (Re: PBKDF2 + current GPU or ASIC farms = game over for passwords)

2013-10-01 Thread Adam Back

On Tue, Oct 01, 2013 at 10:25:10AM -0700, coderman wrote:

On Tue, Oct 1, 2013 at 2:12 AM, Adam Back a...@cypherspace.org wrote:

... And Lucky has some gruesome
alternatively low tech version also which doesnt bear thinking about.


i'm curious about defeating the liveness detection of fingerprint
readers using a severed digit.  or is non-trivial liveness detection
only a feature in the most expensive readers?


Hey, that was the unmentionable part!  But surely it must be true: if a
moistened gummy bear can do the trick, surely a finger without blood flow
can.  (Eww, thanks for that.)  Most of these biometrics seem pretty stupid.
There was one where they printed out a colour photo of the person and waved
it in front of the camera to give an impression of motion for facial
recognition.  It's probably a basic factor of the noise rate in the data,
the limits of recognition, and the tolerable false-negative rate =
biometrics are either insecure or unreliable, at least for the mid term.


But also you mention expensive biometric readers: some of the rubber
facemasks are damn convincing to the human eye (eg if you watched the Bryan
Cranston Breaking Bad thing on Jimmy Fallon, skip to 5:25
http://www.youtube.com/watch?v=XEQk1_F7sL0); surely that can fool better
facial readers than waving an inkjet-printed photo.  Probably similar for
fingers: readers can improve, but your adversary can also make
better-than-prank gummy-bear fake fingers too, if that's your threat model.

Biometrics - stupid idea IMO.

Adam


[cryptography] oneid single-sign on stuff (Re: PBKDF2 + current GPU or ASIC farms = game over for passwords (Re: TLS2))

2013-10-01 Thread Adam Back

I didn't see your comment lower down before (quoting got too long).

On Mon, Sep 30, 2013 at 07:41:20PM +0100, Wasa wrote:

[oneid]
i like the idea. Any issue/complications with re-provisioning or 
multiple devices with same identity?


If you lose one device you can replace it: enrol the new device,
authenticated by your other devices, and securely restore access keys into
it.  It does support multiple devices, though each device also has a unique
key as well as a common identity key, so that individual devices can be
revoked instantly if they are stolen.  It uses some fun crypto like a blind
MAC to do split-key KDF and AKE type protocols.



so I wonder:
- with no server [correction: password], what would be the user perception
of security?  Say, you log into your online banking with no password; would
users feel secure and use the service?


There would still typically be a password or a PIN, so that perception is
still there somewhat.  But the password is used locally only (together with
a server input to prevent offline grinding to decrypt the private key in the
event of a stolen device).  The key derived from the password/PIN and server
input is used to decrypt the device private key.  The device private key
makes a signature as part of an end-to-end secure challenge-response with
the relying site.
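The flow just described can be sketched roughly as follows.  This is an
illustrative sketch with invented names, not oneid's real protocol; a real
deployment would use a public-key signature that the relying party verifies
with the device's public key, and HMAC here is only a symmetric stand-in so
the sketch runs on the standard library alone.

```python
import hashlib, hmac, os

# Hypothetical sketch of: unlock device key with PIN + server input,
# then answer a relying party's challenge.  HMAC stands in for a real
# public-key signature.

def unlock_device_key(pin: bytes, server_input: bytes, wrapped_key: bytes) -> bytes:
    # The PIN alone is not grindable offline: the server input is needed
    kek = hashlib.pbkdf2_hmac("sha256", pin, server_input, 50_000)
    return bytes(a ^ b for a, b in zip(wrapped_key, kek))  # toy "decrypt"

device_key = os.urandom(32)             # device private key (stand-in)
server_input = os.urandom(16)           # per-user server contribution
kek = hashlib.pbkdf2_hmac("sha256", b"1234", server_input, 50_000)
wrapped = bytes(a ^ b for a, b in zip(device_key, kek))  # stored on device

# relying party issues a fresh challenge; device unlocks and "signs" it
challenge = os.urandom(16)
key = unlock_device_key(b"1234", server_input, wrapped)
response = hmac.new(key, challenge, hashlib.sha256).digest()

# relying party check (same computation with the symmetric stand-in)
expected = hmac.new(device_key, challenge, hashlib.sha256).digest()
assert hmac.compare_digest(response, expected)
```

The point of the structure is that the PIN is never usable for offline
grinding without the server input, and the server never sees the device
private key.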

and would that prevent them from using such systems?


There is a bootstrap problem: oneid has to gather some relying parties and
persuade them to integrate the system.  They have integrated with some
commerce and banking platforms and some partners.  You could imagine it
being like other third-party integration, where a bank might also use their
own info on top (account number etc), though that can be handled by oneid
(it supports encrypted relying-party-specific account info and form
filling).

- would Facebook/twitter and co. like this sort of security where 
users cannot log-in from anywhere from any device? Surely they prefer 
a bit less security on client side + more ML detection server-side 
and more users connected; right?


In principle, as I understand it, user account theft, users locking
themselves out by forgetting passwords, and resets cost big players like
facebook/twitter/paypal etc big bucks, like millions annually.  Security
cleanup after the inevitable server compromise and hashed-password db theft
can't be a lot of fun either, plus the negative PR if it comes out publicly.
Real disservice to the users too, who often use the same or similar
passwords on multiple sites.  So you would think there is a business case to
improve things.


Similarly, password management is a pain and people leave new sites by
clicking back when faced with yet another please-sign-up.  You see it now in
some sites starting to let you login with unrelated twitter or facebook
accounts, just to save their users the pain of managing yet another password
from an even unwanted relationship.  (All you want to do is read the news
article or report!)  So then there should be an interest for the company to
allow users to authenticate.  The uptake of twitter/facebook in that role
bears that out.  But it is also not very private: I don't know what facebook
are doing, but they're probably prone to blog about your login or enrollment
on your facebook wall if you're not careful.  They certainly know where you
logged in and where you enrolled.  (Which oneid takes pains to prevent
itself from learning, as all relying-party traffic is direct from user to
relying party.)

- I guess the sort of service you describe is great for large 
companies; but the complexity might put off smaller ones. Thoughts?


Well, from a user point of view it's very simple.  The site integration is
supposedly rather easy also, though I know less about that; see
https://developer.oneid.com/ for the SDKs and APIs.


- how different is the client cert approach compared to token used on 
mobile devices (e.g. Google stores a unique token on smartphones to 
access google services and hence requires no passwd)? Essentially it 
too removes phishing problems, but seems more flexible. 


Google device auth is on the better end of things, but oneid is a
generalization of that concept, because it's federated in a way where you
don't have to trust the federation server operator, and yet you can use the
same device to login to multiple mutually distrusting services operated by
different relying parties.  Also oneid has server split keys, so you cannot
access the device key without the PIN/password, and 3 wrong guesses locks
your device, as enforced by the server.  The server also can't access your
device private key; you would need the device and the server db (eg a stolen
copy or subpoena of the server factor for a given user), plus then to grind
the actual password.  I'm not sure if google would track the login if used
by a third party; I guess they would want to, because they like to track and
record everything for posterity in case they can squeeze some latent ad
revenue out of it.  (No evil and all, but that's a big database for

[cryptography] more oneid stuff 2-factor when smartphone offline scenarios (Re: replacing passwords with keys is not so hard (Re: PBKDF2 + current GPU or ASIC farms = game over for passwords))

2013-10-01 Thread Adam Back

On Tue, Oct 01, 2013 at 04:28:01PM -0400, Jonathan Thornburg wrote:

On Tue, 1 Oct 2013, Adam Back wrote:

The point is rather to switch to keys.  Check out oneid.com.  [[...]]

Its easy to use, just read the transaction confirmation on your smart phone
and click a button, thats the user experience.  [[...]]


How do I use this if I'm somewhere with no cellphone reception?


If you're offline it won't work, because you won't be able to obtain the
server contribution to go with the password you know, for the KDF to unlock
your smartphone private key.  (Technically the key could be cached, but you
probably don't want to make that too lax or your device could be stolen in
an unlocked state.)

But the use case is single sign-on to an online service, so that's probably
not a (new) problem.  Not online = no access to the services you need web
single sign-on for.  If you're online on your laptop via wifi you can
probably (but not 100%) get online with your smartphone.  (The gap being the
very odd wretched hotel that tries to charge for wifi by number of devices.)

There's no particular support from oneid, but in principle you could reverse
tether your smartphone to your laptop; the setup for that is not convenient
for novice users though, and many wifi chipsets can't be a hotspot and
client simultaneously.  I guess there's still bluetooth reverse tether, but
again not easy to set up for novice users, and probably the reverse
direction to what handset and computer OSes aim to support.


How do I use this if my cellphone just broke down?


You get a replacement cellphone and pair it to your account.  It is possible
to choose to login to sites with a lower security margin, by flagging them
to be allowed to login with the laptop only (making the smartphone
unnecessary).  However, that is vulnerable to malware, and in oneid the
relying party can insist on the smartphone level of security (and they can
tell if their policy was applied, as there are 2 signatures in the challenge
response rather than 3).  You can also pair multiple smartphones/tablets to
your account, eg use your tablet or your partner's smartphone if travelling
together.  I guess most people don't carry two smartphones, but smartphone +
tablet + laptop is maybe not that rare.

But otherwise I think for high security it's the price you pay.  You don't
want targeted malware on your laptop to empty your bitcoin web wallet, so
you have to tolerate the 2nd factor.  It's more useful than an OTP keyfob
(SecurID and clones) because you can see the transaction details you are
authorizing.  OTP keyfobs can be repurposed by laptop malware to authorize
something different from what you think you are entering the code for.

Try locking yourself out of your online banking while travelling by
forgetting a password.  An international cell call to their online support
etc is not much fun either.  The alternative has its failure modes as well
as being significantly less secure.


You'd wonder if oneid would be amenable to trying to be extremely open,
making a reference implementation and an open standard like openid, if
people thought the idea was a net improvement.  That could be one way to
overcome selfish identity-ownership thinking amongst relying parties.  And
it is also a fair concern for individual web developers what happens to
their login mechanism if oneid went out of business.  The model is actually
open, in that anyone can run a federation server, analogous in that sense to
openid.

Adam


Re: [cryptography] [Cryptography] TLS2

2013-09-30 Thread Adam Back

On Mon, Sep 30, 2013 at 11:49:49AM +0300, ianG wrote:

On 30/09/13 11:02 AM, Adam Back wrote:

no ASN.1, and no X.509 [...], encrypt and then MAC only, no non-forward
secret ciphersuites, no baked in key length limits [...] support
soft-hosting [...] Add TOFO for self-signed keys.  


Personally, I'd do it over UDP (and swing for an IP allocation).  


I think the lack of soft-hosting support in TLS was a mistake: it's another
reason not to turn on SSL (IPv4 addresses are scarce and can only host one
SSL domain per IP#; that means it costs more, or a small hosting company can
only host a limited number of domains, and so has to charge more for SSL);
and I don't see why it's a cost worth avoiding to include the domain in the
client hello.  There's an RFC for how to retrofit soft-host support via the
client hello into TLS, but it's not deployed AFAIK.

The other approach is to bump up security, ie start with HTTP then switch to
TLS; however, that is generally a bad direction, as it invites attacks on
the unauthenticated destination redirected to.  I know there is also another
direction, to indicate via certification that a domain should be TLS-only,
but as a friend of mine was saying 10 years ago, it's past time to deprecate
HTTP in favor of TLS.

Both client and server must have a PP key pair.  


Well, clearly passwords are bad and near the end of their lifetime with GPU
advances, and even amplified password-authenticated key exchanges like EKE
have a (so far) unavoidable design requirement to have the server store
something offline-grindable, which could be key-stretched, but that's it.
PBKDF2 + current GPU or ASIC farms = game over for passwords.


However, whether it's password based or challenge-response based, I think we
ought to address the phish problem, which is after all what EKE was designed
for (in 1992 (EKE) and 1993 (password-augmented EKE)).  Maybe as it's been
20 years we might actually do it.  (Seems to be the general rule of thumb
for must-use crypto inventions that it takes 20 years until the security
software industry even tries.)  Of course patents only slow it down.  And
coincidentally the original AKE patent expired last month.  (And I somehow
doubt Lucent, the holder, got any licensing revenue worth speaking about
between 1993 and now.)

By pinning the EKE or AKE to the domain, I mean that there should be no MITM
that can repurpose a challenge based on a phish at telecon.com to
telecom.com, because the browser enforces that the EKE/AKE challenge
response includes the domain connected to, combined in a non-malleable way
into the response.  (EKE/AKE are anyway immune to offline grinding of the
exchanged messages.)
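The domain-binding idea can be sketched as follows (illustrative only, not a
real EKE/AKE implementation; the names are invented): the browser mixes the
domain it actually connected to into the challenge response in a
non-malleable way, so a response harvested by a phish at telecon.com is
useless at telecom.com.

```python
import hashlib, hmac, os

# Sketch of domain-pinned challenge response: the authenticator covers
# (domain, challenge), so the same key and challenge yield different
# responses for look-alike domains.

def auth_response(shared_key: bytes, domain: bytes, challenge: bytes) -> bytes:
    # length-prefix the domain so domain||challenge splits can't be
    # shifted between fields (non-malleable framing)
    msg = len(domain).to_bytes(2, "big") + domain + challenge
    return hmac.new(shared_key, msg, hashlib.sha256).digest()

k = os.urandom(32)          # key from the EKE/AKE exchange
c = os.urandom(16)          # server challenge
good = auth_response(k, b"telecom.com", c)
phish = auth_response(k, b"telecon.com", c)
assert good != phish        # a relayed phish response fails verification
```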


Clearly you also want to tie that back to the domain's TLS auth key;
otherwise you just invite DNS exploits, which are trivial via ARP poisoning,
DNS cache poisoning, TCP/UDP session hijack etc, depending on the network
scenario.

And the browser vendors need, in the case of passwords/AKE, to include a
secure UI that cannot be indistinguishably pasted over by carefully aligned
javascript popups.

(The other defense, SecurID and its clones, can also help prop up
AKE/passwords.)


Both, used every time to start the session, both sides authenticating each
other at the key level.  Any question of certificates is kicked out to a
higher application layer with key-based identities established.


While certs are a complexity it would be nice to avoid, I think that
reference to something external and bloated can be a problem, as then, like
now, you pollute an otherwise clean standard (nice simple BNF definition)
with something monstrous like ASN.1 and X.500 naming via X.509.  Maybe you
could profile something like OpenPGP though (it has its own crappy legacy:
they're onto v5 key formats by now, and some of the earlier versions have
their own problems, eg fingerprint ambiguity arising from ambiguous
encoding, and other issues, including too many variants and extra
mandatory/optional extensions).  Of course the issue with rejecting formats
below a certain level is that the WoT is shrunk, and anyway the WoT is also
not that widely used outside of operational security/crypto industry
circles.  That second argument may push more towards SSH-format keys, which
are by comparison extremely simple, and there was recent talk of introducing
simple certification for them as I recall.

Adam


[cryptography] three crypto lists - why and which

2013-09-30 Thread Adam Back

I am not sure if everyone is aware that there is also an unmoderated crypto
list, because I see old familiar names posting on the moderated crypto list
that I do not see posting on the unmoderated list.  The unmoderated list has
been running continuously (new posts every day with no gaps) since Mar 2010,
with interesting, relatively low-noise, non-firehose volume.

http://lists.randombit.net/mailman/listinfo/cryptography

The actual reason for the creation of that list was that Perry's list went
through a hiatus when Perry stopped approving/forwarding posts, eg

http://www.mail-archive.com/cryptography@metzdowd.com/

originally Nov 2009 - Mar 2010 (I presume the Mar 2010 restart was motivated
by the creation of the randombit list starting in the same month), but more
recently a Sep 2010 to May 2013 gap (minus traffic in Aug 2011).

http://www.metzdowd.com/pipermail/cryptography/

I have no desire to pry into Perry's personal circumstances as to why this
huge gap happened, and he should be thanked for the significant moderation
effort he has put into creating this low-noise environment; but despite
that, it is bad for cryptography if people's means of technical interaction
spuriously stops.  Perry mentioned recently that he now has backup
moderators; OK, so good.

There is now also the cypherpunks list, which has picked up, and covers a
wider mix of topics: censorship-resistant technology ideas, forays into
ideology etc.  Moderation is even lower than randombit, but there's no spam;
noise is slightly higher but quite reasonable so far.  And there is now a
domain name that is not al-quaeda.net (seriously?  is that even funny?):
cpunks.org.

https://cpunks.org/pipermail/cypherpunks/


At least I enjoy it, and I see some familiar names posting that were last
seen a decade+ ago.

Anyway, my reason for posting was threefold: a) make people aware of the
randombit crypto list, b) the rebooted cypherpunks list (*), but c) about
how to use randombit (unmoderated) and metzdowd.


For my tastes, sometimes Perry will cut off a discussion that I thought was
just warming up, because I wanted to get into the detail, so I tend to
prefer the unmoderated list.  But it's kind of a weird situation, because
there are people I want views and comments from who are on the metzdowd list
and who as far as I know are not on the randombit list, and there's no
convenient way to migrate a conversation other than everyone subscribing to
both.  Cc to both perhaps works somewhat; I do that sometimes, though as a
general principle it can be annoying when people Cc too many lists.

Anyway thanks for your attention, back to the unmoderated (or moderated)
discussion!

Adam


[cryptography] PBKDF2 + current GPU or ASIC farms = game over for passwords (Re: TLS2)

2013-09-30 Thread Adam Back

On Mon, Sep 30, 2013 at 02:34:27PM +0100, Wasa wrote:

On 30/09/13 10:47, Adam Back wrote:

Well clearly passwords are bad and near the end of their life-time with
GPU advances, and even amplified password authenticated key exchanges like
EKE have a (so far) unavoidable design requirement to have the server
store something offline grindable, which could be key stretched, but thats
it.  PBKDF2 + current GPU or ASIC farms = game over for passwords.


what about stronger pwd-based key exchange like SRP and JPAKE?


What I mean there is that a so-far unavoidable aspect of the AKE design
pattern is that the server holds a verifier v = PBKDF2(count, salt,
password), so the server if hostile, or of even more concern an attacker who
steals the whole database of user verifiers from the server, can grind
passwords against it.  There is a new such server hashed-password db attack
disclosed, or hushed up, every few weeks.


Passwords don't scale up and are very inconvenient, but are you sure your
argument PBKDF2 + current GPU or ASIC farms = game over for passwords
really holds?  what about scrypt?  And theoretically, you can always
increase the number of rounds in the hash...  I refer to this link too:
http://www.lightbluetouchpaper.org/2013/01/17/moores-law-wont-kill-passwords/


You know GPUs are pretty good at computing scrypt.  Eg look at litecoin
(bitcoin with hashcash mining changed to scrypt mining; people use GPUs for
~10x speedup over CPUs).  Litecoin was originally proposed, as I understood
it, to be more efficient on CPU than GPU, so that people could CPU mine and
GPU mine without competing for resources, but they chose a 128kB memory
consumption parameter, and it transpired that GPUs can compute on that
memory size fine (any decent GPU has > 1GB of RAM and a quite nice caching
hierarchy).  Clearly it's desirable to have modest memory usage on a CPU,
for if it fills L3 cache the CPU will slow down significantly for other
applications.  Even 128kB is going to fill L1 and half of L2, which has to
cost generic performance.  Anyway, in the bitcoin context that
coincidentally was fine, because then FPGAs & ASICs became the only way to
profitably mine hashcash-based bitcoin, and so GPUs were freed up to mine
scrypt-based litecoin.  Also for bitcoin purposes higher-memory scrypt
parameters increase the validation phase (where all full nodes check all
hashes and signatures; a double SHA256 is a lot faster than an scrypt at
even 128KB, and changing that to eg 128MB would only make it worse).
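To put numbers on the memory figures above: scrypt's working buffer is
roughly 128*N*r bytes, so litecoin-style settings (N=1024, r=1) need only
~128kB, while a setting in the 128MB range needs N around 2^20.  A quick
illustration using Python's OpenSSL-backed hashlib.scrypt (the specific
password/salt values are only examples):

```python
import hashlib

def scrypt_mem_bytes(n: int, r: int) -> int:
    # scrypt's big V array is N blocks of 128*r bytes each
    return 128 * n * r

assert scrypt_mem_bytes(1024, 1) == 128 * 1024    # ~128 kB, litecoin-like
assert scrypt_mem_bytes(2**20, 1) == 128 * 2**20  # ~128 MB

# A derivation at the small (GPU-friendly) setting; maxmem must cover
# the 128*n*r working buffer plus overhead.
key = hashlib.scrypt(b"correct horse", salt=b"NaCl", n=1024, r=1, p=1,
                     maxmem=32 * 1024 * 1024, dklen=32)
assert len(key) == 32
```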

Also, the PBKDF2 / scrypt happens on the client side: how do you think your
ARM-powered smartphone will compare to a 9x 4096-core GPU monster?  Not
well :)

So yes, I stand by that.  One could use higher-memory scrypt parameters, and
the claim for memory-bound functions goes that memory IO is less dissimilar
between classes of machines than CPU speed (smartphone vs GPU) is.  However,
you have to bear in mind also that scrypt actually has CPU/memory tradeoffs,
known about and acknowledged by its designer.

I believe it's relatively easy to construct a tweaked scrypt that doesn't
have this problem.

Also, on the bitcoin/litecoin side of things, I heard a rumor that there
were people working on a litecoin ASIC.  Bitcoin FTW in terms of proving the
vulnerability of password-focussed crypto KDFs to ASIC hardware.  The scrypt
time-memory tradeoff issue may be useful for efficient scrypt ASIC design.

But there is a caveat, which is that the client/server imbalance is related
to the difference in CPU power between mobile devices and server GPU or ASIC
farms.  While it is true that Moore's law seems to have slowed down in terms
of clock rates and serial performance, the number of cores and memory
architectures are still moving forward, and for ASICs, density, clock rates
and energy efficiency are increasing, and that's what counts for password
cracking.  But yes, the longer-term picture depends on the trend of the
ratio between server GPU/ASIC performance and mobile CPU.  Another factor
also is that mobiles are more elastic (variable clock, more cores), but to
get full perf you end up with power drain, and people don't thank you for
draining their phone battery.  It is possible for ARM eg to include scrypt
or a new ASIC-unfriendly password KDF on the die, perhaps, if there was
enough interest.  The ready availability of cloud is another dynamic: you
don't even have to own the GPU farm to use it.  You can rent it by the hour
or even minute, or use paid password-cracking services (with some disclaimer
that it had better be for an account owned by you).

Anyway, and all that because we are seemingly allergic to using client-side
keys, which kill the password problem dead.  For people with smartphones to
hand all the time, eg something like oneid.com (*) can avoid passwords
(split keys between smartphone and server, nothing on the server to grind;
even a stolen smartphone can't have its encrypted key store offline-ground
to recover encrypted private keys (they are also split, so you need

Re: [cryptography] PBKDF2 + current GPU or ASIC farms = game over for passwords (Re: TLS2)

2013-09-30 Thread Adam Back

On Mon, Sep 30, 2013 at 06:52:47PM +0100, Wasa wrote:

Also the PBKDF2 / scrypt happens on the client side - how do you think
your ARM powered smart phone will compare to a 9x 4096 core GPU monster. 
Not well :)


How much would it help to delegate PBKDF2 / scrypt to smartphone GPU to
break this asymmetry?


It might help a little in the right direction, and so probably should be
done; but I presume phone GPUs won't have that many cores nor high enough
performance to compare to an AMD 7990 (4096x 1GHz cores), even if they
outperform the phone CPU by a reasonable margin.


since SRP and JPAKE use exponent_modulo sort of computation rather than a
hash, any idea how this impacts attackers?  how well can you paralellize a
dictionary brute force for DL problem?  I'm not expert so glad to hear
about it.


The A part of AKE, password amplification, means that you can't break it via
the DL stuff.  The password only authenticates the Diffie-Hellman-like key
exchange, so it is just there to prevent MITM.  You still have a full
2048-bit DL or 256-bit ECDL to attack, and that is hopeless.

The only attack is on the PBKDF2 verifier stored on the server (or malware
to grab the password on the client).


Anyway and all that because we are seemingly alergic to using client side
keys which kill the password problem dead.  For people with smart phones
to hand all the time eg something like oneid.com (*) can avoid passwords
(split keys between smart phone and server, nothing on server to grind,
even stolen smart phone cant have its encrypted key store offline ground
to recover encrypted private keys (they are also split so you need to
compromise the server and the client simultaneously).  Also the user can
lock the account at any time in event of theft or device loss.


i like the idea. Any issue/complications with re-provisioning or 
multiple devices with same identity?


If you lose one device you can replace it: enrol the new device,
authenticated by your other devices, and securely restore access keys into
it.  It does support multiple devices, though each device also has a unique
key as well as a common identity key, so that individual devices can be
revoked instantly if they are stolen.  It uses some fun crypto like a blind
MAC to do split-key KDF and AKE type protocols.

They also have a recovery mechanism, I think, for if you simultaneously lose
all your devices (laptop and smartphone): the user prints out a 128-bit key
on registration.  If the user doesn't do that, they have to re-enrol as a
new identity with oneid and the relying parties.

I was thinking it could be a good tech for access to bitcoin online wallets
and exchanges, because the transaction details are also displayed for
approval on the smartphone, so even if the laptop had bitcoin-related
password-targeting malware on it, you could still securely transact if you
read the transaction details on the smartphone before approving.

Adam


Re: [cryptography] PBKDF2 + current GPU or ASIC farms = game over for passwords (Re: TLS2)

2013-09-30 Thread Adam Back

On Mon, Sep 30, 2013 at 07:41:20PM +0100, Wasa wrote:

The only attack is on the PBKDF2 stored on the server (or malware to grab
the password on the client)


right.  I was thinking of SRP/JPAKE where the server does not store
PBKDF2(salt,pwd) server-side, but rather it stores something like
g^{PBKDF2(pwd)}.  If I break into the server and get hold of all
g^{PBKDF2(pwd_i)}, can I parallelize the DL part?


Well OK, this IRTF draft I was coincidentally just reading claims to be
marginally more efficient than SRP:

https://datatracker.ietf.org/doc/draft-irtf-cfrg-augpake/?include_text=1

and it has a verifier of that form.

You could parallelize it somewhat at a micro level, but I don't think you
need to, because you can just try lots of passwords in parallel against the
same verifier.
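A toy demonstration of that point (illustrative parameters only: a small
Mersenne-prime toy modulus and plain SHA-256 standing in for the draft's H;
real deployments use large standardized groups and salted, stretched
hashing): each dictionary candidate checks independently against the stolen
verifier, so the attack parallelizes trivially across candidates.

```python
import hashlib

# Grinding a stolen verifier of the form g^{H(pwd)} mod p: each guess
# is an independent exponentiation-and-compare, hence embarrassingly
# parallel across candidate passwords.

p = 2**127 - 1   # toy group modulus (Mersenne prime; NOT a real choice)
g = 5

def H(pw: bytes) -> int:
    return int.from_bytes(hashlib.sha256(pw).digest(), "big")

stolen_verifier = pow(g, H(b"letmein"), p)  # lifted from a compromised server

def grind(verifier, dictionary):
    for cand in dictionary:                 # trivially parallelizable loop
        if pow(g, H(cand), p) == verifier:
            return cand
    return None

assert grind(stolen_verifier, [b"password", b"123456", b"letmein"]) == b"letmein"
```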

Because I think one of the claims of SRP and JPAKE is not only to be 
resistant to passive/active sniffing followed by offline brute-force, 
but also to be more resistant against server compromise.


No, I do not think that is true.  And I noticed the IRTF draft above's
security-comments section confirms it: it explicitly states that all it can
offer in the event of server compromise is that the attacker has to brute
force the PBKDF (aka function H in their terminology).  (Actually that is
why I bothered to read the draft: because of the title/abstract I wondered
if they had claimed to do better against server compromise, and if so how,
as that is beyond the state of the art AFAIK; it turns out they are the same
as SRP etc.)


Idea?

If it turns out to be difficult to parallelize the brute-force, why 
not move towards this? It would be an incremental improvement more 
likely to be widely-accepted than brand-new schemes. JPAKE is a bit 
slow I presume (given the number of required rounds), SRP is not and 
the patent expires soon (if memory serves).


I think SRP is not patented, and in fact was designed to avoid reliance on
any pre-existing patented techniques.  There were some EKE patents, but
coincidentally the AKE version just expired last month.  Nice timing.


Adam


[cryptography] forward-secrecy =2048-bit in legacy browser/servers? (Re: [Cryptography] RSA equivalent key length/strength)

2013-09-25 Thread Adam Back

On Wed, Sep 25, 2013 at 11:59:50PM +1200, Peter Gutmann wrote:

Something that can sign a new RSA-2048 sub-certificate is called a CA.  For
a browser, it'll have to be a trusted CA.  What I was asking you to explain is
how the browsers are going to deal with over half a billion (source: Netcraft
web server survey) new CAs in the ecosystem when websites sign a new RSA-2048
sub-certificate.


This is all ugly stuff, and RSA/DH keys shorter than 3072 bits should
probably be deprecated in any new standard, but for the legacy work-around
scenario, to try to improve things while that is happening:

Is there a possibility with the RSA-RSA ciphersuite to have a certified RSA
signing key, but have that key sign an RSA key-negotiation key?

At least that was how the export ciphersuites worked (1024+ bit RSA auth,
512-bit export-grade key negotiation).  And that could even be weakly forward
secret in that the 512-bit RSA key could be per session.  I imagine that
ciphersuite is widely disabled at this point.

But wasn't there also a step-up certificate that allowed stronger keys if the
right certificate bits were set (for approved export use like banking)?
Would setting that bit in all certificates allow some legacy servers/browsers
to get forward secrecy via large, temporary, key-negotiation-only RSA keys?


(You have to wonder if the 1024-bit max DH standard and code limits were a
bit of earlier sabotage in themselves.)

Adam


Re: [cryptography] Deleting data on a flash?

2013-09-23 Thread Adam Back

While I get that wear leveling is a problem, I'm not sure the flash in a
phone is even going to use wear leveling, but say for the sake of argument it
does.  It is, however, not a completely brand-new problem: relatedly,
spinning disks now and then suffer sector failures, and the failed sectors
are remapped by the drive firmware to another spare good sector.
Consequently data that you might have wanted to securely delete could be left
still readable by a lower-level read.

Dm-crypt(*), the hard disk encryption system on Linux, offers a solution to
the sector remapping problem: a key management system called LUKS (Linux
Unified Key Setup).  How it works is that the key you would like to be able
to delete is secret-shared so that you need k of the n blocks to access the
key.  This also provides additional redundancy: you don't want a single copy
of your access structure in case it suffers a sector failure.  To delete,
simply delete all of the n blocks.  So long as at least k of the blocks
survive, your data is still recoverable, and the parameters are chosen to
make losing it astronomically unlikely (short of a complete disk failure,
which you're going to notice).

For wear-leveling it's more tricky, but I think the trick for deletion would
be to delete and then temporarily fill the disk - even wear leveling has to
delete then.  You probably want the LUKS k-of-n trick also, to account for
partial failures and inaccessible spare capacity held for remapping.
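A minimal sketch of the split-key deletion idea.  Note this illustrative version is an n-of-n XOR split, where destroying any single share destroys the key; a k-of-n scheme as described above would use Shamir secret sharing to additionally tolerate up to n-k failed sectors (LUKS in practice uses a related anti-forensic information splitter).

```python
import secrets

def split_key(key: bytes, n: int) -> list[bytes]:
    """Split `key` into n shares; ALL n are required to rebuild it,
    so securely erasing any single share deletes the key."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        # fold each random share into the final share by XOR
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine_shares(shares: list[bytes]) -> bytes:
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

disk_key = secrets.token_bytes(32)
blocks = split_key(disk_key, 5)          # scatter these across n disk blocks
assert combine_shares(blocks) == disk_key
# "Deletion": overwrite any one block and the key is information-
# theoretically gone (each share alone is a uniform random string).
```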

Also it seems to me that SSD drive manufacturers ought to have a special
deletable NVRAM for key storage.  It's not exactly an unknown problem, and
would allow instant secure deletion.  Of course people may rightly not trust
the manufacturer, but it wouldn't hurt to mix it in (xor the LUKS-style key
and the NVRAM-deletable one).

Apparently (or so I've heard claimed) SSDs also offer lower-level APIs to
actually wipe physical (not logically wear-level mapped) cells, to reliably
wipe working cells.  Anyone know about those?  They could be used where
available and to the extent they are trusted.

Adam

(*) DM = Device Mapper, a file system layering mechanism on linux.

On Mon, Sep 23, 2013 at 11:02:45AM +0300, ianG wrote:

On 23/09/13 07:12 AM, Dev Random wrote:

I've been thinking about this for a while now and I don't see a way to
do this with today's mobile devices without some external help.

The issue is that it's pretty much impossible to delete data securely
from a flash device.



Why is that?



That means that in order to guarantee PFS, you
have to store the keys in memory only.  But again, in a mobile
environment, you don't have access to stable memory either, because of
the OS restarting your app, or the device itself rebooting.

Let's call this the persistence/deletion issue.

So, I submit that PFS in async messaging is impossible without help from
some kind of ephemeral, yet persistent storage.  A possible solution
might be to store a portion of the key material (through Shamir's secret
sharing) on servers that you partially trust.



(I agree with the difficulty in general.  Stating anything like PFS 
in the context of a protocol makes less sense if one considers that 
the clients either end save the messages.)




iang



[cryptography] secure deletion on SSDs (Re: Asynchronous forward secrecy encryption)

2013-09-23 Thread Adam Back

(Changing the subject line to reflect topic drift).

That's not bad (making the decryption dependent on accessibility of the
entire file) - nice as a design idea.  But it could be expensive in the sense
that any time any block in the file changes, you have to re-encrypt the file
or, more efficiently, re-wrap the key computed from the hash of the file.
Still, you have to re-write the header any time there is a block change, and
do it atomically, or ideally log-recoverably.  Also you have to re-read and
hash the whole file to re-compute the xor sha(encrypted-file) header.  Well,
I guess even that is relatively fixable, e.g. with a Merkle hash of the
blocks of the file instead, plus a bit of memory caching.
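A sketch of the header construction being discussed, with hypothetical helper names (XOR with a hash of the master key stands in for real key wrapping): the stored header only yields the file key if the full, unmodified ciphertext can still be read and hashed.

```python
import hashlib, secrets

def sha256(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_header(master_key: bytes, file_key: bytes, ciphertext: bytes) -> bytes:
    wrapped = xor(file_key, sha256(master_key))   # toy stand-in for key wrapping
    return xor(wrapped, sha256(ciphertext))       # binds the key to the whole file

def recover_file_key(master_key: bytes, header: bytes, ciphertext: bytes) -> bytes:
    return xor(xor(header, sha256(ciphertext)), sha256(master_key))

master = secrets.token_bytes(32)
fkey = secrets.token_bytes(32)
ct = secrets.token_bytes(4096)                    # the encrypted file body
hdr = make_header(master, fkey, ct)
assert recover_file_key(master, hdr, ct) == fkey
# Wiping (or failing to read) any part of ct changes sha256(ct), so the
# recovered key is garbage - which is the deletion property, but also why
# every block change forces a header rewrite (or a Merkle tree over blocks).
```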

Adam

On Mon, Sep 23, 2013 at 03:00:03PM +0200, Natanael wrote:

  I made a suggestion like this elsewhere:

  Store the keys split up in several different files using Shamir's
  Secret Sharing Scheme. Encrypt each file with a different key. Encrypt
  those keys with a master key. XOR each encrypted key with the SHA256 of
  their respective encrypted files. Put those XORed keys in the headers
  of their respective files.

  If you manage to securely wipe just ~100 bits of any of the files, the
  keys are unrecoverable.

  I don't know if that can provide enough assurance of secure deletion on
  a flash memory, but it's better than nothing.



[cryptography] secure deletion on SSDs (Re: Asynchronous forward secrecy encryption)

2013-09-23 Thread Adam Back

On Mon, Sep 23, 2013 at 01:39:35PM +0100, Michael Rogers wrote:

Apple came within a whisker of solving the problem in iOS by creating
an 'effaceable storage' area within the flash storage, which bypasses
block remapping and can be deleted securely. However, iOS only uses
the effaceable storage for resetting the entire device (by deleting
the key that encrypts the user's filesystem), not for securely
deleting individual files.


Hmm, well that's interesting, no?  With the ability to securely delete a
single key you can probably use that to selectively delete files, with an
appropriate key management structure.  E.g. without optimizing it, you could
have a table of per-file keys, encrypted with the master key.  To delete a
given file you'd re-encrypt everything in the file table to a new key, except
the deleted file's entry, then delete and overwrite this effaceable storage
area.
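A toy model of that key-table scheme (all names hypothetical; XOR with the master key stands in for a real cipher, and the `master` attribute models the effaceable storage area):

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class EffaceableKeyStore:
    """Per-file keys wrapped under a master key held in the (securely
    erasable) effaceable area; XOR stands in for a real cipher."""
    def __init__(self):
        self.master = secrets.token_bytes(32)    # lives in effaceable storage
        self.table = {}                          # filename -> wrapped file key

    def create(self, name: str) -> bytes:
        key = secrets.token_bytes(32)
        self.table[name] = xor(key, self.master)
        return key

    def unwrap(self, name: str) -> bytes:
        return xor(self.table[name], self.master)

    def secure_delete(self, name: str):
        # Re-wrap every surviving key under a fresh master key, drop the
        # deleted entry, then overwrite the effaceable area with the new key.
        survivors = {f: self.unwrap(f) for f in self.table if f != name}
        self.master = secrets.token_bytes(32)
        self.table = {f: xor(k, self.master) for f, k in survivors.items()}

store = EffaceableKeyStore()
k_a = store.create("a.txt")
store.create("b.txt")
store.secure_delete("b.txt")
assert store.unwrap("a.txt") == k_a and "b.txt" not in store.table
```

Once the old master key is physically erased, the deleted file's wrapped key entry (and any stale copies of the old table left behind by wear leveling) decrypt to nothing useful.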

Adam


Re: [cryptography] Asynchronous forward secrecy encryption

2013-09-20 Thread Adam Back

Depending on what you're using this protocol for, you maybe should try to
make it so that an attacker cannot tell that two messages are for the same
recipient, nor which message comes before another, even with access to the
long-term keys of one or both parties after the fact.  (The forward-anonymity
property.)

Otherwise it may not be safe for use via remailers (when the exit is to a
public drop box like alt.anonymous.messages).  And being able to prove who
sent which message to whom after the fact is not good either, if that can be
distinguished with access to either party's long-term keys (missing
forward-anonymity).

Adam

On Thu, Sep 19, 2013 at 09:20:04PM +0100, Michael Rogers wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 19/09/13 08:04, Trevor Perrin wrote:

I'd have to see a writeup to have real comments.  But to address
the issue of fragility:

It seems you're worried about per-message key updates because in
the (infrequent?) case that a sender's write to storage fails, an
old key would be reused for the next message.

What happens in that case?  You mentioned random IVs, so I assume
the only problem is that the recipient can't decrypt it (as she's
already deleted the old key on receipt of the previous message).


The key reuse issue isn't related to the choice between time-based and
message-based updates. It's caused by keys and IVs in the current
design being derived deterministically from the shared secret and the
sequence number. If an endpoint crashes and restarts, it may reuse a
key and IV with new plaintext. Not good.



Re: [cryptography] Asynchronous forward secrecy encryption

2013-09-20 Thread Adam Back

Btw, as I didn't say it explicitly: why I claim (forward-anonymous) sequence
security is important is that mixmaster remailers shuffle and reorder
messages.  If the message sequence is publicly viewable, that property is
broken up-front; and if the message sequence is observable backwards in time
with disclosure of current keys, then in the event of a key compromise
anonymity is lost.

Adam

On Fri, Sep 20, 2013 at 11:19:58AM +0200, Adam Back wrote:

Depending on what you're using this protocol for you maybe should try to
make it so that an attacker cannot tell that two messages are for the same
recipient, nor which message comes before another even with access to long
term keys of one or both parties after the fact.  (Forward-anonymity
property).

Otherwise it may not be safe for use via remailers (when the exit is to a
public drop box like alt.anonymous.messages).  And being able to prove who
sent which message to who after the fact is not good either, if that can be
distinguished with access to either parties long term keys (missing
forward-anonymity).

Adam



Re: [cryptography] Asynchronous forward secrecy encryption

2013-09-18 Thread Adam Back

That's a good approach, but note it does assume your messages are delivered
in the same order they are sent (even though they are delivered
asynchronously).  That is generally the case but does not have to be -
neither email nor UDP, for example, guarantees that.


Maybe you would want to include an authenticated sequence number so the
recipient can detect gaps and out-of-order messages, though that does create
an attack where the attacker can delete a message and cause the recipient to
keep keys around for it.

Or better, the actual key used could be derived so as to fix that: e.g.
k_{i+1}=H(k_i), delete k_i; but also sk_i=H(1||k_i), and use the sk_i values
as the message keys.  That way you can keep keys for a gap with no security
implication other than for the missing/delayed message itself.  Other
messages that come afterwards would be unaffected.
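A sketch of that derivation, assuming SHA-256 for H: the chain key k_i only ever moves forward, while the per-message keys sk_i = H(1||k_i) can be retained for gapped messages without exposing any other message's key.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class Receiver:
    """Hash-chain ratchet: the chain key advances as k_{i+1} = H(k_i)
    and the old one is forgotten; the actual message key is
    sk_i = H(0x01 || k_i), so keys stashed for a gap reveal nothing
    about earlier or later chain keys."""
    def __init__(self, k0: bytes):
        self.k, self.i = k0, 0
        self.pending = {}                # seq -> sk, for gapped messages

    def message_key(self, seq: int) -> bytes:
        if seq in self.pending:
            return self.pending.pop(seq)
        while self.i < seq:              # ratchet past a gap, stashing sk_i
            self.pending[self.i] = H(b'\x01' + self.k)
            self.k, self.i = H(self.k), self.i + 1
        sk = H(b'\x01' + self.k)
        self.k, self.i = H(self.k), self.i + 1   # forget the old chain key
        return sk
```

If message 1 never arrives, only pending[1] stays around; compromise of that key affects the missing message alone.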

Adam

On Tue, Sep 17, 2013 at 04:14:09PM -0700, Trevor Perrin wrote:

On Tue, Sep 17, 2013 at 2:01 PM, Michael Rogers
mich...@briarproject.org wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi Marco,

This is a problem we're working on as part of the Briar project. Our
approach is pretty simple: establish a shared secret when you first
communicate, periodically run that secret through a one-way function
to get a new shared secret, and destroy the old one.


Why not have separate symmetric keys for each direction of
communication (Alice - Bob, Bob-Alice).

Then whenever a party encrypts or decrypts a message, they can update
the corresponding key right away, instead of having to wait.

(Or look at OTR's use of updating Diffie-Hellmans).



Re: [cryptography] Asynchronous forward secrecy encryption

2013-09-16 Thread Adam Back

Well, aside from the PGP PFS draft that you found (which I am one of the
co-authors of), I had also, before that in 1998, observed that any IBE system
can be used to make a non-interactively forward-secret system.

http://www.cypherspace.org/adam/nifs/

There were prior IBE systems (with expensive setup and a weak security
margin: in the case of Maurer & Yacobi's [1] you had to compute a 512-bit
discrete log during setup to gain 1024-bit security), but the IBE -> NIFS
approach became efficient with Boneh & Franklin's 2001 Weil-pairing-based
IBE.

However, personally I consider that to still be a bit on the experimental
side, and prefer more conventional approaches.

However, in the meantime here's another approach for you (all new AFAIK :)
Use the Paillier (1999) cryptosystem to compute discrete logs mod n^2.
(Knowledge of lambda(n), the Carmichael function of n, allows computing the
discrete log in Paillier.)  Create some relatively arbitrary set of public
keys y_i in an easily (publicly) computable sequence, e.g. y_i=KDF(i).  Use
lambda to compute x_i s.t. y_i = h^x_i mod n^2 with generator h.  Delete
lambda (and p, q, e, d etc).

Now encryption is (randomized) Elgamal using y_i, h and x_i during epoch i.
Or for message i (if you stash a big load of public keys on an auxiliary
server for senders to take).  Should be pretty easy to implement.


The discrete log of y = h^x mod n^2 is computed with l = lambda:

define log(y, h) {
  return (modexp(y,l,n^2)-1)/n * modinv((modexp(h,l,n^2)-1)/n, n) % n;
}

(in bc-like pseudocode, where modexp is modular exponentiation and modinv is
the modular inverse).


or  x = ((y^l - 1 mod n^2) | n) / ((h^l - 1 mod n^2) | n) mod n

where | is integer division (no modulus) and / is multiplication by the
modular inverse mod n.  n is an RSA modulus.
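A runnable sketch of that computation with toy parameters (tiny primes, and h = 1+n, the standard order-n Paillier base; real use would need a full-size RSA modulus and a hardened choice of generator):

```python
from math import gcd

p, q = 11, 13                    # toy primes; a real n is 2048+ bits
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lambda(n) = lcm(p-1, q-1)
h = 1 + n                        # generator of the order-n subgroup mod n^2

def L(u: int) -> int:
    # Paillier's L function; u must be congruent to 1 mod n
    return (u - 1) // n

def dlog(y: int) -> int:
    # Recover x (mod n) from y = h^x mod n^2, using the trapdoor lambda
    num = L(pow(y, lam, n2))
    den = L(pow(h, lam, n2))
    return num * pow(den, -1, n) % n

x = 42
y = pow(h, x, n2)                # a "public key" y_i; x_i recoverable via lambda
assert dlog(y) == x              # delete lam afterwards and nobody can repeat this
```

Deleting lam (and p, q) after deriving all the x_i is exactly the "delete the trapdoor" step the scheme relies on for forward secrecy.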

Adam

[1] Nov 1996 - U M Maurer and Y Yacobi - A Non-interactive Public-Key
Distribution System, Designs, Codes and Cryptography vol 9 no. 3 pp 305-316

On Mon, Sep 16, 2013 at 01:45:43PM +0200, Marco Pozzato wrote:

  Hi all,
  I'm looking for an asynchronous messaging protocol with support for
  forward secrecy: I found some ideas, some abstract paper but nothing
  ready to be used.
  OTR seems the preeminent protocol, but does not have support for
  asynchronous communication.
  This post [1]https://whispersystems.org/blog/asynchronous-security/
  describes an interesting variation on OTR: the basic idea is to
  precalculate 100 Diffie-Hellman and consume one at every new message.
  On the opposite side, for OpenPGP lovers, I found an old
  extension [2]http://tools.ietf.org/html/draft-brown-pgp-pfs-01 which
  adopt the same approach, using many short-lived keys, which frequently
  expire (eg: every week) and are deleted.
  They are both clever ideas to provide PFS, but what does it mean to the
  average user? Let say that today I discover an attack run on 1st of
  August:
 * OTR variation: I do not know which messages were wiretapped. 100
   messages could span a few hours or two months.
* OpenPGP: I know I lost messages sent in the first week of August.

  What do you think about it?
  Marco

References

  1. https://whispersystems.org/blog/asynchronous-security/
  2. http://tools.ietf.org/html/draft-brown-pgp-pfs-01





Re: [cryptography] [Bitcoin-development] REWARD offered for hash collisions for SHA1, SHA256, RIPEMD160 and others

2013-09-16 Thread Adam Back

Mining power policy abuse (deciding which transactions prevail based on
compute power advantage for theft reasons, or political reasons, or taint
reasons) is what committed coins protect against:

https://bitcointalk.org/index.php?topic=206303.0

(It's just a proposal; it's not implemented.)

Adam

On Mon, Sep 16, 2013 at 05:21:42PM +0200, Lodewijk andré de la porte wrote:


1) We advise mining the block in which you collect your bounty yourself;
   scriptSigs satisfying the above scriptPubKeys do not cryptographically sign
   the transaction's outputs. If the bounty value is sufficiently large
   other miners may find it profitable to reorganize the chain to kill your
   block and collect the reward themselves.  This is particularly
   profitable for larger, centralized, mining pools.


This is a big problem.



Re: [cryptography] [Cryptography] prism proof email, namespaces, and anonymity

2013-09-15 Thread Adam Back

On Fri, Sep 13, 2013 at 04:55:05PM -0400, John Kelsey wrote:

The more I think about it, the more important it seems that any anonymous
email like communications system *not* include people who don't want to be
part of it, and have lots of defenses to prevent its anonymous
communications from becoming a nightmare for its participants.


Well, you could certainly allow people to opt in to receiving anonymous
email: send them a notification mail saying an anonymous email is waiting for
them (with whatever warning that it could be a nastygram as easily as the
next thing).

People have to bear in mind that email itself is not authenticated - SMTP
forgeries still work - but there are still a large number of newbies, some of
whom have sufficiently thin skin to go ballistic when they realize they
received something anonymous, and have not internalized the implications of
digital free speech.


At ZKS we had a pseudonymous email system.  Users had to pay for nyms (a pack
of 5, paid per year) so they wouldn't throw them away on nuisance pranks too
lightly.  They could be blocked if credible abuse complaints were received.

Another design permutation I was thinking could be rather interesting is
unobservable mail.  That is to say, the participants know who they are
talking to (signed, non-pseudonymous) but passive observers do not.  It seems
to me that in that circumstance you have more design leverage to increase the
security margin using PIR-like tricks than you do with pseudonymous/anonymous
mail - if the contract is that the system remains very secure so long as both
parties to a communication channel want it to remain that way.

There were also a few protocols to facilitate abuse-resistant anonymous
email - the user gets some kind of anonymously refreshable egress capability
token.  If they abuse it they are not identified, but they lose the
capability.  E.g.
http://www-users.cs.umn.edu/~hopper/faust-wpes.pdf

Finally, there can be different types of costs for nyms and posts: creating
nyms or individual posts can cost real money (hard to retain pseudonymity),
bitcoin, or hashcash, as well as lost reputation if a used nym is canceled.

Adam


[cryptography] motivation, research ethics organizational criminality (Re: Forward Secrecy Extensions for OpenPGP: Is this still a good proposal?)

2013-09-13 Thread Adam Back

I suspect there may be some positive correlation between brilliant minds and
consideration of human rights & the ability to think independently and
critically, including about uncritical acceptance of authoritarian dictates.
We're not talking about a random grunt - we're talking about the gifted end
of PhD mathematicians, or equivalent, to be much use to the NSA for
surreptitiously cracking or backdooring ciphers in the face of public
analysis.  (Well, the DRBG one was pretty ham-fisted, but maybe they have
some better ones we haven't found yet, or at least tried.)

Take a look e.g. at this Washington Monthly article; there is a history of
top US universities having to divest themselves of direct involvement with
classified research due to protestations from their academic staff about the
ethical considerations.

http://www.washingtonmonthly.com/ten-miles-square/2013/09/does_classified_research_corru046860.php


“In the 1960s students at MIT protested strongly against having a
classified research laboratory on the campus and MIT said we will divest
it, so it won’t be part of MIT anymore,” said Leslie.  “It still exists in
Cambridge, but it’s not officially connected.” Leslie also points to
Stanford, where they made the decision for their Stanford Research
Institute to disaffiliate and become an independent non-profit.


Psychopaths are a minority, and people at the top end of crypto/maths skills
are sought after enough to easily move jobs even in a down market - so the
"must collect a paycheck" argument seems unlikely.  So I stand by my argument
that they probably scored an own goal on the retention and motivation front.
I think the majority of people won't like going to work, or will feel
demotivated, feeling the world is sneering at their employer as a
quasi-criminal org.

Adam

On Tue, Sep 10, 2013 at 11:05:58PM +0200, David D wrote:

Quote,  You've got to think (NSA claims to be the biggest employer of
mathematicians) that seeing the illegal activities the US has been getting
up to with the fruits of their labour that they may have a mathematician
retention or motivation problem on their hands.

You mean like the principled mathematicians working on cluster bombs,
drones, and other cool shit?

Everyone at the NSA knows exactly what they are doing.

I suspect, like most that suck off the military-industrial complex tit,
there is surprising low turnover.

Paychecks only go so far with the principled, but spineless will collect a
check forever and do whatever it takes to keep it coming.



Re: [cryptography] Forward Secrecy Extensions for OpenPGP: Is this still a good proposal?

2013-09-10 Thread Adam Back

You know coincidentally we (the three authors of that paper) were just
talking about that very topic in off-list (and PGP encrypted:) email.

I remain keen on forward-secrecy, and it does seem to be in fashion again
right now.

Personally I think we in the open community need to up our game an order of
magnitude.  We thought we won the last crypto wars when mandatory key escrow
was abandoned and the US crypto export regs were basically scrapped.  But it
turns out instead they just went underground and sabotaged everything they
could gain influence over, with a $250m/year black budget and limited regard
for law, ethics and human rights.  Apparently including SSL MITMs using CA
keys.

You've got to think (the NSA claims to be the biggest employer of
mathematicians) that, seeing the illegal activities the US has been getting
up to with the fruits of their labour, they may have a mathematician
retention or motivation problem on their hands.  Who wants their life's work
to be a small part in the secret and illegal creation of a surveillance
state, with a real risk of creating the environment for a hard-to-recover
fascist political regime over the next century, if events allow even worse
governments to get in and further overthrow the democratic pretense?

How about this for another idea: go for a TLS 2.0 that combines Tor and TLS,
and deprecate HTTP (non-TLS), TLS 1.x and SSL.  Every web server a Tor node,
every server an encrypted web cache, many browsers a Tor node.

Do something to up the game, not just blunder along reacting and failing
year on year to deploy fixes for glaring holes.

Adam

On Tue, Sep 10, 2013 at 08:35:08PM +0200, Fabio Pietrosanti (naif) wrote:

Hi all,

i just read about this internet draft Forward Secrecy Extensions for
OpenPGP available at
http://tools.ietf.org/html/draft-brown-pgp-pfs-03 .

Is it a still good proposal?

Should it be revamped as an actual improvement of currently existing use
of OpenPGP technology?



Re: [cryptography] a Cypherpunks comeback

2013-07-22 Thread Adam Back

Could you please get another domain name, that name is just ridiculous.

It might tickle your humour, but I guarantee it does not tickle that of 99%
of potential subscribers...

Unless your hidden objective is to drive away potential subscribers.

Adam

On Sun, Jul 21, 2013 at 11:07:26AM +0200, Eugen Leitl wrote:

- Forwarded message from Riad S. Wahby r...@jfet.org -

Date: Sat, 20 Jul 2013 12:41:25 -0400
From: Riad S. Wahby r...@jfet.org
To: cpunks-recipients-suppres...@proton.jfet.org
Subject: a Cypherpunks comeback
User-Agent: Mutt/1.5.21 (2010-09-15)

tl;dr:
I'm writing to invite you back to the Cypherpunks mailing list. If
you're interested, you can join via
   https://al-qaeda.net/mailman/listinfo/cypherpunks

Hello,

In the past couple days I've exchanged emails with John Young and
Eugen Leitl on some brokenness in the Cypherpunks mailing list. This
discussion brought us to a discussion of attempting to resurrect the
list's wetware, as it were, in addition to its software. At Eugen's
request, John dug up a couple Majordomo WHO outputs from about 15 years
ago; I tidied up the lists, and now I'm writing to you.

So! if you still have an interest in crypto, privacy, and politics, and
if you want to discuss that interest with a bunch of like-minded weirdos
from the aether, you can subscribe yourself via the web interface above
or by sending an email with subscribe in the body to
cypherpunks-requ...@al-qaeda.net.

(I am aware the provocative choice of domain name may discourage you
somewhat. I can only tell you that I've been running a Cypherpunks list
of some sort from this domain for a bit over a decade, and I haven't yet
been spirited away in a black helicopter. Here's hoping for another
helicopter-free decade.)

Best regards, and welcome back, preemptively,

-=rsw
on behalf of jya, eugen, and rsw

- End forwarded message -
--
Eugen* Leitl a href=http://leitl.org;leitl/a http://leitl.org
__
ICBM: 48.07100, 11.36820 http://ativel.com http://postbiota.org
AC894EC5: 38A5 5F46 A4FF 59B8 336B  47EE F46E 3489 AC89 4EC5


Re: [cryptography] SSL session resumption defective (Re: What project would you finance? [WAS: Potential funding for crypto-related projects])

2013-07-04 Thread Adam Back

Forward secrecy is an exceedingly important security property.  Without it an
attacker can store encrypted messages via passive eavesdropping, or
court-order any infrastructure that records messages (advertised or covert),
and then obtain the private key via burglary, subpoena, coercion or
software/hardware compromise.

The fact that the user couldn't decrypt the traffic even if he wanted to, as
he automatically no longer has the keys, is extremely valuable to the overall
security for casual, high-assurance and after-the-fact security (aka
subpoena) scenarios.

In my view all non-forward-secret ciphersuites should be deprecated.

(The argument that other parts of the system are poorly secured, is not an
excuse; and anyway their failure modes are quite distinct).

Btw, DH is not the only way to get forward secrecy; ephemeral (512-bit) RSA
keys were used as part of the now-defunct export ciphers, and there's the
less well known fact that you can extend forward secrecy using a
symmetric-key one-way function: k' = H(k), delete k.

DH also provides forward security (backward secrecy? the terminology is a
misnomer either way): basically recovery of security if decryption keys are
compromised but the random number generator is still secure.  (And auth keys
presumably.)

The fact that forward secrecy is secure against passive adversaries, even
with possession of the authenticating signature keys, also ups the level of
attack required to obtain plaintext.  A MITM is somewhat harder to achieve at
large scale, and without detection, even in the face of compromised CAs and
so on.  So that is another extremely valuable property provided by DH.

Don't knock DH - it provides multiple significant security advantages over
long-lived keys.  All comms that are not necessarily store-and-forward should
be using it.

Adam

On Thu, Jul 04, 2013 at 11:16:21AM -0400, Thierry Moreau wrote:
Thanks to Nico for bringing the focus on DH as the central ingredient 
of PFS.


Nico Williams wrote:


But first we'd have to get users to use cipher suites with PFS.  We're
not really there.



Why?

Perfect forward secrecy (PFS) is an abstract security property 
defined because Diffie-Hellman (DH) -- whatever flavor of it -- 
provides it.


As a reminder, PFS prevents an adversary who gets a copy of a 
victim's system state at time T (long term private keys), then *only* 
eavesdrops the victim's system protocol exchanges at any time T' that 
is past a session key renegotiation (hint: the DH exchange part of 
the renegotiation bars the passive eavesdropper).


It's nice for us cryptographers to provide such protection. But its 
incremental security appears marginal.


So, is it really *needed* by the users given the state of client 
system insecurity?


I would rather get users to raise their awareness and self-defense 
against client system insecurity (seldom a cryptographer 
achievement).


--
- Thierry Moreau




Re: [cryptography] SSL session resumption defective (Re: What project would you finance? [WAS: Potential funding for crypto-related projects])

2013-07-04 Thread Adam Back

I do not think it is a narrow difference.  Endpoint compromise via subpoena,
physical seizure, or court-mandated disclosure is a far different thing from
pre-emptive storing and later decryption.  The scale at which a society will
do the former, and tolerate doing it given its inherently increased
visibility, is much curtailed.  Trying to do wide-scale MITM is much harder
than hoovering up ciphertext and then after the fact obtaining keys by
whatever method is expedient: legal/extra-legal, secret particularized
warrant, secret general warrants, government-authorized malware, etc.  All of
these things are apparently happening on a scale larger than authorized by
society.

Having to physically seize systems and issue individualized subpoenas,
through a generally public court process based on articulated suspicion,
creates a natural balance - versus the general warrants that the US rightly
fought a revolution against my ancestors, the British, over.

Basically unless you think PRISM is a good idea, you should use DH.

On Thu, Jul 04, 2013 at 12:37:40PM -0400, Thierry Moreau wrote:

(The argument that other parts of the system are poorly secured is not an
excuse; and anyway their failure modes are quite distinct.)


In my opinion, when you consider casual users' needs, those 
arguments are not a top priority.


Subpoena resistance is a pretty high priority for end user systems.


Btw DH is not the only way to get forward secrecy; ephemeral (512-bit) RSA
keys were used as part of the now-defunct export ciphers, and a less well
known fact is that you can extend forward secrecy using symmetric-key one-way
functions: with a hash function, set k' = H(k), then delete k.


Not completely, by this counterexample: generate k, suffer an 
enemy copy of system state including k, let k' = H(k), delete k, use 
k' in dangerous confidence.  I mean the textbook PFS definition is 
not satisfied by k' = H(k).


I think you are confusing forward secrecy (aka backward security) with
backward secrecy (forward security).  Ross Anderson tried to improve things
with his forward-secure/backward-secure alternative terminology:

http://www.cypherspace.org/adam/nifs/refs/forwardsecure.pdf

Forward secrecy is a bad term from a mnemonic point of view; I think
Anderson's forward/backward security terms are better.  EDH provides both;
k' = H(k) provides only backward security (aka forward secrecy).  The point
is you do both: you can computationally afford to do k' = H(k) with an agile
key-schedule cipher like AES every minute or whatever.
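The k' = H(k) ratchet described above can be sketched as follows (my illustration, with SHA-256 assumed as the one-way function):

```python
import hashlib

def ratchet(k: bytes) -> bytes:
    """One backward-security step: derive k' = H(k), then discard k."""
    return hashlib.sha256(k).digest()

# Start from some negotiated session key (placeholder value).
k = hashlib.sha256(b"initial session key material").digest()
history = [k]
for _ in range(3):          # e.g. ratchet once a minute
    k = ratchet(k)
    history.append(k)

# An attacker who compromises the endpoint now (holding only the
# latest k) cannot invert SHA-256 to recover earlier keys: that is
# the backward security (aka forward secrecy of old traffic) the
# ratchet provides.  It does NOT protect future keys: from the stolen
# k every later H(k), H(H(k)), ... is computable, which is why you
# still want periodic (E)DH renegotiation for forward security.
assert len(set(history)) == len(history)   # each step yields a new key
```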

Adam


Re: [cryptography] What project would you finance? [WAS: Potential funding for crypto-related projects]

2013-07-02 Thread Adam Back

I think it is time to deprecate non-https (and non-forward-secret
ciphersuites).  Compute power has moved on, session caching works,
symmetric crypto is cheap.

Btw did anyone get a handle on session resumption - does it provide forward
secrecy (via k' = H(k))?  Otherwise, I have seen concerns that a disk-stored
or long-lived session-resumption cache may itself start to become an exposure
risk, somewhat analogous to non-forward-secret SSL.

Adam

On Tue, Jul 02, 2013 at 12:50:32PM +0300, ianG wrote:

BTNS (better than nothing security) for IPSec could save it.

There is precedent:  the ideas behind SSH totally swept out 
secure-telnet within a year or so.  Skype demolished other VoIP 
providers, because its keys were hidden.  The same thing happened 
with that email transport security system.


In contrast, IPSec is a complete and utter deployment failure, and it 
achieves statistically unmeasurable rates of protection across the net. 
Its near cousin, secure browsing, at least achieved penetration rates 
of around 1% if one counts the HTTPS v. HTTP ratio (what else 
matters?). Both suffered in large part because they insisted on the 
classical certificates / PKI schoolbook.


So, if one is looking for a saviour, there is pretty good correlation here.



[cryptography] SSL session resumption defective (Re: What project would you finance? [WAS: Potential funding for crypto-related projects])

2013-07-02 Thread Adam Back

On Tue, Jul 02, 2013 at 11:48:02AM +0100, Ben Laurie wrote:

On 2 July 2013 11:25, Adam Back a...@cypherspace.org wrote:
does it provide forward secrecy (via k' = H(k)?).  


Resumed [SSL] sessions do not give forward secrecy. Sessions should be
expired regularly, therefore.


That seems like an SSL protocol bug, no?  Given the existence of
forward-secret ciphersuites, the session-resumption cache mechanism itself
MUST exhibit forward secrecy.
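One hypothetical way a resumption cache could get that property, sketched under assumed design choices (the `ResumptionCache` class, its epoch scheme and the XOR wrapping are illustrative inventions, not any real TLS stack's mechanism): wrap cached master secrets under an epoch key that is itself hash-ratcheted, so advancing the epoch deletes the only key that can unwrap older entries.

```python
import hashlib
import hmac

class ResumptionCache:
    """Sketch: cached resumption secrets are wrapped under an epoch
    key; advancing the epoch ratchets the key (k' = H(k)) and makes
    entries from earlier epochs unrecoverable, restoring forward
    secrecy to resumption.  (Hypothetical design for illustration.)"""

    def __init__(self, initial_key: bytes):
        self.epoch = 0
        self.epoch_key = initial_key
        self.store = {}            # session_id -> (epoch, wrapped secret)

    def _wrap_key(self, session_id: bytes) -> bytes:
        return hmac.new(self.epoch_key, session_id, hashlib.sha256).digest()

    def put(self, session_id: bytes, master_secret: bytes):
        # Toy XOR wrap: master_secret must be <= 32 bytes here.
        w = self._wrap_key(session_id)
        wrapped = bytes(a ^ b for a, b in zip(master_secret, w))
        self.store[session_id] = (self.epoch, wrapped)

    def get(self, session_id: bytes):
        epoch, wrapped = self.store[session_id]
        if epoch != self.epoch:
            return None            # wrapping key already ratcheted away
        w = self._wrap_key(session_id)
        return bytes(a ^ b for a, b in zip(wrapped, w))

    def advance_epoch(self):
        # k' = H(k); the old epoch key is deleted, so a later server
        # compromise cannot unwrap entries cached before this point.
        self.epoch_key = hashlib.sha256(self.epoch_key).digest()
        self.epoch += 1

cache = ResumptionCache(b"\x00" * 32)
secret = bytes(range(32))
cache.put(b"sess-1", secret)
assert cache.get(b"sess-1") == secret
cache.advance_epoch()
assert cache.get(b"sess-1") is None    # old secret now unrecoverable
```

The design choice is the same ratchet trick as k' = H(k) on a session key, applied to the cache's wrapping key instead; real deployments would also bound the epoch length to the forward-secrecy window they want.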

Do you think anyone would be interested in fixing that?

Adam


Re: [cryptography] Is the NSA now a civilian intelligence agency? (Was: Re: Snowden: Fabricating Digital Keys?)

2013-06-30 Thread Adam Back

Fully agree.  I suspect the released figures showing a spike in FBI
wire-taps may be cover/laundry, indicative of the FBI receiving domestic
targeted-crime tips from the NSA.

Another vector: the UK's GCHQ reportedly has economic well-being on its
list of authorized spying motivations.  That translates to economic
espionage.  Informed political commentators seem to strongly suspect
that the US (and secondarily its Echelon partners) is conducting
economic espionage against Europe.

It seems beyond the ken and political will of national-security spies to
restrict the information collected to narrow national-security use.  Once
they slide it into law enforcement, it historically falls into use against
increasingly trivial or even arguable crimes.  We also see hints such
information is being abused for political reasons, e.g. the IRS audits.

The other aspect of this is that I don't think Americans can expect even the
most positive constitutional or legal re-evaluation and adjustment to
actually fix the problem.  It seems to me already established that
ISPs can be required to keep records for some period: e.g. GSM location and
call information for years, email bodies for shorter periods.  Therefore it
seems obvious to me that as soon as there is any legal threat to the NSA
storing its own information, they'll just get some laws passed to require the
ISPs to do it for them.  Probably they can fix it with a few leases and
contracts, and carry on as is.  The people working on this stuff at the ISPs
will already have the same security clearances as the NSA, and the NSA has
apparently already sub-contracted 70% of its budget to the private sector.
So how hard is it going to be for them to ask the ISPs and telcos to form a
privately owned telecommunications consortium that harvests and stores
information?  Apparently private-sector sub-contracting already forms part of
the legal shenanigans in the abuse of FISA.

Though I do think it is a politically useful exercise for people to press
for legal changes, it seems that with the extent of lying and manipulation,
information-related power, and scale of economic lobbying, the mil-ind
complex in the US has effectively become above US law and the constitution.

So I think the only answer is lots of crypto.  Per the cypherpunks credo:
write code not laws.

Adam

On Sun, Jun 30, 2013 at 01:30:34PM +0300, ianG wrote:

On 29/06/13 13:23 PM, Jacob Appelbaum wrote:

http://www.guardian.co.uk/world/2013/jun/17/edward-snowden-nsa-files-whistleblower

One of the most interesting things to fall out of this entire ordeal is
that we now have a new threat model that regular users will not merely
dismiss as paranoid. They may want to believe it *isn't* true or that
policy has changed to stop these things - there is a lot of wishful
thinking to be sure. Still such users will not however believe
reasonably that everyone in the world follows those policies, even if
their own government may follow those policies.



Yes, but I don't think the penny has yet dropped.

One of the things that disturbed me was the several references to how 
they deal with the material collected.  I don't think this is getting 
enough exposure, so I'm laying my thoughts out here.


There is a lot of reference to analysts poking around and deciding if 
they want that material or not, as the sole apparent figleaf of a 
warrant.  But there was also reference to *evidence of a crime* :


http://www.cnsnews.com/news/article/intelligence-chief-defends-internet-spying-program
—The dissemination of information incidentally intercepted about a 
U.S. person is prohibited unless it is necessary to understand 
foreign intelligence or assess its importance, *is evidence of a 
crime* , or indicates a threat of death or serious bodily harm.




The way I read that (and combined with the overall disclosures that 
they are basically collecting everything they can get their hands on), 
the NSA has now been de-militarised, or civilianised if you prefer 
that term: information regarding criminal activity 
is now being shared with the FBI & friends.  Routinely, albeit 
secretly and deniably.


This represents a much greater breach than anything else.  We always 
knew that the NSA could accidentally harvest stuff, and we always 
knew that they could ask GCHQ to spy on Americans in exchange for 
another favour.  As Snowden said somewhere, the American/foreigner 
thing is just a distracting tool used by the NSA to up-sell their 
goodness to congress.


What made massive harvesting relatively safe was that they never 
shared it, regardless of what it was about, unless it was a serious 
national security issue.


Now the NSA is sharing *criminal* information -- civilian 
information. To back this shift up, the information providers reveal:


http://www.counterpunch.org/2013/06/20/spying-by-the-numbers/

Apple reported receiving 4,000 to 5,000 government requests for 
information on customers in just the last six 

Re: [cryptography] skype backdoor confirmation

2013-05-24 Thread Adam Back

It seems there is this new narrative in some people's minds that all
companies backdoor everything and cooperate with law enforcement with no
questions asked - "what do you expect?"  I have to disagree strongly, to stop
this narrative displacing reality!  I've seen several people saying similar
things in this thread.  No, I say.

I think the point is not that a company could backdoor something.  We know
that companies holding information that, for whatever pre-existing reason,
may help investigations will typically be expected to hand it over with
appropriate legal checks and balances: a court order, subpoena, etc.
Sometimes their lawyers will fight it if the subpoena is ridiculously broad,
and that's not that unusual.  Sometimes there are gag orders to prevent the
fact that a subpoena was received from being disclosed to the target, or
disclosed ever.  The latter is considered fairly obnoxious.

Now and then there are rumours or claims of forced changes - e.g. that
hushmail maybe changed some code in response to a law enforcement request of
some kind.

However it is not the case that anything that could be backdoored is
backdoored.  Do you think all S/MIME email clients, all SSL clients (embedded
and browser), all SSL web servers, all VPNs are backdoored?  I seriously
doubt any of them are backdoored, in fact.  Would those taking the "what do
you expect" narrative like to try it against web servers and VPNs?

Now, web 2.0 types of things that involve social media and messages being
stored online obviously are targets for subpoenas, and don't typically
involve more than transport encryption.

With IM, most of the clients are not end2end by design - i.e., like web 2.0,
there is transport encryption from client to server, but a central server
sees all traffic.  As someone mentioned, many companies run their own server
for this reason (to avoid traffic being readable by the internet-scale IM
server operator).  Skype was claimed to be end2end secure.  The Skype
security review white paper saying so is still on their web page.  The
privacy policy just says they will hand over information they have in
response to valid legal requests, which is a non-statement; companies operate
in jurisdictions which issue legal requests.  For all we know Skype may still
be end2end secure when used with a strong password, except for uploading
URLs for some ill-thought-out malware checking.  Or not - maybe that's
happening server side; no one has taken the trouble to determine (it's easy
enough, I think: as I said, just upload lots of URLs, then the same character
count with no URLs, and compare the byte counts of the traffic flows).  The
password reset doesn't sound so good, possibly not being technically end2end,
but presumably you don't have to use that.

So anyway: no, products riddled with backdoors are not acceptable, it's not
business as usual, and we do expect better.  And if companies are
advertising end2end security, yet routinely decrypting all traffic, in
many countries that could open them up to fines and possible prosecution for
false advertising.

Adam


Re: [cryptography] skype backdoor confirmation

2013-05-22 Thread Adam Back
You know, that's the second time you have claimed Skype was not end2end
secure.  Did you read the Skype independent security review paper that Ian
posted a link to?

http://download.skype.com/share/security/2005-031%20security%20evaluation.pdf

It is clearly and unambiguously claimed that Skype WAS end-to-end secure.

If you want to claim otherwise we're gonna need some evidence.  


I know they provided certification of the user identity, but as they
provided the namespace it's not clear what else they could easily do
without exposing users to fingerprints, X.509 certificates (which are not
free) or the PGP WoT.  I don't think you can use that to make blanket
statements that it was never end2end secure.

Also you say there are now features which are incompatible.  Really?  What
features?

Adam

On Wed, May 22, 2013 at 06:57:09PM +0200, Florian Weimer wrote:

So, the review is not invalid.  And, even when Skype changes its
model, the review remains valid.


There are now features that are incompatible with the design sketched
in the report, such as user password recovery and call forwarding.

The key management never was end-to-end, and we'd view that somewhat
differently these days (I think).



Re: [cryptography] skype backdoor confirmation

2013-05-22 Thread Adam Back

I don't think your inference is necessarily correct.  With reference to the
Berson report, consider that the Skype RSA keypair was for authentication
only (authenticating the ephemeral key exchange as described in the paper).
The public RSA key is certified by Skype as belonging to your identity.  They
would be able to do an email password reset with your client generating a
new keypair and Skype recertifying it, using your knowledge of the new
password hash and ability to receive email at the registered address as
authorization.


Of course typing the password on the website (as it is now) would have some
trust-us limitations if it is not hashing the password in JavaScript.  I see
there is JavaScript, and the password reset page will tell you to enable it,
but I didn't actually see any evidence of pbkdf2/sha/md5 etc in that code;
however it has been minified, and the browser itself does support some kinds
of password hashing, so that's not definitive.  Even if it doesn't do
password hashing client side, they may claim that they don't record the
password server side, just hash and store.  But also, and my point: it's hard
to say at this point how password reset worked at the time the report was
written (2005).

Also it can be a defense that, if it's a choice between losing a user who
prefers convenience over security and offering a web-based password reset,
offering the reset as an option is reasonable.  A user who cares can use a
strong password entered only in the client, and back it up.

Adam

On Wed, May 22, 2013 at 07:28:30PM +0200, Florian Weimer wrote:

* Adam Back:


If you want to claim otherwise we're gonna need some evidence.


https://login.skype.com/account/password-reset-request

This is impossible to implement with any real end-to-end security.



Re: [cryptography] skype backdoor confirmation

2013-05-22 Thread Adam Back

Indeed, it was understood that Skype's coding was described as akin to a
polymorphic virus.  However it was also considered that this was for
business reasons: to make it difficult for competing products to
interoperate at the codec and protocol level.

I notice that those two papers do NOT claim that Skype learns the
communications encryption session key as part of the protocol; rather, as
I was saying, its only ability stemmed from being the CA issuing identity
certificates, and it could therefore construct two fake certificates and
somehow persuade the p2p network to route the real users to the MITM
(holding a fake cert for both parties).

(The session key mentioned as part of the server-auth protocol is used to
encrypt the hashed password and ephemeral RSA key used for authentication -
not, as far as I understand, the traffic; that happened end2end in a
separate protocol.)

Adam

On Wed, May 22, 2013 at 07:41:38PM +0200, Dominik Schürmann wrote:

-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi folks,

we recently wrote a small section about skype with some references:
http://sufficientlysecure.org/uploads/skype.pdf

Interesting references (from 2005, 2006):
http://www.ossir.org/windows/supports/2005/2005-11-07/EADS-CCR_Fabrice_Skype.pdf

http://secdev.org/conf/skype_BHEU06.pdf

In my understanding it provided some sort of minimum end-to-end
security in the past, but it could never be verified as it is a highly
obfuscated protocol.

Regards
Dominik

On 22.05.2013 19:28, Florian Weimer wrote:

* Adam Back:


If you want to claim otherwise we're gonna need some evidence.


https://login.skype.com/account/password-reset-request

This is impossible to implement with any real end-to-end security.
___ cryptography
mailing list cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.12 (GNU/Linux)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org/

iQEcBAEBAgAGBQJRnQNPAAoJEHGMBwEAASKC5woH/3RJCrM4mXhvFwAHCGf4Hdpo
dtP5NkZNHTrpTT2Gj6ECbfbD6GZLg+RxeBimDiVEpIovW9lyB/T3bV/yBqkE7ZDV
xdFYGMH5+ZBxpg8q3K8D6hL1maLSV7DWRyye5z45/DVmLPe1Sax3Dh7XHOn1k0k8
VI3ck/YLTaOIBhaifc7qXBAV8gWs/GjCpr+o3+S23SLLTWV8Qla2nucwCdtKVQAM
LWMH5I0mBMssVF3dKkPvGtinoJ51gqiZb19z+2DwNucRPHOo2+kZNFpjafNKqjsh
1TGU1d/DmUsDQsMeUoprRG2yt6hORIb2ZYgG49JzuQa7Zya3TIzhGsfIjN5Nk8M=
=yIS5
-END PGP SIGNATURE-


Re: [cryptography] skype backdoor confirmation

2013-05-18 Thread Adam Back

Actually I think that was the point: as far as anyone knew, and per the last
published semi-independent review (from some years ago, on the crypto list
as I recall), it indeed was end2end secure.  Many IM systems are not end2end,
so for Skype to benefit from the impression that it is still end2end secure
while actually not being so is the focus of this thread.

Adam

On Sat, May 18, 2013 at 06:52:58PM +0200, Florian Weimer wrote:

As far as I know, Skype is e2e secure.


It hasn't got end-to-end key management, so it can't be end-to-end
secure against the network operator.



Re: [cryptography] Regarding Zerocoin and alternative cryptographic accumulators

2013-05-05 Thread Adam Back

The post below didn't elicit any response, but the poster references an
interesting though novel (and therefore possibly risky) alternative
accumulator without the need for a centrally trusted RSA key generator
(which is anathema to a distributed-trust system), or alternatively the
zero-trust but very inefficient RSA UFOs mentioned in Green's paper.  Lipmaa
is a well-known researcher, and Lipmaa's proposed novel accumulator scheme
does appear to offer a simultaneously efficient and zero-trust alternative
to the optimized Benaloh accumulator used by zerocoin, and by Sander and
Ta-Shma's auditable ecash that it is based on.

P.S. I notice Matthew Green's address was mistyped by the parent poster,
so I have fixed that.

Adam

Sat, Apr 27, 2013 at 05:25:02PM +0400

[...]

I have recently read the Zerocoin paper which describes a very
interesting enhanced anonymity solution for bitcoin-like blockchain
based cryptocurrencies  ( those unfamiliar can check it out here
http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf )

The paper specifically states: "While we were not able to find an
analogue of our scheme using alternative components, it is possible
that further research will lead to other solutions. Ideally such an
improvement could produce a drop-in replacement for our existing
implementation"

However, I've come across an alternative cryptographic accumulator
that does not require trusted setup: the Lipmaa Euclidean-rings based
design. ( http://www.cs.ut.ee/~lipmaa/papers/lip12b/cl-accum.pdf )
From my superficial assessment, it appears fitting for a zerocoin like
design, but I find it quite likely that I am missing the obvious.

The question thus is: what exactly prevents Lipmaa accumulator from
being used as aforementioned drop-in replacement ?

Thank you very much in advance.



Re: [cryptography] Regarding Zerocoin and alternative cryptographic accumulators

2013-05-05 Thread Adam Back

[More address typos (it's contagious!), so resending]

The post below didn't elicit any response, but the poster references an
interesting though novel (and therefore possibly risky) alternative
accumulator without the need for a centrally trusted RSA key generator
(which is anathema to a distributed-trust system), or alternatively the
zero-trust but very inefficient RSA UFOs mentioned in Green's paper.  Lipmaa
is a well-known researcher, and Lipmaa's proposed novel accumulator scheme
does appear to offer a simultaneously efficient and zero-trust alternative
to the optimized Benaloh accumulator used by zerocoin; and Sander and
Ta-Shma's auditable ecash, which zerocoin is based on, also used the Benaloh
accumulator.

Adam

Sat, Apr 27, 2013 at 05:25:02PM +0400

[...]

I have recently read the Zerocoin paper which describes a very
interesting enhanced anonymity solution for bitcoin-like blockchain
based cryptocurrencies  ( those unfamiliar can check it out here
http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf )

The paper specifically states: "While we were not able to find an
analogue of our scheme using alternative components, it is possible
that further research will lead to other solutions. Ideally such an
improvement could produce a drop-in replacement for our existing
implementation"

However, I've come across an alternative cryptographic accumulator
that does not require trusted setup: the Lipmaa Euclidean-rings based
design. ( http://www.cs.ut.ee/~lipmaa/papers/lip12b/cl-accum.pdf )
From my superficial assessment, it appears fitting for a zerocoin like
design, but I find it quite likely that I am missing the obvious.

The question thus is: what exactly prevents Lipmaa accumulator from
being used as aforementioned drop-in replacement ?

Thank you very much in advance.



[cryptography] bitcoin stats (Re: OT: Skype-Based Malware Forces Computers into Bitcoin Mining)

2013-04-18 Thread Adam Back

Vs the other things malware does, it also seems like a much more benign
payload - it just uses a bit more electricity!  I imagine they throttle it
down dynamically when the user actually does things, to hide the computer
slowing down.  (Though typically you won't notice with GPU mining unless you
are playing video games.)  But you may notice the increased GPU fan noise and
heat on mid/high-range cards.

Bitcoin mining seems to be using 40MW of power (estimate); that's 0.025% of
the amount used by US households (30,000 US households' worth - obviously
bitcoin mining is global, it's just a comparative stat).

That's a lot of 55.1-bit hashcash mining those miners are doing!  Bits are,
to my thinking, a more human-readable measure than the hashcash difficulty
parameter, and are visually easy to confirm from the hash output.

http://blockexplorer.com/block/01c7e0186b24825b3f2973ef6e8556bbc5f6a0aaadf363114fd1

you can see the SHA256 output

01c7e0186b24825b3f2973ef6e8556bbc5f6a0aaadf363114fd1

has 13 hex 0 nibbles = 52 bits, plus the next digit is a one (0001 in
binary), so that's three more leading zero bits - so it's a 55-bit hash.

And the difficulty parameter is measured in units of 2^32 hashes
(MAX_UINT == ~4 billion), so you can get back to it from 55.1 - 32 = 23.1 via
2^23.1 (in bc: e(l(2)*23.1)) ~ 9 million.  Much better :)  And converting GPU
card mining stats into bits is also easy: 400Mh/s (AMD 7870) = 28.6 bits.  So
that means I am 55.1 - 28.6 = 26.5 bits short (I'd need 2^26.5 such cards to
match the network hash rate).  And you can also recreate the network hashrate
as 2^55.1/600 (600 seconds = 10 mins per block), or subtract 9.2 since
log2(600) = 9.22, so 55.1 - 9.2 = 45.9, i.e. 2^45.9 hashes/sec; subtract 40
for terahash/sec: 2^5.9 = ~59.7 TH/s.  (Except some web stats misreport
network hashrate in units of 1000^4 rather than 1024^4.)
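The log-scale arithmetic above can be checked mechanically; here is a sketch reproducing the conversions (the ~9 million difficulty and 400 Mhash/s figures are the 2013-era numbers quoted in the post):

```python
import math

# Reproduce the log-scale "bits" arithmetic for Bitcoin mining.
# The difficulty parameter counts units of 2^32 hashes.
difficulty = 9_000_000                      # ~9 million at the time
bits = 32 + math.log2(difficulty)           # ~55.1-bit hashcash target
assert 55.0 < bits < 55.3

gpu_rate = 400e6                            # AMD 7870, ~400 Mhash/s
gpu_bits = math.log2(gpu_rate)              # ~28.6 bits
shortfall = bits - gpu_bits                 # ~26.5 bits short of network

# Network hash rate: one ~55.1-bit hash found per 600-second block,
# so rate = 2^55.1 / 600; in bits, subtract log2(600) ~ 9.2, then
# subtract 40 to express it in terahashes/second.
network_rate = 2 ** bits / 600
terahash_bits = bits - math.log2(600) - 40
assert abs(terahash_bits - 5.9) < 0.2       # i.e. ~2^5.9 ~ 60 TH/s
```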

Anyway I think humans work better with massive numbers in the log scale.

Adam

On Wed, Apr 17, 2013 at 10:27:01PM -0400, Kevin W. Wall wrote:

You know Bitcoin must have arrived when this is going on.
(For that matter, I even heard Bitcoin mentioned on NPR a few
days ago.)

As reported on IEEE Computer Society's _Computing Now_
news site:
http://www.computer.org/portal/web/news/home/-/blogs/skype-based-malware-forces-computers-into-bitcoin-mining



Re: [cryptography] summary of zerocoin (Re: an untraceability extension to Bitcoin using a combination of digital commitments, one-way accumulators and zero-knowledge proofs, )

2013-04-18 Thread Adam Back

Someone asked me offline for the RSA UFO reference, so I am posting my reply
here.  (RSA UFOs are a way to generate an RSA modulus, without anyone ever
knowing the private key, in a trustworthy way.)

Its ref 26 in the zerocoin paper:

http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf

T. Sander, “Efficient accumulators without trapdoor extended
abstract,” in Information and Communication Security, vol.
1726 of LNCS, 1999, pp. 252–262.  citeseer has it:

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.4015

Btw that's presumably not coincidentally the same Sander as in Sander &
Ta-Shma, who published auditable, distributed, NIZKP-based anonymous
electronic cash in 1999.


http://www.cs.tau.ac.il/~amnon/Papers/ST.crypto99.pdf

You've got to say that paper is really close to the distributed, publicly
auditable aspects of bitcoin.

They were aiming for privacy, auditability and distribution, so it's really
close to bitcoin - bitcoin drops the blinding/ZKP-based privacy target and
adds hashcash distributed mining and the b-money (Wei Dai) or bitgold (Nick
Szabo) inspired inflation control; later, independent exchanges sprang up so
there was no direct banking interface.


The only aspect of privacy in bitcoin is that no bitcoin address requires
identity, so you are pseudonymous, and can consequently create many addresses
(the clients seem to encourage this by design).  The exchanges of course
require identification for wire transfers, but there is some ambiguity
between "spent" and "transferred to another address you own".  Not a huge
amount though, and people don't reason well about statistics, as Shamir et al
showed in their quantitative analysis of the full bitcoin transaction graph.

http://eprint.iacr.org/2012/584.pdf

Adam

On Wed, Apr 17, 2013 at 11:49:00PM +0200, Adam Back wrote:

It appears to use the cut-and-choose technique to create a non-interactive
ZKP on a one-way accumulator (from Camenisch & Lysyanskaya).  That results in
relatively big ZKPs which impact bitcoin scalability; it doesn't say how big
they actually are, but for a good security margin I'm guessing something like
128 individual proofs, which can get kind of heavy.  They say it's likely to
exceed bitcoin's 10kb per-tx limit...



[cryptography] summary of zerocoin (Re: an untraceability extension to Bitcoin using a combination of digital commitments, one-way accumulators and zero-knowledge proofs, )

2013-04-17 Thread Adam Back

It appears to use the cut-and-choose technique to create a non-interactive
ZKP on a one-way accumulator (from Camenisch & Lysyanskaya).  That results in
relatively big ZKPs which impact bitcoin scalability; it doesn't say how big
they actually are, but for a good security margin I'm guessing something like
128 individual proofs, which can get kind of heavy.  They say it's likely to
exceed bitcoin's 10kb per-tx limit.

(People may recall cut-and-choose from Chaum's original blind-RSA-signature
based proposal to support offline-spendable ecash with identity exposure as
the penalty: the owner's identity would be embedded in the coin via cut and
choose, and then revealed if they double spend, as there is some recipient
choice within the spend that would reveal two shares.)  Stefan Brands' ecash,
by contrast, with the more flexible discrete-log representation problem,
provided a direct (non-cut-and-choose) offline double-spend deterrence
method.

(Cut and choose is a simple idea: if you have to commit first and reveal one
of two options, you have a 1 in 2 (0.5 or 2^-1) chance of cheating (e.g.
putting someone else's identity in the coin, to be revealed during an offline
double spend), but if you do it 128 times you have a 2^-128 chance of
cheating.  The same approach, plus a demonstrably fair way to generate
witnesses, is a general method for constructing NIZKPs from ZKPs.)
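A toy sketch of that soundness argument (the commit/challenge names are illustrative; this shows the probability calculation and one commit-reveal round, not the zerocoin protocol itself):

```python
import hashlib
import secrets

# Cut-and-choose soundness sketch: the prover commits to two values
# per round and is challenged to open one.  A prover who cheated in a
# round survives the challenge only with probability 1/2, so n rounds
# drive the overall cheating odds down to 2^-n.

def commit(value: bytes, nonce: bytes) -> bytes:
    return hashlib.sha256(nonce + value).digest()

def cheating_survival_probability(rounds: int) -> float:
    return 2.0 ** -rounds

assert cheating_survival_probability(1) == 0.5
assert cheating_survival_probability(128) == 2.0 ** -128

# One honest round, interactively:
nonce0, nonce1 = secrets.token_bytes(16), secrets.token_bytes(16)
shares = (b"share-0", b"share-1")              # e.g. identity shares
commitments = (commit(shares[0], nonce0), commit(shares[1], nonce1))
challenge = secrets.randbelow(2)               # verifier's coin flip
opened = (shares[challenge], (nonce0, nonce1)[challenge])
assert commit(opened[0], opened[1]) == commitments[challenge]
```

Making the verifier's coin flips a hash of the commitments (the "demonstrably fair way to generate witnesses" mentioned above, i.e. Fiat-Shamir) is what turns the interactive version into a NIZKP.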

Also, in principle you have to construct a NIZKP all the way back to the
first zerocoin, and while the accumulator is constructed incrementally and
stored in each block, bitcoin requires public auditability, so the verifier
has to go check the corresponding spent list, which grows.  (The ZKP is that
the zerocoin is a member of the set of zerocoins previously exchanged for
bitcoin, and is not a member of the spent list.  The expensive part is the
former; the latter is just a bit list of spent zerocoin serial numbers.)
They do suggest just trusting the last block's accumulator as a starting
point, but I think you can only do that if the zerocoin was issued after
that, and the effectiveness of that optimization is in some conflict with
the anonymity-set requirement to hold the zerocoin for a while, to at least
have a decent number of coins you're mixing with.

The accumulator allows faster NIZKP of set membership than proving with a
bit list of OR operations, but the cut-and-choose itself isn't a direct
NIZKP: it involves 128 rounds, or whatever the security parameter is.


The accumulator uses a trusted party to generate an RSA key and discard the
private key.  (Though happily there are RSA UFOs - a way to generate an RSA
modulus without anyone ever knowing the private key, in a trustworthy way.)

So that's all pretty cool, and I think an efficiency improvement over
previous NIZKPs of set (or non-set) membership, but a bit heavy.

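For intuition, a toy sketch of the RSA accumulator primitive under discussion (tiny, insecure parameters; whoever picks the modulus knows its factorization, which is exactly the trusted-setup problem the RSA-UFO and Lipmaa alternatives aim to remove):

```python
# Toy RSA accumulator sketch (the Benaloh-style primitive zerocoin
# builds on).  Parameters are illustrative and far too small to be
# secure; the modulus's factors are known to whoever wrote this line.
N = 104729 * 1299709             # toy RSA modulus p*q (NOT secure)
g = 65537                        # accumulator base

members = [101, 103, 107]        # elements (primes) being accumulated

def accumulate(elements):
    # acc = g^(product of elements) mod N, built incrementally.
    acc = g
    for x in elements:
        acc = pow(acc, x, N)
    return acc

acc = accumulate(members)

# Membership witness for 103: accumulate everything *except* 103.
# The witness is short regardless of how many members there are.
witness = accumulate([x for x in members if x != 103])
assert pow(witness, 103, N) == acc       # verifier's membership check
```

The appeal for zerocoin is that the accumulator value and each witness stay constant-size as the set grows; the downside, as noted above, is that whoever generated N could forge witnesses unless the private key was verifiably destroyed.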

Btw, about payment privacy (or lack thereof in bitcoin), an amusing link:
http://www.listentobitcoin.com/ gives an idea of the bitcoin transaction
volume and transaction sizes, highlighting visually the extremely open
nature of the realtime transaction history.  I saw a massive $640k
transaction float past when I looked.  Soothing sounds it makes, too.
(Bitcoin is private only in the self-chosen pseudonym sense, and is
otherwise an open book by design.)

Adam

On Sat, Apr 13, 2013 at 01:40:36AM +0300, ianG wrote:

Steve Bellovin posted this on another list, hattip to him.

http://www.forbes.com/sites/andygreenberg/2013/04/12/zerocoin-add-on-for-bitcoin-could-make-it-truly-anonymous-and-untraceable/

For those following Bitcoin this is news.  Matthew Green writes:

   For those who just want the TL;DR, here it is:

   Zerocoin is a new cryptographic extension to Bitcoin that (if 
adopted) would bring true cryptographic anonymity to Bitcoin. It 
works at the protocol level and doesn't require new trusted parties 
or services. With some engineering, it might (someday) turn Bitcoin 
into a completely untraceable, anonymous electronic currency.


http://blog.cryptographyengineering.com/2013/04/zerocoin-making-bitcoin-anonymous.html



(iang adds:)

Bitcoin is pseudonymous but traceable, which is to say that all 
transactions are traceable from identity to identity, but those 
identities are pseudonyms, being (hashes of) public keys.  This is 
pretty weak.  In contrast, Chaumian blinding was untraceable but 
typically identified according to an issuer's regime.  Because 
Chaumian mathematics required a mint, this devolved to 
trusted/identified, so again not as strong as some hoped.


Bitcoin fixed this 'flaw' by decorporating the mint into an 
algorithm.  This suggests a new axis of distributed.  But Bitcoin 
lost the untraceability in the process, thus rendering it a rather 
ridiculous attempt at privacy, as the entire graph was on display.  
Bitcoin is more or less worse at privacy than Chaumian cash ever was.


The holy grail in Chaumian times was untraceable & unidentifiable, to 
which Bitcoin added distributed.  This paper by Miers, Garman, Green 
& Rubin

Re: [cryptography] an untraceability extension to Bitcoin using a combination of digital commitments, one-way accumulators and zero-knowledge proofs,

2013-04-13 Thread Adam Back

Also without having read the article, but having read the blog post by one
of the authors: as Ian G said, zerocoin appears to provide payment privacy
and public auditability while retaining the distributed setting.

However the publicly auditable payment privacy comes from a ZKP of
non-set-membership (from the 1998 paper by Sander & Ta-Shma, and they
reference that also), plus bitcoin's hashcash computational consensus
enforced model of distribution.  So far I don't see something new other
than assembling the two parts.  I commented on this list on this combined
approach a few years back:

http://www.mail-archive.com/cryptography@randombit.net/msg00781.html
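(As an aside, the hashcash cost function that bitcoin's consensus builds on amounts to finding a nonce such that a hash over the data falls below a target; this is my simplified sketch, not the actual hashcash stamp format or bitcoin's double-SHA header hashing.)

```python
import hashlib

def pow_solve(data: bytes, bits: int) -> int:
    """Find a nonce so sha256(data || nonce) has `bits` leading zero bits."""
    target = 1 << (256 - bits)   # hash value must fall below this
    nonce = 0
    while True:
        h = hashlib.sha256(data + str(nonce).encode()).digest()
        if int.from_bytes(h, "big") < target:
            return nonce
        nonce += 1

def pow_verify(data: bytes, nonce: int, bits: int) -> bool:
    """Checking a claimed solution costs a single hash."""
    h = hashlib.sha256(data + str(nonce).encode()).digest()
    return int.from_bytes(h, "big") < (1 << (256 - bits))

nonce = pow_solve(b"block-header", 16)        # ~2^16 hash attempts on average
print(pow_verify(b"block-header", nonce, 16)) # True
```

The asymmetry is the whole trick: solving costs an expected 2^bits hashes, verifying costs one, which is why pushing the expensive zerocoin proof work onto miners is attractive.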

My other comment then, which I don't know if zerocoin incorporated, was
that as the ZK non-set-membership proof (in the set of spent coins) is
itself expensive, maybe that work should be incorporated into the
computational work of the bitcoins.  If one could successfully do that work
incorporation, the work would be less of an issue as the miners would do
it, and the mining network has more power than the top 100 supercomputers
combined (or so I read!)  However it would be useful for individuals to
easily have the power to categorically verify from scratch that a given
zerocoin is valid.  Public audit speed does matter.

Maybe they have some ZKP set membership optimizations, and the concrete
protocol plus prototype implementation is the point.

Adam

On Sat, Apr 13, 2013 at 01:40:36AM +0300, ianG wrote:

Steve Bellovin posted this on another list, hattip to him.

http://www.forbes.com/sites/andygreenberg/2013/04/12/zerocoin-add-on-for-bitcoin-could-make-it-truly-anonymous-and-untraceable/

For those following Bitcoin this is news.  Matthew Green writes:

   For those who just want the TL;DR, here it is:

   Zerocoin is a new cryptographic extension to Bitcoin that (if 
adopted) would bring true cryptographic anonymity to Bitcoin. It 
works at the protocol level and doesn't require new trusted parties 
or services. With some engineering, it might (someday) turn Bitcoin 
into a completely untraceable, anonymous electronic currency.


http://blog.cryptographyengineering.com/2013/04/zerocoin-making-bitcoin-anonymous.html



(iang adds:)

Bitcoin is pseudonymous but traceable, which is to say that all 
transactions are traceable from identity to identity, but those 
identities are pseudonyms, being (hashes of) public keys.  This is 
pretty weak.  In contrast, Chaumian blinding was untraceable but 
typically identified according to an issuer's regime.  Because 
Chaumian mathematics required a mint, this devolved to 
trusted/identified, so again not as strong as some hoped.


Bitcoin fixed this 'flaw' by decorporating the mint into an 
algorithm.  This suggests a new axis of distributed.  But Bitcoin 
lost the untraceability in the process, thus rendering it a rather 
ridiculous attempt at privacy, as the entire graph was on display.  
Bitcoin is more or less worse at privacy than Chaumian cash ever was.


The holy grail in Chaumian times was untraceable & unidentifiable, to 
which Bitcoin added distributed.  This paper by Miers, Garman, Green 
& Rubin suggests untraceable & pseudonymous & distributed is 
possible:


http://spar.isi.jhu.edu/~mgreen/ZerocoinOakland.pdf

(I haven't as yet read the paper so there may be killer details in there.)


iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Cypherpunks mailing list

2013-03-25 Thread Adam Back

Yeah, but that is basically zero traffic, and I suspect in large part
because it's a silly domain that people who dislike inviting their addition
to a watch-list will avoid.

Maybe someone with a more neutral domain could try it - or a cypherpunks.*
domain if they have a listserv handy.

Adam

On Mon, Mar 25, 2013 at 08:59:43AM +0100, Eugen Leitl wrote:

On Mon, Mar 25, 2013 at 12:46:49AM -0700, Tony Arcieri wrote:

The original Cypherpunks mailing list seems dead.

Is there any list that is its successor?


De facto it's cypherpu...@al-qaeda.net

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Cypherpunks mailing list

2013-03-25 Thread Adam Back

On Mon, Mar 25, 2013 at 05:13:57PM +0100, Moritz wrote:

On 25.03.2013 09:25, Adam Back wrote:
because its a silly domain that people who dislike inviting their 
addition to a watch-list will avoid.


Isn't exactly that a nice property of a cypherpunks list?


No it is not; it is a way to persuade people to leave, or not join, the
listserv.


Maybe someone with a more neutral domain could try it - or a cypherpunks.*
domain if they have a listserv handy.


Cypherpunks is a distributed mailing list. A subscriber can subscribe
to one node of the list and thereby participate on the full list. Each
node (called a Cypherpunks Distributed Remailer [CDR], although they are
not related to anonymous remailers) exchanges messages with the other
nodes in addition to sending messages to its subscribers.


Yes I know, but that badly named listserv is the last CDR.

Adam
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Cypherpunks mailing list

2013-03-25 Thread Adam Back

Cyberpunk, cypherpunk, coderpunks... are all fine; I think people have
understood the etymology of those terms after a few decades, negative
connotations of 'punk' to some notwithstanding.  A cypherpunk is a term for
an area of interest or philosophy, with a dictionary definition at this
point.

But my point actually was b...@al-qaeda.net???  Come on, that is watch-list
bait and an invitation NOT to join the list, whatever it is about.

Adam

On Mon, Mar 25, 2013 at 06:18:14PM +0100, Eugen Leitl wrote:

On Mon, Mar 25, 2013 at 05:50:18PM +0100, Adam Back wrote:


Isn't exactly that a nice property of a cypherpunks list?


No it is not, it is a way to persuade people to leave, or not join the
listserv.


We have to agree to disagree on that one. A 'punk' of any
kind will tend to thumb his nose at authorities.
If they consider the name annoying, so much the better.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] msft skype IM snooping stats PGP/X509 in IM?? (Re: why did OTR succeed in IM?)

2013-03-24 Thread Adam Back

Ian wrote:
Are we saying then that the threat on the servers has proven so small 
that in practice nobody's bothered to push a persistent key 
mechanism? Or have I got this wrong, and the clients are doing p2p 
exchange of their ephemeral keys, thus dispersing the risk?


It's been a while since I used pidgin OTR via the plugin, but I suspect it
would warn you if keys change unexpectedly (ssh key-caching style).  There
might also be an ADH fingerprint or something.  Maybe someone who is
actively using it or knows how it works could comment.  But otherwise you
would be very right that the chat server would actually be definitionally
placed to conduct MITMs.  An option to PGP-sign the ADH would be nice.

IMHO, it's not Microsoft that has ever been special in this respect.  
It is all large companies that have a large invasive government. 
Unfortunately, once a company has made its bed in a country, the side 
deals are inevitable.


Shades of hushmail backdooring.  It seems a very ethically dubious concept
to me that a service with a specific privacy policy could be required to
modify its code to install a backdoor, and/or not talk about it.
Personally I do not consider this type of arm-twisting to be consistent
with an open democratic society.

Anyway the obvious defense is to design protocols that are end-to-end
secure, not vulnerable to server-based backdoors (including CA
malfeasance), and open source, so that client backdoors can be more easily
detected also.

My prediction for the list is that detectable CA-based and other MITMing
will become more prevalent and brazen.  I.e. the climate will get to where
they feel they don't have to worry too much about it being detectable.
Think e.g. China and Iran, but with the US following their lead.


And clearly there are plenty of people with very legitimate reasons to
hide; given the levels justice has stooped to these days in its legal
treatment of activists (even green activists, anti-financial-crime and
corporate-ethics activists, whistleblowers) - western countries are
slipping backwards in terms of transparency and justice.



And people like us.

https://www.noisebridge.net/pipermail/noisebridge-discuss/2013-March/035200.html


I'd kind of forgotten about that - maybe I dimly remember reading it,
though it sounds a bit paranoid - but it seems like that guy narrowly
avoided becoming another Andrew Auernheimer (Weev):

http://appleinsider.com/articles/13/03/18/hacker-involved-in-att-ipad-3g-e-mail-breach-sentenced-to-41-months-in-jail

41 months for pointing out to a journalist that AT&T had an unprotected API
allowing iPhone accounts to be identified.  More CFAA idiocy.

I guess "don't live in the US" is one partial defense.

Lesson for now, until Aaron's law can undo the capricious stupidity: don't
probe servers unless you are asked to by the owner with written permission,
or probe over Tor and release your findings to journalists via anonymous
remailers.  Dangerous times to be a security researcher, for sure.

It could be that you might get similar issues even for non-network things -
e.g. reverse engineering a protocol and breaking it?  Probably most
click-through licenses also forbid such things.  Obviously there have been
various abuses of the DMCA which were not actually DRM related, but maybe
there is scope even beyond that for ignoring anti-security-testing clauses
in click-through licenses.

It encourages the ostrich and PR-denial approach to security flaws.
Corporates will think they can achieve security via the corporate legal
machinery, and US justice will aggressively abuse the CFAA to suppress
flaws, to avoid embarrassment.  (And probably not bother fixing them
either, leading to the actual security they ought to care about going
unsecured - government-sponsored and organized criminal activities
exploiting the flaws for espionage or illicit profit!)

Adam
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] msft skype IM snooping stats PGP/X509 in IM?? (Re: why did OTR succeed in IM?)

2013-03-23 Thread Adam Back

Was there anyone trying to use OpenPGP and/or X.509 in IM?

I mean, I know many IM protocols support SSL, which itself uses X.509, but
that doesn't really meaningfully encrypt the messages in a privacy sense,
as they flow in plaintext through the chat server with that model.

btw is anyone noticing that skype is apparently able to eavesdrop on skype
calls, now that microsoft coded themselves in a central backdoor?  This was
initially rumoured, then confirmed somewhat by a Russian police statement
[1], then confirmed by microsoft itself in its law enforcement requests
report.  Now publicly disclosed law enforcement request reports are a good
thing, started by google, but clearly those requests are getting info or
they wouldn't be submitting them by the tens of thousands.

http://www.microsoft.com/about/corporatecitizenship/en-us/reporting/transparency/

75,000 skype-related law enforcement requests, 137,000 accounts affected
(each call involving two or more parties).

You have to wonder, with that kind of mentality at microsoft (to
intentionally insert themselves into the calls, gratuitously, when it
supposedly wasn't previously architected to allow that under skype's
watch), what other nasties they've put in.  E.g. routine keyword scanning?
Remote monitoring (turn on microphone, camera)?  A remote backdoor and
rifling through files on the user's computer?  The source is more than
closed: it's coded like a polymorphic virus, with extensive
anti-reverse-engineering features, so it would be rather hard to tell what
all it is doing, and given the apparent lack of end-to-end security,
basically impossible to tell what they are doing in their servers.

I think it's past time people considered switching to another IM client: an
open source one with p2p-routed traffic and/or end-to-end security,
preferably with some resilience to X.509 certificate authority based
malfeasance.

I have nothing in particular to hide, but this level of aggressive,
no-warrant, mass-scale fishing is not cricket.  They are no doubt hoovering
it all up to store in those new massive Utah spook data centers, in case
they want to do some post-hoc fishing also.

And clearly there are plenty of people with very legitimate reasons to
hide; given the levels justice has stooped to these days in its legal
treatment of activists (even green activists, anti-financial-crime and
corporate-ethics activists, whistleblowers) - western countries are
slipping backwards in terms of transparency and justice.

Adam

[1] http://www.itar-tass.com/en/c142/675600.html

On Sat, Mar 23, 2013 at 01:36:34PM +, Ben Laurie wrote:

On 23 March 2013 09:25, ianG i...@iang.org wrote:

Someone on another list asked an interesting question:

 Why did OTR succeed in IM systems, where OpenPGP and x.509 did not?


Because Adium built it in?





(The reason this is interesting (to me?) is that there are not so many
instances in our field where there are open design competitions at this
level.  The results of such a competition can be illuminating as to what
matters and what does not.  E.g., OpenPGP v. S/MIME and SSH v. secure telnet
are two such competitions.)

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Interesting Webcrypto question

2013-03-03 Thread Adam Back
The notion of export-restricting open source software is utterly
ludicrous.  Any self-declaration click-through someone might implement can
be clicked through by anyone, from anywhere, and I presume someone from an
embargoed country is more worried about their own country's laws than US
laws - to the extent that it is apparently illegal in the US to ignore site
policies (which itself is stupid, as the Swartz case demonstrates).

In fact most countries that are likely to be on an embargo list are
probably so repressive they don't allow encryption for their subjects
anyway.  If the government of an embargoed country wants a piece of
software, you can be damn sure a click-through isn't going to stop them.
Also the exemptions and conflicts are getting confusing: in some cases the
USG has actually funded encryption software for VPN tunneling targetted at
the regimes of a very likely overlapping set of countries that it is
embargoing.  I guess we want their citizens to have encryption to tunnel
out, but not their government nor arms manufacturers.

Governments and most corporations can't seem to keep the Chinese from bulk
downloading all their firewalled, restricted secrets or IP - never mind
stuff that is available for open download by design!

I guess they never heard of VPNs and proxies.  If everyone and his dog can
stream movies from any country-IP-restricted service, I dare say they can
download any bits they care to with zip effort.

You know, I did hear it is also the law that hackney carriages (aka taxi
cabs) in London must carry a fresh bale of straw; that makes about as much
sense as open source and jscript crypto export restrictions in an internet
world.

It does make a lot of sense not to sell embargoed countries physical
weaponry.  (Unless, I guess, the West has just flip-flopped sides on the
embargoed country and the newly installed dictator is now our dictator;
then the mil-industrial complex will be glad to have a clearance sale of
previous-gen old-stock mil-hardware.)

Well anyway, you can see the logic of not offering assistance of any form,
paid or free, to these embargoed orgs and countries, but trying to censor
freely downloadable information is just dumb.  Maybe it would be more
productive, in the current USG info-war mentality, to block and disconnect
embargoed orgs' and countries' government sites from the internet in
general (but not their citizens, who presumably we encourage to read
international news etc).  But that obviously is also at best going to be a
minor irritant to them - they can just use consumer-labeled IPs and
tunnels.

Adam

On Mon, Mar 04, 2013 at 11:21:04AM +1300, Peter Gutmann wrote:

Arshad Noor arshad.n...@strongauth.com writes:


Open-source crypto that is downloadable from public-sites has a special
designation in the EAR; you only need to notify the BIS and provide the
download URL.


Controls for export to the Twhatever-it-is-this-week countries override the
5D002 exception.  In other words there's an exception to the exception (or in
computer security terms the deny MAC overrides the allow MAC).  This is why I
specifically mentioned countries like North Korea and Iran.

Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitmessage

2013-02-20 Thread Adam Back

Seems to me neither of you read the reference I gave:

I (Adam) wrote:

It is tricky to get forward secrecy for store-and-forward messaging [2],
but perhaps you could incorporate rekeying into your protocol in some
convenient way.
...
[2] http://cypherspace.org/adam/nifs/


Not impossible, just not obvious.  You can use any ID-based scheme as a
non-interactive forward secrecy scheme (without the negative connotations
of an ID-based scheme: there is no central server able to decrypt your
files in the non-interactive FS use of the same crypto).

For example the Boneh-Franklin Weil-pairing-based scheme of Katz is the
main (only?) secure and efficient-setup ID-based scheme.


Personally I would super-encrypt with something conventional, in case the
Weil pairing turned out not to be secure.

Also there are a number of things you can do operationally to send keys
with messages or distribute keys; Ian Brown & I wrote an RFC with
extensions to PGP for forward secrecy:

http://www.cypherspace.org/openpgp/pfs/openpgp-pfs.txt
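One simple way to get the per-message rekeying described above (an illustrative sketch of mine, not the scheme from that document) is a one-way hash ratchet: each side derives the next message key by hashing the current one and deleting the old key, so a later compromise cannot be unwound to read earlier messages.

```python
import hashlib

def ratchet(key: bytes) -> bytes:
    """Derive the next message key; the old key is then discarded,
    giving forward secrecy (sha256 can't be run backwards)."""
    return hashlib.sha256(b"ratchet" + key).digest()

# both parties start from a shared secret, established however you like
k_alice = k_bob = hashlib.sha256(b"initial shared secret").digest()

for _ in range(3):            # ratchet once per message exchanged
    k_alice = ratchet(k_alice)
    k_bob = ratchet(k_bob)

print(k_alice == k_bob)       # True: both sides stay in sync
```

This only gives backward security; to also protect future messages after a compromise you need fresh entropy mixed in, e.g. the piggybacked DH exchanges discussed in this thread.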

Adam

On Wed, Feb 20, 2013 at 11:56:59AM +1000, James A. Donald wrote:

On 2013-02-20 6:21 AM, Jonathan Warren wrote:


It is tricky indeed. The handshaking necessary to set up the session key could 
piggyback on the first couple messages that users send to one another although 
those first several messages would not be forward-secret. I suppose that the 
session key could then be replaced with each message sent: The sender of a 
message would keep track of two session keys- one that the other party used 
previously (so that you can receive consecutive replies) and a new one that you 
are encouraging them to use on the next message.  I think it would work.


If store and forward, there cannot be forward secrecy.

Suppose that human readable messages, messages that might contain 
important secrets, are only exchanged when the sender and the final 
recipient are both online at the same time, then forward secrecy no 
problem.  Both parties set up a shared transient secret session key, 
as usual, which goes away when offline, reboot, or timeout.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitmessage

2013-02-16 Thread Adam Back

With no criticism of the idea and motivation: there are similarities with
having a reply-to of a newsgroup such as alt.anonymous.messages, which is
used as a more secure alternative to reply blocks.  To pick up those
messages anonymously you'd ideally need to be able to unobservably download
newsgroup articles (SSL access to a newsfeed, or pre-downloading a lot, or
i2p/Tor access to a newsfeed plus selective article download).

For bitmessage you probably need to use steganographic techniques,
otherwise some messages would be too large to have been created by some
public keys.  (As in PGP stealth [1], but updated for ECC, depending on the
parameter choices.)

IMO you might want to do something about forward secrecy (aka backward
security) and forward anonymity, or you arguably end up with the same issue
as reply blocks: a subpoena plus suspicion can force decryption.  (You
won't have to decrypt the reply block via repeated subpoenas down the
chain, but the participants are known or suspected - just coerce them to
decrypt!)  It is tricky to get forward secrecy for store-and-forward
messaging [2], but perhaps you could incorporate rekeying into your
protocol in some convenient way.

And maybe a way to steganographically tunnel connections to participate
(perhaps in passive mode only), for when mere observable participation in
an anonymous messaging protocol may become outlawed.


As the designer of the 2nd-gen pseudonymous mail system [3] at Zero
Knowledge Systems, I came to the conclusion that for a mail system with the
objective of pseudonymous privacy, on a practical basis forward secrecy is
more important than resistance to pervasive traffic collection and
analysis: for most users it's game over if they get targetted, and at least
in western countries the capabilities of the secret service organizations
are not applied to minor end-user disputes.

Meanwhile courts issue subpoenas all the time as a matter of course, and
law enforcement often doesn't even bother - just asking the ISP for data
without a court order works most of the time with most ISPs.


Ergo forward secrecy and forward anonymity are your friends.

(And briefly, the design of the freedom 2nd-gen mail system was to have
forward-secret connections to a pseudonymous POP box containing encrypted
mails.  The first-gen ZKS pseudonymous mail system was reply-block based.)

Adam

[1] http://cypherspace.org/adam/stealth/
[2] http://cypherspace.org/adam/nifs/
[3] http://cypherspace.org/adam/pubs/freedom2-mail.pdf

On Sat, Feb 16, 2013 at 01:49:18PM -0500, Jonathan Warren wrote:

  Hello everyone, I would like to introduce you to a communications
  protocol I have been working on called Bitmessage. I have also written
  an open source client released under the MIT/X11 license. It borrows
  ideas from Bitcoin and Hashcash and aims to form a secure and
  decentralized communications protocol which also doesn't rely on trust.
  Criticism of the X.509 certificate system is understandably common in
  this listserv (and also increasingly common in more public forums);
  Bitmessage instead uses Bitcoin-like addresses for authentication. It
  has a 'broadcast' and 'subscription' feature which other people have
  described as a decentralized Twitter and also aims to hide
  non-content data, like the sender and receiver of messages, from
  passive eavesdroppers like those running warrantless wiretapping
  programs. It may also be possible to be strong against active attackers
  although I'm not yet making that claim.


  A primary goal has been to make a clean and simple interface so that
  the key management, authentication, and encryption is simple even for
  people who do not understand public-key cryptography. I'm sure that
  there is quite a bit of demand for such a program and protocol although
  I am currently not actively promoting it because it has not been
  independently audited.


  I would be interested to hear your comments. The website
  https://bitmessage.org links to various resources like a short
  whitepaper describing how the protocol works and what its goals are (
  https://bitmessage.org/bitmessage.pdf ) and the source code on Github (
  https://github.com/Bitmessage/PyBitmessage ). The main source code file
  is bitmessagemain.py.


  Bitmessage is written in Python and uses an OpenSSL wrapper called
  pyelliptic (written by a different individual) to implement ECIES and
  ECDSA.

  Again I look forward to hearing comments; it is always easier to change
  or add to a protocol earlier than it is later.

  All the best,

  Jonathan Warren



___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] ZKPs and other stuff at Zero Knowledge Systems (Re: Zero knowledge as a term for end-to-end encryption)

2013-02-13 Thread Adam Back

I don't think it's too bad; it's fairly intuitive and has a related english
meaning also.  At Zero-Knowledge we had a precedent for the same use: we
used it as an intentional pun that we had zero knowledge about our
customers, and in one of the later versions we actually had a ZKP (to do
with payment privacy).  Well, they were a licensee for Brands credentials,
but shamefully the early versions were just relying on no logging in a
server to get privacy for the paid-up account status necessary to establish
a connection.

One of the fun aspects was totting up the number of people in the company
who actually understood the company name (at a mathematical / crypto level,
what a ZKP is) - must've been a dozen perhaps, out of a peak of 300
employees!

My contribution to their crypto was end-to-end forward anonymity, which
didn't get implemented in the freedom network before they closed it down to
focus on selling personal firewalls via ISPs (under their new brand
radialpoint.com).  But the e2e forward anonymity concept was implemented by
Zach Brown and Jerome Etienne in a ZKS skunkworks project that never got
deployed.  And after ZKS, Zach reimplemented something similar in the open
source project cebolla, which isn't actively developed at present, but the
code is here:

http://www.cypherspace.org/cebolla/

Now there are Tor and i2p, which are actively developed, and I presume at
this point they would both have forward anonymity.

Without e2e forward anonymity, any one of the default 3 hops in your
connection could record traffic passing through, and then subpoena the
other hops to identify the source and destination (and perhaps web logs at
the destination).

E2e forward anonymity is pretty simple: establish a forward-secret
connection between User and node A; call that tunnel 1.  Tunnel a
forward-secret connection establishment through tunnel 1 between User and
node B; call that tunnel 2.  Then tunnel a forward-secret establishment
through tunnel 2 between User and node C.  Node A is the entry node, node C
is the exit node.  QED.  It costs no more than the previous method, and
actually, as I remember, the establishment is faster and more reliable
also.
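The layering that results from the telescoping tunnels can be sketched like this (my toy sketch: a throwaway XOR keystream stands in for the per-hop forward-secret ciphers, and the keys are hardcoded rather than DH-negotiated through the previous tunnel as described above):

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy symmetric cipher: XOR with a sha256-derived keystream.
    Illustration only - NOT a real cipher."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

# per-hop keys; in the real scheme tunnel i+1's key is negotiated *through*
# tunnel i, so no single node ever links the user to the destination
k_a, k_b, k_c = b"tunnel1-key", b"tunnel2-key", b"tunnel3-key"

msg = b"hello exit node"
# user wraps the message in three layers, innermost for the exit node C
onion = keystream_xor(k_a, keystream_xor(k_b, keystream_xor(k_c, msg)))

# each hop peels exactly one layer; only node C sees the plaintext
at_b = keystream_xor(k_a, onion)
at_c = keystream_xor(k_b, at_b)
print(keystream_xor(k_c, at_c) == msg)   # True
```

Because each tunnel key is forward secret and discarded, recording the layered traffic at one hop is useless after the fact, even with subpoenas to the other hops.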

Adam

Not without some precedent, there was a company called Zero Knowledge 
Systems back in the early 2000s that tried to build what we now would 
see as a Skype or Tor competitor.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] openssl on git

2013-01-28 Thread Adam Back

You know, other source control systems, and presumably git also, have an
excludes list which can contain wildcards.  It comes prepopulated with e.g.
*.o, as you probably don't want to check those in.

I think you could classify this as a git bug (or more probably a mistake in
how github is using/configuring git) that it doesn't exclude checking in
.ssh and maybe some of the ssh-related file extensions.


I say this because it's not like ssh is some strange third-party app with
an unknown extension: git, cvs, svn etc all directly rely on ssh and have
various things about ssh baked into them.

(The user can always override or change this if he really wants to check in
.ssh - on a private, heavily guarded repo, or because he's using it for
test keys only, etc.)
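As a concrete illustration (my example of the standard git mechanism; the filename is just a common convention, wired up via `git config --global core.excludesFile ~/.gitignore_global`), a global excludes file covering the cases above might contain:

```
# ~/.gitignore_global
*.o
.ssh/
id_rsa
id_rsa*
known_hosts
```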


Adam

On Sun, Jan 27, 2013 at 09:36:44PM -0500, Eitan Adler wrote:

On 27 January 2013 21:34, Patrick Mylund Nielsen
cryptogra...@patrickmylund.com wrote:

I don't understand how you can accidentally check in ~/.ssh to your
repository, or at least not notice afterwards. Hopefully the OpenSSL authors
won't do that!


If you keep ~ in a git repo it is surprisingly easy ;)


--
Eitan Adler
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bonding or Insuring of CAs?

2013-01-25 Thread Adam Back

I had the impression this list, and its predecessor moderated (too heavily
IMO) by Perry, were primarily about applied crypto.  So you get to tolerate
a bit of applied crypto security stuff if you're interested in crypto
theory, and vice versa.  Seems healthy to me (it cross-informs both camps).

In terms of its practical significance, I would disagree that this could
conceivably be off-topic: the very underpinnings of all trust and security
of public key cryptography are seemingly coming unhinged, due largely to
gross trust abuses by a few CAs (and who knows that there aren't a few more
such explosive trust issues lurking in other CAs that they managed so far
to keep from view).

Anything practical or theoretical to fix that would in practice be rather
important for the security of any X.509-cert-using protocol.  I.e. BEAST
attacks and other SSL crypto attacks, while more technically interesting,
are by comparison of relative insignificance next to the way security
guarantees have in some cases been stripped by rogue CAs and some
governments.

I do agree that list proliferation is bad.  But at this stage don't we have
it all?  The noise ratio is very low by list standards.  (Those who were on
the unmoderated cypherpunks list in the old days know what I am talking
about in terms of signal-to-noise at the worst times.)

About bonding: that seems like potentially a good idea, in that, as we have
seen from finance, the pure profit motive can and does commonly lead to
reckless behaviour which can risk the continued existence of a company.
I'm not sure it was clear this would be a big additional deterrent beyond
what is already in place, though - presumably the threat of having to
declare bankruptcy to avoid liability is not something a CA wants to face.

Adam

On Fri, Jan 25, 2013 at 04:31:35PM -0800, Paul Hoffman wrote:

On Jan 25, 2013, at 4:11 PM, Natanael natanae...@gmail.com wrote:


If somebody wants there to be a pure cryptography mailing list and separate 
more generic one (like this one currently is), I think that person would have 
to try starting a more strict crypto mailing list, because I don't think most 
people here would want the rules here to get stricter or that they would want 
to switch to a different list that would be just like this one is now.


The off-list responses to my message would disagree with you.


We also don't want too many different lists.


Some of we do. My question was to tease this out a bit.

I'm happy to shut up about it if I'm in the minority, but the question that 
started this thread was a perfect example of something that is about security 
(actually, security operations), not cryptography, and yet gets brought up on 
this list more and more.

--Paul Hoffman
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] yet another certificate MITM attack

2013-01-11 Thread Adam Back

For HTTP there is a mechanism for cache security, as this is an issue that
does come up (you do not want to cache security information, or responses
with security information in them, eg cookies or information related to one
user, and then have the proxy cache accidentally send that to a different
user).  This has to work generically also because some proxies are
transparent.

Cache-Control: private

would instruct a cache not to cache a response because the response message
is intended for a single user and MUST NOT be cached by a shared cache (rfc
2616).

However I would take the fact that it is HTTPS to mean, as a blanket rule,
that the proxy MUST NOT attempt to MITM the connection, as the encryption is
designed to be end-to-end secure.  The browser on the phone must also have
been tampered with, to incorrectly display a security indicator which is
false (the traffic is not end-to-end secure).  This is similarly bad to the
old WAP gap and there is no excuse.  Shame on Nokia.

Adam

On Fri, Jan 11, 2013 at 10:04:42AM -0500, Jeffrey Walton wrote:

On Thu, Jan 10, 2013 at 7:47 PM, Peter Gutmann
pgut...@cs.auckland.ac.nz wrote:

Jon Callas j...@callas.org writes:


Others have said pretty much the same in this thread; this isn't an MITM
attack, it's a proxy browsing service.


Exactly.  Cellular providers have been doing this for ages, it's hardly news.

(Well, OK, given how surprised people seem to be, perhaps it should be news in
order to make it more widely known :-).

It's not so much surprise as it is frustration (for me).

My secure coding guides include something similar to:
 * Do not send sensitive information, such as usernames
   and passwords, through query parameters (GET)
 * Use HTTPS, send using POST

How do web applications pin their certificates when the language
(HTML) and the platform (Browser) do not offer the functionality?

How do the proxies determine which HTTPS traffic is benign, public
information vs sensitive, private information?

How do carriers know when its OK to log benign, public information
vs sensitive, private information?

How do carriers differentiate the benign, public information data
from the sensitive, private information before selling it to firms
like GIGYA?

How do we teach developers to differentiate between the good
men-in-the-middle vs the bad man-in-the-middle?

Until we can clearly answer those questions, I will call the pot and
kettle black. Interception is interception, and it's Man-in-the-Middle.
Period.

From my [uneducated] data security point of view, it is best to stop
the practices. HTTPS is the cue to stop the standard operating
procedures on consumer information because the information is (or
could be) sensitive. All I care about is the user and the data (as a
person who endures life after a data breach).

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] fragilities of CTR vs CBC (Re: Tigerspike claims world first with Karacell for mobile security)

2012-12-27 Thread Adam Back

I think you could say CTR mode is fragile against counter reuse, exposing
plaintext-pair XORs, but CBC is also somewhat fragile against IV reuse,
forming an ECB code book across the set of same-IV messages.

CBC itself has other issues, eg with non-repeating (but non-random) IVs:
for example using the sector number as IV in a file system, I have seen
that introduce collisions in a few % of first ciphertext blocks (per
sector), where in practice, using real OS/app disk data, the IV cancels
with the plaintext.  ie IV1 xor P1 == IV2 xor P2 (and consequently C1 ==
C2, as C1 = E(IV1 xor P1)), which tells you the plaintext difference given
the IVs are known.  ie structured IVs cancel with structured plaintext.
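A toy demonstration of the cancellation described above. This is not a real cipher: E() is a deterministic hash-based stand-in for AES under a fixed key (only determinism matters here, since the leak comes from C1 = E(IV xor P1)), and the 16-byte blocks, sector-number IVs, and sample plaintexts are all illustrative assumptions:

```python
# Sketch: a structured (sector-number) CBC IV cancelling against
# structured plaintext, leaking the plaintext difference via equal
# first ciphertext blocks.  E() is a toy stand-in for a block cipher.
import hashlib

BLOCK = 16

def E(block: bytes) -> bytes:
    # toy deterministic "block cipher" (NOT invertible, NOT secure)
    return hashlib.sha256(b"fixed-key" + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def sector_iv(sector: int) -> bytes:
    return sector.to_bytes(BLOCK, "little")   # structured, predictable IV

# structured plaintexts whose difference matches the IV difference
p1 = (5).to_bytes(BLOCK, "little")
p2 = (7).to_bytes(BLOCK, "little")
iv1, iv2 = sector_iv(7), sector_iv(5)

# IV1 xor P1 == IV2 xor P2  =>  C1 == C2, revealing P1 xor P2
c1 = E(xor(iv1, p1))
c2 = E(xor(iv2, p2))
print(c1 == c2)                       # equal first ciphertext blocks
print(xor(p1, p2) == xor(iv1, iv2))   # the leaked plaintext difference
```

An observer who sees C1 == C2 and knows the (public) IVs learns P1 xor P2 without breaking the cipher at all.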

Adam

On Thu, Dec 27, 2012 at 06:35:27PM +, Ben Laurie wrote:

On Thu, Dec 27, 2012 at 9:18 AM, Russell Leidich pke...@gmail.com wrote:

there are plenty of Googleable papers showing the Counter Mode is weak
relative to (conventional) cipher-block-chaining (CBC) AES.


Really? For example?

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ElGamal Encryption and Signature: Key Generation Requirements?

2012-12-19 Thread Adam Back

Yep, I see your tack from the start of the thread was to understand why
validation was failing.  And validation is good; many protocols can be
broken unintentionally or plausibly-deniably sabotaged due to a coding
error on one side of the protocol only.  I think it would be easily
possible to encode the p_i values in the public key format and so run
primality tests on them.  (Format changes... yuck.)  Also, as the p_i are
really small, perhaps you could factor n = p_1 * ... * p_k moderately
quickly.  It's been quite a while since factoring a 384 bit key was a big
deal.


But I imagine most commonly these ElGamal keys are 2048 bits or more, so
that is probably heading towards computational infeasibility, and the only
real chance is if people would communicate the p_i values (or the seed for
re-generating them, perhaps).

I presume likely key sizes will be (from DSA) (|p|,|q|) = (1024,160),
(2048,224), (2048,256), (3072,256), and factoring a 512-bit number is
still painful.

Adam

On Tue, Dec 18, 2012 at 08:50:11PM -0500, Jeffrey Walton wrote:

Thanks Adam,


With Lim-Lee, as you maybe saw in the paper, you just generate a few extra
small p_i values, n of them where only k are needed, then try combinations
C(n,k) until you find one for which p = 2q*p_1*...*p_k + 1 is prime.  As
the p_i are small they are fast and cheap to generate.

So, that's the good case. How about the bad case(s)? What happens
when the bad guy generates or influences the supposed Lim-Lee prime?
He/she will only be caught if someone performs the unique
factorization (versus the previous primality test).

I think the paper's conclusions says it all (Section 5): ...
therefore, our attack can be easily prevented if the relevant protocol
variables are properly checked. It appears many folks did not
validate parameters - presumably due to efficiency/speed/laziness -
and now everyone is penalized (even the folks who were validating
parameters).

Something easy has been made hard. Folks who specialize in these types
of analysis are probably pissing their pants because they are laughing
so hard.

Jeff

On Tue, Dec 18, 2012 at 8:29 PM, Adam Back a...@cypherspace.org wrote:

Well one reason people like Lim-Lee primes is it's much faster to generate
them.  That is because prime density is lower for strong primes at the
sizes of p and q for p=2q+1, and you need to screen both p and q for
primality.

With Lim-Lee, as you maybe saw in the paper, you just generate a few extra
small p_i values, n of them where only k are needed, then try combinations
C(n,k) until you find one for which p = 2q*p_1*...*p_k + 1 is prime.  As
the p_i are small they are fast and cheap to generate.

Adam


On Tue, Dec 18, 2012 at 08:16:01PM -0500, Jeffrey Walton wrote:


So, I've got to read through most of Section 4.

I'm not sure what to think of the shortcut of p = 2 q p_1 p_2 p_3 ... p_n.

With p = 2q + 1, we could verify the [other party's] parameters
and stop processing. I believe the same is true for p = 2 p_1 q + 1
(which is basically p = q r + 1), but I could be wrong.

With p = 2 q p_1 p_2 p_3 ... p_n, we don't have a witness to the
fitness of the keys generated by GnuPG. So we can't easily decide to
stop processing. Maybe I'm being too harsh and I should do the unique
factorization. But in that case, wouldn't it be easier to use p = 2q + 1
since I am validating parameters?

Finally, an open question for me (which seems to be the motivation for
the change): how many folks are using, for example, ElGamal shared
decryption and ElGamal shared verification? Was the loss of
independent verification a good tradeoff *if* the feature is almost
never used?

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ElGamal Encryption and Signature: Key Generation Requirements?

2012-12-18 Thread Adam Back

The reference to Lim-Lee is in section 4 of this paper on discrete log
attacks (and how to generate primes immune to them):

http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.44.5296

They recommend that the p_i values are bigger than q.  Ie in a 1024 bit p,
160 bit q, all of the p_i values making up n should be > 160 bits, where
p = 2qn+1 and n = p_1 * ... * p_k; in this case you need
(1024-160)/k > 160, so k = 5 and |p_i| ~= 172.


For sub-group based crypto systems q is distinct from, and not, a p_i,
because the crypto system uses the subgroup of order q (eg DSA etc), and
there q has to be of a specific size relating to a hash output size for
security reasons: q > 2^out, where out is the size of the hash output in
bits.

Crypto++ is expecting a strong prime where p=2q+1, with p and q prime.  btw
for some attacks it is also necessary for q' = (q-1)/2 to be prime.

Adam

On Tue, Dec 18, 2012 at 01:15:05AM +0100, Adam Back wrote:

Those are Lim-Lee primes, where p=2n+1 with n a B-smooth composite (meaning
n = p0*p1*...*pk, where each p_i is of size < B bits).

http://www.gnupg.org/documentation/manuals/gcrypt/Prime_002dNumber_002dGenerator-Subsystem-Architecture.html

So if Crypto++ is testing whether the q from p=2q+1 is prime, it's right --
it's not!  But it's not broken so long as B is large enough.  If B is too
small it's very broken.

Adam

On Mon, Dec 17, 2012 at 06:43:15PM -0500, Jeffrey Walton wrote:

Hi All,

This has been bugging me for some time

When Crypto++ and GnuPG interop using ElGamal, Crypto++ often throws a
bad element exception when validating the GnuPG keys. It appears GnuPG
does not choose a q such that q - 1 is prime (in the general form of p
= qr + 1). That causes a failure in Crypto++'s Jacobi test.

I could not find a paper stating q - 1 non-prime was OK (on Google and
Google Scholar). I would think that q - 1 prime would be a
requirement, since some algorithms run in time proportional to q - 1
(for example, Pollard's Rho).

What are the key generation requirements for ElGamal Encryption and
Signature schemes?

Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ElGamal Encryption and Signature: Key Generation Requirements?

2012-12-18 Thread Adam Back

Well one reason people like Lim-Lee primes is it's much faster to generate
them.  That is because prime density is lower for strong primes at the
sizes of p and q for p=2q+1, and you need to screen both p and q for
primality.

With Lim-Lee, as you maybe saw in the paper, you just generate a few extra
small p_i values, n of them where only k are needed, then try combinations
C(n,k) until you find one for which p = 2q*p_1*...*p_k + 1 is prime.  As
the p_i are small they are fast and cheap to generate.
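The procedure above can be sketched in code. This is a toy, assuming illustrative sizes (32-bit q, 16-bit p_i, k = 3) that are far too small for real use, with a standard Miller-Rabin test standing in for a production-quality primality check:

```python
# Toy sketch of Lim-Lee prime generation: pre-generate a pool of n small
# primes p_i, then try k-element combinations until p = 2*q*p_1*...*p_k + 1
# is prime.  Sizes here are illustrative only.
import random
from itertools import combinations

def is_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        c = random.getrandbits(bits) | (1 << (bits - 1)) | 1
        if is_prime(c):
            return c

def lim_lee_prime(q, n_small=12, k=3, small_bits=16):
    """Try combinations C(n_small, k) until p = 2*q*prod(p_i) + 1 is prime."""
    while True:
        pool = [random_prime(small_bits) for _ in range(n_small)]
        for combo in combinations(pool, k):
            p = 2 * q
            for pi in combo:
                p *= pi
            p += 1
            if is_prime(p):
                return p, combo
        # no luck with this pool: regenerate the small primes and retry

q = random_prime(32)
p, factors = lim_lee_prime(q)
print(p.bit_length(), factors)
```

Since the p_i are cheap to generate and only the final combined candidate needs the expensive primality test, this avoids the low prime density problem that makes p=2q+1 strong primes slow to find.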

Adam

On Tue, Dec 18, 2012 at 08:16:01PM -0500, Jeffrey Walton wrote:

So, I've got to read through most of Section 4.

I'm not sure what to think of the shortcut of p = 2 q p_1 p_2 p_3 ... p_n.

With p = 2q + 1, we could verify the [other party's] parameters
and stop processing. I believe the same is true for p = 2 p_1 q + 1
(which is basically p = q r + 1), but I could be wrong.

With p = 2 q p_1 p_2 p_3 ... p_n, we don't have a witness to the
fitness of the keys generated by GnuPG. So we can't easily decide to
stop processing. Maybe I'm being too harsh and I should do the unique
factorization. But in that case, wouldn't it be easier to use p = 2q + 1
since I am validating parameters?

Finally, an open question for me (which seems to be the motivation for
the change): how many folks are using, for example, ElGamal shared
decryption and ElGamal shared verification? Was the loss of
independent verification a good tradeoff *if* the feature is almost
never used?

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] ElGamal Encryption and Signature: Key Generation Requirements?

2012-12-17 Thread Adam Back

Those are Lim-Lee primes, where p=2n+1 with n a B-smooth composite (meaning
n = p0*p1*...*pk, where each p_i is of size < B bits).

http://www.gnupg.org/documentation/manuals/gcrypt/Prime_002dNumber_002dGenerator-Subsystem-Architecture.html

So if Crypto++ is testing whether the q from p=2q+1 is prime, it's right --
it's not!  But it's not broken so long as B is large enough.  If B is too
small it's very broken.

Adam

On Mon, Dec 17, 2012 at 06:43:15PM -0500, Jeffrey Walton wrote:

Hi All,

This has been bugging me for some time

When Crypto++ and GnuPG interop using ElGamal, Crypto++ often throws a
bad element exception when validating the GnuPG keys. It appears GnuPG
does not choose a q such that q - 1 is prime (in the general form of p
= qr + 1). That causes a failure in Crypto++'s Jacobi test.

I could not find a paper stating q - 1 non-prime was OK (on Google and
Google Scholar). I would think that q - 1 prime would be a
requirement, since some algorithms run in time proportional to q - 1
(for example, Pollard's Rho).

What are the key generation requirements for ElGamal Encryption and
Signature schemes?

Jeff
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] current limits of proving MITM (Re: Gmail and SSL)

2012-12-16 Thread Adam Back

(note the tidy email editing, Ben, and other blind top posters to massive
email threads :)

See inline.

On Sun, Dec 16, 2012 at 10:52:37AM +0300, ianG wrote:

[...] we want to prove that a certificate found in an MITM was in the chain
or not.

But (4) we already have that, in a non-cryptographic way.  If we find 
a certificate that is apparently signed by say VeriSign root and was 
found in an MITM, we can simply publish it with the facts.  Verisign 
are then encouraged to disclose (a) it was ours, (b) it wasn't ours, 
or (c) ummm...


Verisign can't claim it wasn't theirs, because it will be signed by one of
their roots or a sub-CA thereof.  Now one source of suspicion may be if
there are multiple non-revoked, non-expired server certificates for the
same domain (with different private keys).  However, there are people who
think multiple certs are a good idea for redundant servers and
SSL-terminating equipment - to not have the same private key in multiple
devices (eg so they can revoke and replace the cert on one
suspected-compromised server without replacing them all).  Seems kind of
dubious to me, but there you go.

The same server domain with multiple certs (with different private keys),
one or more signed by a different CA - even more suspicious.  But many
people are certificate tarts - they will buy the cert on the day from the
day's cheapest cert provider, so again inconclusive.

If the server owner claims that cert was not issued to them, that is
conclusive, but it becomes a who-do-you-trust question.  Was the server
owner technically confused (he forgot: the cert issue failed and he did it
again and forgot about that)?  The details are done by admins; are they
going to keep a signed, audited, backed-up log book of such transient
failures and try-agains during cert issuing?  Nope.  Or is the CA
compromised, had a rogue RA or rogue admin, and wants to avoid the
embarrassment?  Or the CA issued a duplicate cert on request for law
enforcement and doesn't want to admit that.

Don't forget that many government, law enforcement, spy organizations,
major defense contractors, and outsourced quasi-governmental spy
organizations already own and operate CAs that are in browser databases, in
many countries including many western countries.

Basically no one will talk, or you can't tell who is lying.  Quite likely
neither the CA nor the domain owner will talk, in standard corporate PR
cover-up mode.

But if you could prove one of those directions, then you'd be getting
somewhere.  Like: the domain has to prove, in a publicly auditable way that
the CA can't disavow, ownership of the private key of each cert issued.
Then the domain owner can say that cert isn't mine.  And browsers can check
that like they check CRLs now.

Well and thats where these draft protocols come in.

Adam
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Why using asymmetric crypto like symmetric crypto isn't secure

2012-11-11 Thread Adam Back

(I copied Hans-Joachim Knobloch onto the thread)

Wiener is talking about small secret exponents (small d); no one does that.
They choose a smallish prime e with low hamming weight (for
encryption/signature-verification efficiency), like 65537 (10001h), and get
a random d, which will almost by definition have |d| > |n|/4 (the limit of
Wiener's algorithm).


(To get both |d| < |n|/4 (susceptible to Wiener's attack) and small |e|
isn't possible, because you need ed = 1 mod LCM(p-1,q-1), which implies
ed = 1 + k LCM(p-1,q-1).  |LCM(p-1,q-1)| ~= |n|, so to get |d| < |n|/4 you
minimally need |e| > 3|n|/4, ie for a 1024 bit modulus, |e| >= 768 bits to
get |d| <= 256 bits.)

The only people who would ever have ended up vulnerable to Wiener's attack
are those intentionally and aggressively trying to reduce server workload
by aiming for an artificially small |d|.

The Knobloch eprint paper says that you can't naively keep the public
modulus secret for small e if the attacker has a few known
plaintext/ciphertext pairs, because c = m^e mod n implies m^e - c = k.n,
and k.n can be factorized to find k (p and q will be harder to find, so
you'll find k first, if k is not also coincidentally itself large and
prime, or a composite of primes too large to factor; if it doesn't
factorize quickly, use another known plaintext/ciphertext pair).
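The recovery can be sketched directly: rather than factoring one k.n, taking a gcd across several pairs cancels most of the cofactor, and trial division strips what's left. Toy-sized primes here are illustrative assumptions; real moduli just make the bigints larger:

```python
# Sketch: recovering a "secret" RSA modulus from known
# plaintext/ciphertext pairs with small known e, using m^e - c = k*n.
import random
from math import gcd

p, q, e = 1000003, 1000033, 3   # the "secret" modulus n = p*q (toy sizes)
n = p * q

# attacker's known plaintext/ciphertext pairs
ms = [random.randrange(2, n) for _ in range(3)]
cs = [pow(m, e, n) for m in ms]

g = 0
for m, c in zip(ms, cs):
    g = gcd(g, m**e - c)        # gcd of several k_i*n values -> small*n

for f in range(2, 10000):       # strip small leftover cofactors;
    while g % f == 0:           # n's own factors are ~10^6 so survive
        g //= f

print(g == n)   # modulus recovered despite never being published
```

The point being that keeping n secret adds essentially nothing once an attacker can see, or guess, a few plaintexts.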

Note they are only saying fixed or small e because their approach requires
knowing or guessing e in order to compute m^e (if e is small you can try
all possible e).


I wouldn't think that's the end of it either - more things are clearly
leaking.  eg even with a large, unknown e s.t. |e| ~= |n|, if you had known
plaintext/ciphertext pairs with a multiplicative relationship, like c1 =
m1^e mod n, c2 = m2^e mod n and c3 = m3^e mod n where m3 = m1*m2, then
c1*c2 - c3 = k.n and we're back to the find-small-factors-to-find-k trick.

Adam

On Sat, Nov 10, 2012 at 10:52:29AM +0800, Sandy Harris wrote:

On Sat, Nov 3, 2012 at 5:29 PM, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:


  [...] We show that if the RSA cryptosystem is used in such a symmetric
  application, it is possible to determine the public RSA modulus if the
  public exponent is known and short, such as 3 or F4=65537, and two or more
  plaintext/ciphertext (or, if RSA is used for signing, signed
  value/signature) pairs are known.


Is this a different attack from Weiner's Cryptanalysis of Short RSA
Secret Exponents?
madchat.awired.net/crypto/codebreakers/ShortSecretExponents.pdf

I thought it had been known for at least a decade that small exponents were
a bad idea, because of the Weiner paper.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Questions about crypto in Oracle TDE

2012-11-08 Thread Adam Back

I'd guess they mean the salt is prepended to the plaintext, and then
presumably the salt + plaintext is encrypted with AES in CBC mode with a
zero IV.  That would be approximately equivalent to encrypting with a
random IV (presuming the salt, IV and cipher blocks are all the same size)
because

CBC-Enc( iv = Enc( salt ), plain ) == CBC-Enc( iv = 0, salt || plain )

PGP when using IDEA (PGP uses CFB mode for IDEA) does something similar:
there is a zero IV and a randomish first plaintext block.
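The CBC identity above can be checked mechanically: encrypting (salt || plain) under a zero IV yields E(salt) followed by exactly the ciphertext of plain under IV = E(salt). E() here is a toy deterministic stand-in for a block cipher under a fixed key (an assumption for the demo; invertibility isn't needed, since the identity depends only on CBC's chaining structure):

```python
# Sketch: CBC-Enc(iv=0, salt||plain) == E(salt) || CBC-Enc(iv=E(salt), plain)
import hashlib

BLOCK = 16

def E(block: bytes) -> bytes:
    # toy "block cipher": deterministic, fixed width, NOT invertible/secure
    return hashlib.sha256(b"fixed-key" + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(iv: bytes, data: bytes) -> bytes:
    assert len(data) % BLOCK == 0
    out, prev = [], iv
    for i in range(0, len(data), BLOCK):
        prev = E(xor(prev, data[i:i + BLOCK]))  # C_i = E(C_{i-1} xor P_i)
        out.append(prev)
    return b"".join(out)

salt = b"0123456789abcdef"      # one random block per row in practice
plain = b"A" * (2 * BLOCK)      # two blocks of plaintext

ct_zero_iv = cbc_encrypt(bytes(BLOCK), salt + plain)
ct_rand_iv = cbc_encrypt(E(salt), plain)

print(ct_zero_iv[:BLOCK] == E(salt))      # first block is just E(salt)
print(ct_zero_iv[BLOCK:] == ct_rand_iv)   # rest matches IV = E(salt)
```

So the prepended salt buys the "random IV" effect at the cost of one extra ciphertext block.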



Now for indexing, to set iv = 0 and use no salt might not be ideal, as it's
basically ECB for the first plaintext block.  Particularly not good if the
plaintext is larger than the cipher block size and there are repeats in the
first block's worth of the plaintext.


IMO this fixed-IV-for-searchability/indexability crypto pattern is a common
mistake.  I prefer, and consider it more secure, to use a (keyed) MAC of
the plaintext as the index, and then encrypt the plaintexts conventionally
with a random IV.
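A minimal sketch of that keyed-MAC index pattern: the indexed column stores HMAC(index_key, ssn), while the SSN itself is encrypted conventionally with a random IV elsewhere. Equal plaintexts give equal (searchable) tokens, but without the key an attacker cannot brute-force the small SSN space from the tokens. The key handling and sample values here are illustrative assumptions, not Oracle's mechanism:

```python
# Sketch: keyed-MAC index tokens for exact-match search over encrypted data.
import hashlib
import hmac
import os

INDEX_KEY = os.urandom(32)   # distinct from the encryption key

def index_token(ssn: str) -> bytes:
    return hmac.new(INDEX_KEY, ssn.encode(), hashlib.sha256).digest()

# exact-match lookup: recompute the token and search the index on it
t1 = index_token("078-05-1120")
t2 = index_token("078-05-1120")
t3 = index_token("219-09-9999")
print(t1 == t2, t1 == t3)
```

The token is deterministic (so the database can index it), yet useless to anyone without INDEX_KEY, which is exactly the property the fixed-IV pattern fails to deliver.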

Adam

On Thu, Nov 08, 2012 at 11:49:42AM -0500, Kevin W. Wall wrote:

All,

I'm hoping someone on this list can either provide details on how
Oracle's Transparent Data Encryption (TDE) works in their Oracle Database,
especially with Oracle 10g.

We have an application that is storing SSNs as cleartext which they are
finally getting read to store in an encrypted format using 128-bit AES. (I am
not sure if the SSNs are presently stored as NUMBER or VARCHAR2 though.)
The application also will *still* have a legitimate business need of doing
indexed searches via the *full* SSN.

Oracle TDE is being looked at as one option because it is thought to be
more or less transparent to application itself and its JDBC code. Also,
presumably it would simplify key change operations as well since the
development team wouldn't have to code for that.

The Oracle TDE documentation (here for 10g:
http://docs.oracle.com/cd/B19306_01/network.102/b14268/asotrans.htm)
discusses the use of salt in section 3.2.4. Specifically, this documentation
states:

   Salt is a way to strengthen the security of encrypted data.
   It is a random string added to the data before it is encrypted,
   causing repetition of text in the clear to appear different when
    encrypted. Salt thus removes one method attackers use to steal data,
   namely, matching patterns of encrypted text.

Salting is the TDE default for encrypted columns (at least in 10g).  However,
this documentation goes on to say:

   However, if you plan to index the encrypted column, you must
   use NO SALT.

Doing searches by full SSN over close to a 1M records is obviously going
to need indexing, so that implies that salting cannot be used for SSNs
(at least not w/out application changes, to say, search for a MAC of the
SSN instead of the SSN itself, or some other similar mechanism).

My confusion comes from trying to understand exactly what Oracle means
when they refer to salt. Are they really discussing the use of
a random IV vs. a fixed IV? Or are the XOR'ing some random salt with
the encryption key in some cases and not in others or what?

For that matter, does anyone even know what cipher modes or padding
schemes they use with Oracle TDE in Oracle 10g? For all I know they
may be doing something like using ECB mode.

It's hard to ascertain the downside of using Oracle TDE if I don't know
any of these details so I'm hoping that someone on this list can
comment on it.

Thanks,
-kevin
--
Blog: http://off-the-wall-security.blogspot.com/
The most likely way for the world to be destroyed, most experts agree,
is by accident. That's where we come in; we're computer professionals.
We *cause* accidents.-- Nathaniel Borenstein
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] ZFS dedup? hashes (Re: [zfs] SHA-3 winner announced)

2012-10-03 Thread Dr Adam Back

I infer from your comments that you are focusing on the ZFS use of a hash
for dedup?  (The forward did not include the full context).  A forged
collision for dedup can translate into a DoS (deletion), so 2nd pre-image
resistance would still be important.

However, 2nd pre-image resistance is typically offered at higher assurance
than resistance to chosen collision pairs (because you can use the birthday
effect to roughly square-root the search space with pairs).  So to that
extent I agree your security reliance on hash properties is weaker than for
integrity protection.  And SHA1 is still secure against 2nd pre-images,
whereas its collision resistance has been demonstrated to be below design
strength.

Incidentally a somewhat related problem with dedup (probably more in cloud
storage than local dedup of storage) is that the dedup function itself can
lead to the confirmation or even decryption of documents with
sufficiently low entropy as the attacker can induce you to store or
directly query the dedup service looking for all possible documents.  eg say
a form letter where the only blanks to fill in are the name (known
suspected) and a figure (1,000,000 possible values).

Also if there is encryption there are privacy and security leaks arising
from doing dedup based on plaintext.

And if you are doing dedup on ciphertext (or the data is not encrypted), you
could follow David's suggestion of HMAC-SHA1 or the various AES-MACs.  In
fact I would suggest that, for encrypted data, you really NEED to base
dedup on MACs and NOT hashes, or you leak plaintext, and risk its
brute-force decryption, via hash brute-forcing of the non-encrypted dedup
tokens.
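The low-entropy confirmation attack described above can be sketched concretely. The form letter, figure, and token scheme below are illustrative assumptions: with an unkeyed hash as the dedup token, anyone who sees a token can enumerate the small candidate space and recover the "hidden" figure; a keyed MAC token defeats this, since the attacker cannot compute candidate tokens:

```python
# Sketch: brute-force confirmation of a low-entropy document from its
# unkeyed dedup token (hash of the plaintext).
import hashlib

def unkeyed_token(doc: bytes) -> bytes:
    return hashlib.sha256(doc).digest()

def letter(amount: int) -> bytes:
    return b"Dear Mr. Smith, your severance is $%d." % amount

observed = unkeyed_token(letter(731400))   # token seen by the attacker

recovered = None
for amount in range(1_000_000):            # enumerate the only blank
    if unkeyed_token(letter(amount)) == observed:
        recovered = amount
        break
print(recovered)
```

Swapping `unkeyed_token` for `hmac.new(key, doc, hashlib.sha256).digest()` kills the attack outright: without the key, the attacker can no longer generate candidate tokens to compare against.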

Adam

On Wed, Oct 03, 2012 at 03:41:27PM +0200, Eugen Leitl wrote:

- Forwarded message from Sašo Kiselkov skiselkov...@gmail.com -

From: Sašo Kiselkov skiselkov...@gmail.com
Date: Wed, 03 Oct 2012 15:39:39 +0200
To: z...@lists.illumos.org
CC: Eugen Leitl eu...@leitl.org
Subject: Re: [cryptography] [zfs] SHA-3 winner announced
User-Agent: Mozilla/5.0 (X11; Linux i686; rv:7.0.1) Gecko/20110929 
Thunderbird/7.0.1

Well, it's somewhat difficult to respond to cross-posted e-mails, but
here goes:

On 10/03/2012 03:15 PM, Eugen Leitl wrote:

- Forwarded message from Adam Back a...@cypherspace.org -

From: Adam Back a...@cypherspace.org
Date: Wed, 3 Oct 2012 13:25:06 +0100
To: Eugen Leitl eu...@leitl.org
Cc: cryptography@randombit.net, Adam Back a...@cypherspace.org
Subject: Re: [cryptography] [zfs] SHA-3 winner announced
User-Agent: Mutt/1.5.21 (2010-09-15)

(comment to Saso's email forwarded by Eugen):

Well I think it would be fairer to say SHA-3 was initiated more in the
direction of improving on the state of the art in security of hash
algorithms
[snip]
In that you see the selection of Keccak, focusing more on its high security
margin, and new defenses against existing known types of attacks.


At no point did I claim that the NIST people chose badly. I always said
that NIST's requirements need not align perfectly with ZFS' requirements.


If the price of that is being slower, so be it - while fast primitives are
very useful, having things like the MD5 full break and SHA-1's significant
weakening take the security protocols industry by surprise is also highly
undesirable and expensive to fix - to some extent, for the short/mid term,
almost unfixable given the state of software update and firmware update
realities.


Except in ZFS, where it's a simple zfs set command. Remember, Illumos'
ZFS doesn't use the hash as a security feature at all - that property is
not the prime focus.


So while I am someone who pays attention to protocol, algorithm and
implementation efficiency, I am happy with Keccak.


ZFS is not a security protocol, therefore the security margin of the
hash is next to irrelevant. Now that is not to say that it's entirely
pointless - it's good to have some security there, just for the added
peace of mind, but it's crazy to focus on it as primary concern.


And CPUs are getting
faster all the time; the Q3 2013 Ivy Bridge (22nm) Intel i7 next year is
going to be available in 12-core (24 hyperthreads) with 30MB cache.  Just
chuck another core at it if you have problems.  ARMs are also coming out
with more cores.


Aaah, the good old but CPUs are getting faster every day! argument. So
should people hold off for a few years before purchasing new equipment
for problems they have now? And if these new super-duper CPUs are so
much higher performing, why not use a more efficient algo and push even
higher numbers with them? If I could halve my costs by simply switching
to a faster algorithm, I'd do it in a heartbeat!


And AMD 7970 GPU has 2048 cores.


Are you suggesting we run ZFS kernel code on GPUs? How about driver
issues? Or simultaneous use by graphical apps/games? Who's going to
implement and maintain this? It's easy to propose theoretical models,
but unless you plan to invest the energy in this, it'll most likely
remain purely theoretical.


For embedded and portable
use

Re: [cryptography] Can there be a cryptographic dead man switch?

2012-09-06 Thread Adam Back

And make sure there are multiple internet connections to the hidden servers.

Adam

On Thu, Sep 06, 2012 at 03:40:23AM +0100, StealthMonger wrote:


Good argument.  Thanks.  It makes Natanael's solution, or some variant
of it, all the more appealing.  Keep Natanael's servers secret, such
as on scattered Virtual Private Servers.  They read the Grantor's
signed messages from a message pool such as alt.anonymous.messages and
use that channel also to communicate among themselves, outputting via
anonymizing remailers.  The adversary wouldn't know which of the
world's internet connections to pull.  When the servers agree that the
Grantor is dead, they release the secret, encrypted all the while with
the Trustee's key.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Master Password

2012-05-31 Thread Adam Back

Reminds me of Feb 2003 - Moderately Hard, Memory-bound Functions NDSS 03,
Martin Abadi, Mike Burrows, Mark Manasse, and Ted Wobber. 
(cached at) http://hashcash.org/papers/memory-bound-ndss.pdf


By Microsoft Research, but then when Exchange and Outlook added a
computational cost function, for hashcash-style anti-spam postage-stamp
purposes, they actually used hashcash and not Microsoft Research's
memory-bound puzzle.  In fact they cooked up their own format, and made a
slight tweak to SHA1 just to make existing SHA1 hardware inapplicable.
(As part of their Coordinated Spam Reduction Initiative
http://www.microsoft.com/en-us/download/details.aspx?id=17854
there is an open spec.)

Actually I worked for Microsoft for a year or so around that time, and had
talked with the anti-spam team to give them a brain dump on anti-spam and
hashcash, but I don't know why they rejected memory-bound puzzles.  That
particular one has the disadvantage that it requires a few megs of
pre-baked numbers to be shipped with the library.  Seems like Percival's
one does not.

My guess was there's just something comparatively elegant and simple about
hashcash.  (If I recall they also used 8 sub-puzzles to reduce the variance
of the compute cost).
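For reference, the hashcash cost function itself is just a SHA1 partial-preimage search (a minimal sketch; the challenge string and 16-bit difficulty are illustrative, not the real stamp format):

```python
import hashlib
from itertools import count

def mint(challenge: bytes, bits: int) -> int:
    # search for a counter whose SHA1(challenge || counter)
    # has `bits` leading zero bits: ~2^bits work to mint
    for ctr in count():
        h = hashlib.sha1(challenge + str(ctr).encode()).digest()
        if int.from_bytes(h, "big") >> (160 - bits) == 0:
            return ctr

def verify(challenge: bytes, ctr: int, bits: int) -> bool:
    # one hash to verify: the asymmetry that makes it a postage stamp
    h = hashlib.sha1(challenge + str(ctr).encode()).digest()
    return int.from_bytes(h, "big") >> (160 - bits) == 0

stamp = mint(b"adam@cypherspace.org", 16)   # ~65,000 hashes to mint
assert verify(b"adam@cypherspace.org", stamp, 16)
```

The elegance Adam mentions is visible here: minting and verification share one primitive, and difficulty is a single integer.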


One quite generic argument I could suggest for being wary of scrypt would be:
if someone said hey, here's my new hash function, use it instead of SHA1,
it's better, you would and should be very wary.  A lot of public review goes
into finding a good hash algorithm.  (Yeah, I know SHA1 has a chink in its
armor now, but you get the point).

It is entirely possible to have flaws crop up in the non-parallelizable
aspect of the design, or non-reuse of RAM across parallel invocations.

I had a go at making non-parallelizable variants of hashcash also, and it's
not so hard to do, but for most applications it doesn't really help
practically, because the attacker gets lots of parallelism from trying
different passwords or different users or what have you in parallel.

The main point of scrypt is to increase the hardware cost by needing lots of
ram to evaluate the function which seems like a reasonable objective.

Abadi et al's mbound puzzle objective was to make the bottleneck memory
latency (plus a minimum required amount of RAM) instead of CPU.  They made
the argument that their mbound would be fairer because there is smaller
variance in RAM latency between e.g. smartphone RAM and desktop/server RAM,
compared to the imbalance between desktop/server CPUs (Core i7 3930K vs
ARM9).  The stats seem to add up.  Though that was a few years ago; these
days some Xeons and i7s have a pretty impressive on-die L3 cache!

Adam

On Thu, May 31, 2012 at 10:02:17AM -0500, Nico Williams wrote:

On Thu, May 31, 2012 at 2:03 AM, Jon Callas j...@callas.org wrote:

On May 30, 2012, at 4:28 AM, Maarten Billemont wrote:


If I understand your point correctly, you're telling me that while scrypt might 
delay brute-force attacks on a user's master password, it's not terribly useful 
a defense against someone building a rainbow table.  Furthermore, you're of the 
opinion that the delay that scrypt introduces isn't very valuable and I should 
just simplify the solution with a hash function that's better trusted and more 
reliable.

Tests on my local machine (a MacBook Pro) indicate that scrypt can generate 10 hashes per 
second with its current configuration while SHA-1 can generate about 1570733.  This 
doesn't quite seem like a trivial delay, assuming rainbow tables are off 
the... table.  Though I certainly wish to see and understand your point of view.


My real advice, as in what I would do (and have done) is to run PBKDF2 with 
something like SHA1 or SHA-256 HMAC and an absurd number of iterations, enough 
to take one to two seconds on your MBP, which would be longer on ARM. There is 
a good reason to pick SHA1 here over SHA256 and that is that the time 
differential will be more predictable.


If you'll advise the use of compute-hard PBKDFs why not also memory
hard PBKDFs?  Forget scrypt if you don't like it.  But the underlying
idea in scrypt is simple: run one PBKDF instance to generate a key for
a PRF, then generate so many megabytes of pseudo-random data from that
PRF, then use another PBKDF+PRF instance to generate indices into the
output of the first, then finally apply a KDF (possibly a simple hash
function) to the output of the second pass to generate the final
output.  The use of a PRF to index the output of another PRF is
simple, and a simple and wonderful way to construct a memory-hard
PBKDF.
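That two-pass construction can be sketched directly (a toy illustration of the shape of the idea, not scrypt itself; the buffer size and iteration counts here are arbitrary):

```python
import hashlib

def memory_hard_kdf(password: bytes, salt: bytes, mbytes: int = 1 << 20) -> bytes:
    n = mbytes // 32
    # pass 1: one PBKDF instance keys a PRF that fills a large buffer
    state = hashlib.pbkdf2_hmac("sha256", password, salt, 1000)
    cells = []
    for i in range(n):
        state = hashlib.sha256(state + i.to_bytes(4, "big")).digest()
        cells.append(state)
    # pass 2: a second PBKDF+PRF instance generates data-dependent indices
    # into the buffer, so the whole buffer has to stay resident
    acc = hashlib.pbkdf2_hmac("sha256", password, salt + b"\x01", 1000)
    for _ in range(n):
        idx = int.from_bytes(acc[:4], "big") % n
        acc = hashlib.sha256(acc + cells[idx]).digest()
    # final KDF over the output of the second pass
    return hashlib.sha256(acc).digest()

k = memory_hard_kdf(b"correct horse", b"salt")
assert len(k) == 32 and k != memory_hard_kdf(b"wrong horse", b"salt")
```

Because the second pass reads the buffer at unpredictable positions, an attacker cannot trade the megabyte of RAM for recomputation without a large slowdown, which is exactly the memory-hardness Nico describes.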

Memory and compute hardness in a PBKDF is surely to be desired.  It
makes it harder to optimize hardware for fast computation of rainbow
tables.

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Bitcoin-mining Botnets observed in the wild? (was: Re: Bitcoin in endgame

2012-05-11 Thread Adam Back
Strikes me 12TH/sec is not actually very much computation? 
http://bitcoinwatch.com/ also gives network hashrate at 12.4 TH/sec.


But a single normally clocked (925MHz) AMD 7970 based graphics card, which
has 2048 cores, is claimed to provide 555 MH/sec.


https://en.bitcoin.it/wiki/Mining_hardware_comparison#ARM (ignore the
aggressively overclocked 7970s...  the one that counts has 925 in the clock
speed column!)

But that means the entire bitcoin network compute is only equivalent to 23
AMD 7970 GPUs?  That seems wrong by orders of magnitude, there are youtube
videos of private individuals with mining rigs in that range in their
garage.

(Somewhat amusing, the scale of raw compute being thrown via SHA256 hashcash
at the bitcoin core of this thing.  Still, it's arguably not wasted CPU,
because as human endeavours go, other forms of currency have their costs too;
so a p2p currency, if it played out, might save a lot.  Personally I lost a
bundle on major currency devaluation: GBP, EUR, USD.  Look at them against
the CHF to see how much you all lost, due to whatever human / political
inefficiencies you attribute currency devaluation on that scale to.  A stable
mechanism for value storage would be a rather useful instrument.)

Adam

On Fri, May 11, 2012 at 01:20:48PM -0600, Zooko Wilcox-O'Hearn wrote:

Folks:

Here's a copy of a post I just made to my Google+ account about this
alleged Botnet herder who has been answering questions about his
operation on reddit:

https://plus.google.com/108313527900507320366/posts/1oi1v7RxR1i

=== introduction ===

Someone is posting to reddit claiming to be a malware author, botnet
operator, and that they use their Botnet to mine Bitcoin: ¹.

I asked them a question about the economics of using a Botnet for the
Bitcoin distributed transaction-verification service (Bitcoin
mining): ².

They haven't provided any proof of their claims, but on the other hand
what they write and how they write it sounds plausible to me.


=== details ===

Here are my notes where I try to double-check their numbers and see if
they make sense.

They say in their initial post ¹ that they do 13-20 gigahashes/sec of work
on the Bitcoin distributed transaction verification service.

The screenshot they provided ³ shows 10.6 gigahashes/sec (GH/s) in
progress, and that they're using a mining pool named BTCGuild.
According to this chart of mining pools ⁴, BTCGuild currently totals
about 12.5% of all known hashing power, and according to ⁵ the current
total hashing power on the network is about 12.5 terahashes/sec
(TH/s), so BTCGuild probably accounts for about 1.5 TH/s.

They say that their Botnet has about 10,000 bots. The screen shot
shows a count of total bots = 12,000 and connected in the last 24
hours = 3500. This ratio of total bots to bots connected in the last
24 hours is consistent with other reports I've read of Botnets ⁶, and
also consistent with my experience in p2p networking. The number of
live bots available at any one time for this Botnet herder should
probably average out to somewhere between 350 and 550. Let's pick 500
as an easy number to work with. Does it makes sense that 500 bots
could generate 10 GH/s? That's 20 MH/s per live bot. According to the
Bitcoin wiki's page on mining hardware ⁷, a typical widely-available
GPU should provide about 200 MH/s. Hm, so they are claiming only 1/10
the total hashpower that our back-of-the-envelope estimates would
assign to them. Here is an answer they give to another person's
question that sheds light on this: ⁸.

Q: Isn't Bitcoin mining pretty resource intensive on a computer? Like
to the point someone would notice something is up on their system form
it slowing eveyrthing down?

A: My Botnet only mines if the computer is unused for 2 minutes and
if the owner gets back it stops mining immidiatly, so it doesn't suck
your fps at MW3. Also it mines as low priority so movies don't lag. I
also set up a very safe threshold, the cards work at around 60% so
they don't get overheated and the fans don't spin as crazy.

It sounds plausible to me that those stealth measures could cut the
throughput by 10 compared to running flat-out 24/7. Also it isn't
clear if the botnet counts computers that don't have a GPU at all, or
don't have a usable one. Maybe such computers are rare nowadays?
Anyway if they are counted in there then that would be another reason
why the hashing throughput per bot is lower than I calculated.

In answer to another question ¹⁰, they said they get a steady $40/day
from running the Bitcoin transaction-confirmation (mining) service.
According to this chart ¹¹ from ¹², the current U.S. Dollar value of
Bitcoin mining is (or was a couple of days ago when they wrote that)
about $0.33 per day for 100 MH/s. Multiplying that out by their claim
of 10.6 GH/s results in $35/day. So that adds up, too.
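Zooko's back-of-the-envelope checks can be reproduced directly (the inputs are the figures quoted in the post above):

```python
# cross-check the figures quoted above
total_network_hs = 12.5e12             # ~12.5 TH/s total network hashrate
btcguild_share = 0.125                 # BTCGuild at ~12.5% of hashing power
btcguild_hs = btcguild_share * total_network_hs
assert abs(btcguild_hs - 1.5625e12) < 1e9     # ~1.5 TH/s, as stated

live_bots = 500                        # estimated live bots at any moment
claimed_hs = 10.6e9                    # 10.6 GH/s from the screenshot
per_bot = claimed_hs / live_bots
assert abs(per_bot - 21.2e6) < 1e5     # ~20 MH/s per live bot,
                                       # ~1/10 of a 200 MH/s GPU

usd_per_day = 0.33 * claimed_hs / 100e6   # $0.33/day per 100 MH/s
assert round(usd_per_day) == 35           # close to the claimed ~$40/day
```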

(Note that it sounds like their primary business is stealing and
selling credit card numbers, and the Bitcoin transaction-verification
service is a sideline.)

I don't 

Re: [cryptography] data integrity: secret key vs. non-secret verifier; and: are we winning? (was: “On the limits of the use cases for authenticated encryption”)

2012-04-26 Thread Adam Back

I think the separate integrity tag is more general, flexible and more secure
where the flexibility is needed.  Tahoe has more complex requirements and
hence needs to make use of a separate integrity tag.

I guess in general it is going to be more general, flexible if there are
separate keys (including none with keyless self-authenticated URLs) for
different properties.

Hence there remains a need for separate integrity and encryption even with
authenticated encryption modes.  


And typically AE modes have a cost - several of the standardized encryption
modes are actually just standardized ways to combine separate integrity &
encryption primitives.  The others are mostly patented.  They tend to be
more fragile, through binary reliance on strictly-one-use nonces, XOR via
counter mode, and such; modes which I think are unforgiving or fragile in
implementation terms.

Exercise for the reader to list the non-patented, non-trivial (combining an
integrity & encryption primitive) modes :)
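The generic encrypt-then-MAC composition with separate keys that such standards describe can be sketched as follows (a toy: the HMAC-derived keystream stands in for a real cipher such as AES in counter mode, and the two keys are independent):

```python
import hmac, hashlib, os

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # toy CTR-style keystream from HMAC-SHA256 (a sketch, not a vetted cipher)
    out = b""
    for ctr in range(0, n, 32):
        out += hmac.new(key, nonce + ctr.to_bytes(8, "big"),
                        hashlib.sha256).digest()
    return out[:n]

def seal(enc_key, mac_key, nonce, msg):
    ct = bytes(a ^ b for a, b in zip(msg, keystream(enc_key, nonce, len(msg))))
    # MAC over nonce || ciphertext: integrity is checked before decryption
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def open_(enc_key, mac_key, nonce, ct, tag):
    good = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, good):
        raise ValueError("integrity check failed")
    return bytes(a ^ b for a, b in zip(ct, keystream(enc_key, nonce, len(ct))))

ek, mk, nonce = os.urandom(32), os.urandom(32), os.urandom(16)
ct, tag = seal(ek, mk, nonce, b"attack at dawn")
assert open_(ek, mk, nonce, ct, tag) == b"attack at dawn"
```

The separate keys are the point: the integrity key (or no key at all, for a self-authenticating URL) can be disclosed independently of the encryption key.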

Adam
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] SHA1 extension limitations (Re: Doubts over necessity of SHA-3 cryptography standard)

2012-04-10 Thread Adam Back

Well, the length extension is not fully flexible.  ie you get SHA1( msg ),
which translates into msg-blocks || pad || msg-length, which is then fed to
SHA1-transform, and the IV is some magic values.

So the length extension applies if you start with a hash where presumably you
don't know all the msg-blocks:

h1 = SHA1( msg-blocks || pad || msg-length )

then the extended hash is:

h2 = SHA1( msg-blocks || pad || msg-length || msg2-blocks || pad2 ||
msg2-length )

so you have an unclean message with a pad || msg-length block in the
middle of it.

Maybe for some not very carefully designed protocols you can confuse
something with that.
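The mechanics can be demonstrated with a toy Merkle-Damgård hash (a sketch, not real SHA-1: the compression function here is just SHA-256 over state || block, but the padding and extension behave the same way):

```python
import hashlib, struct

BLOCK = 64

def pad(msg_len: int) -> bytes:
    # MD strengthening: 0x80, zeros, then 8-byte big-endian bit length
    p = b"\x80" + b"\x00" * ((BLOCK - (msg_len + 9) % BLOCK) % BLOCK)
    return p + struct.pack(">Q", msg_len * 8)

def toy_md(msg: bytes, state: bytes = b"\x00" * 32) -> bytes:
    # toy Merkle-Damgard hash; the final chaining state IS the digest
    data = msg + pad(len(msg))
    for i in range(0, len(data), BLOCK):
        state = hashlib.sha256(state + data[i:i + BLOCK]).digest()
    return state

secret = b"secret message the attacker never sees"
h1 = toy_md(secret)                     # attacker knows only h1 and len(secret)

suffix = b";admin=true"
glue = pad(len(secret))                 # the "pad || msg-length" block
forged = secret + glue + suffix         # the unclean message with padding
                                        # in the middle of it
# extension: resume from h1 and process only suffix (+ final padding)
state = h1
tail = suffix + pad(len(forged))
for i in range(0, len(tail), BLOCK):
    state = hashlib.sha256(state + tail[i:i + BLOCK]).digest()

assert state == toy_md(forged)          # forged hash, without knowing secret
```

This is exactly why H(key || msg) is a broken MAC for Merkle-Damgård hashes, and why HMAC wraps the hash twice.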

Adam

On Tue, Apr 10, 2012 at 12:16:28PM +1000, ianG wrote:

On 10/04/12 02:40 AM, Marsh Ray wrote:

On 04/09/2012 07:00 AM, Jeffrey Walton wrote:

http://h-online.com/-1498071

none of the five finalists are affected by known attacks on MD5,
SHA-1 and SHA-2 and the Merkle-Damgård construction on which all
three are based.


Well, gee, isn't that enough?



Not really.  When the AES competition was run, it was because DES was on
its last legs in both cryptographic terms and protection/utility terms.

In contrast, SHA1 is closely challenged in cryptographic terms, but
still no prize, and SHA2 seems light-steps away.

Further, from a cryptographic pov, these might be exciting results, but
from an engineering perspective, ho hum ... yawn?  Protocols rarely
depend solely on the hash algorithm for their integrity, there is a lot
of fat built in to most designs.  And, we've known to do this for a long
long time, e.g., to introduce nonces in signed designs.



True, one thing we've learned from the SHA-3 competition is that
SHA-2 is surprisingly good. It has held up to the collision attacks
that have plagued previous SHAs better than most had hoped. It turns
out to be quite efficient on modern 64-bit CPUs for long messages
when compared to the SHA-3 designs of similar strength. In
comparisons of hardware efficiency (i.e., throughput per gate) SHA-2
appears as good as (or better than) the SHA-3 finalists.

But as SHA-2 is still a pure Merkle–Damgård construction it deviates
from an ideal pseudorandom function or random oracle in a couple of
ways.

Firstly, and most significantly, it is subject to length extension
attacks. This means that given a hash value of some secret message,
we can compute the hash value of that message with our own chosen
plaintext appended without needing to know the original message. This
is surprising to many protocol designers!



Well, that's a surprise to me.  But on reflection, the reason it is a
surprise that I (we?) never considered that feature is because we would
never ever rely on it.  It's like saying, oh, gosh, we can use SHA1 to
protect against buffer overruns!  Happy Days!



HMAC with a secret key is supposed to be a mitigation for this, but
it is not magic pixie dust. The SSL 3.0 and TLS 1.0 - 1.2 protocols
get it wrong:

http://tools.ietf.org/html/rfc5246

7.4.9. Finished
   verify_data = PRF(master_secret, finished_label,
                     Hash(handshake_messages))[0..verify_data_length-1];


The PRF is based on HMAC, but by performing an MD5/SHA-1 on the
messages before supplying the result to the HMAC message input it
re-introduces the extension attack. I don't think this translates
into a vulnerability in TLS but it's a bit too close for comfort.

Additionally, there are:

Joux multicollisions: Multicollisions in Iterated Hash Functions.
Application to Cascaded Constructions
http://math.boisestate.edu/~liljanab/MATH509Spring2012/JouxAttackSHA-1.pdf

Herding or Nostradamus attacks http://eprint.iacr.org/2005/281

Second preimage attacks Second Preimages on n-bit Hash Functions for
Much Less than 2^n Work
http://www.schneier.com/paper-preimages.html

These are all closely related and have a basis that the attacker is
able to find that first collision, something that we hope will never
happen with SHA-2. But the manner in which other important security
properties deteriorate rapidly after that first collision is found
represents a deviation from ideal behavior.

My understanding is that the SHA-3 finalists address these issues.



Meanwhile, notwithstanding excitement nearly a decade ago now, SHA1
still chugs on.  And software engineering's got your back.

That's not to say that the SHA3 comp was unneeded.  But it wasn't the
same level of necessity that AES had.



iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PINS and [Short] Passwords

2012-04-06 Thread Adam Back

The bit tying in to my comment a few days ago is that they note Apple won't
confirm, but no doubt does provide, a signed private app that takes the
encrypted key material off the device for brute forcing.  And an app for
dumping all data off the device, if that's also not possible without
jailbreaking.

Maybe they even quietly sign apps by Elcomsoft et al that do the job, for
sale or service provision to law enforcement only.

And you can't tell me that a PIN of 4 or 6 digits is of any credible
security, no matter what iteration count is used for PBKDF2 - it's a joke,
period.  Obviously, because the login delay has to be acceptable to the user
on a puny phone processor, vs a GPU-optimized or FPGA massively parallel, or
Amazon flash-rented server farm brute force of the same.  QED.
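The arithmetic is easy to demonstrate: the entire 4-digit PIN space falls to an offline search in seconds even with PBKDF2 in the way (a sketch; the iteration count is kept deliberately low so the demo runs quickly, and raising it only scales the attack linearly):

```python
import hashlib, os

salt = os.urandom(16)
ITERS = 1_000        # deliberately low for the demo; real deployments use
                     # far more, but a 10^4 keyspace only scales linearly

def derive(pin: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERS)

target = derive("2468")     # stand-in for the verifier extracted off-device

# exhaustive offline search of every 4-digit PIN
recovered = next(p for p in ("%04d" % n for n in range(10_000))
                 if derive(p) == target)
assert recovered == "2468"
```

On device, the per-attempt delay is what limits the user; off-device, the attacker simply multiplies that delay by 10,000 and parallelizes.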

These days the same pretty much applies to any 8-char alphanumeric password.
Passwords are dead.  People need to face reality and adapt.


(And by that I mean improve the crypto away from 1980s era password based
protocols, not give up.)

Adam

On Thu, Apr 05, 2012 at 11:42:16PM -0400, Jeffrey Walton wrote:

On Wed, Apr 4, 2012 at 3:45 PM, Jeffrey Walton noloa...@gmail.com wrote:

Hi All,

Older iOS devices used a 4 digit PIN code, which was next to no
protection. Newer iOS allow passcodes which consist of a full
(fuller?) alphabet.

Assuming a weak password policy (for example, 4 or 6 characters) are
there any real benefits over PINs?

What is the state of the art for mobile password cracking on iOS and Android?

Ask and you shall receive (Ars Technica dropped it yesterday):

http://arstechnica.com/apple/news/2012/04/can-apple-give-police-a-key-to-your-encrypted-iphone-data-ars-investigates.ars

Does Apple have a backdoor that it can use to help law enforcement
bypass your iPhone's passcode? That question became front and center
this week when training materials (PDF) for the California District
Attorneys Association started being distributed online with a line
implying that Apple could do so if the appropriate request was filed
by police.

As with most things, the answer is complex and not very
straightforward. Apple almost definitely does help law enforcement get
past iPhone security measures, but how? Is Apple advising them using
already well-known cracking techniques, or does the company have
special access to our iDevices that we don't know about? Ars decided
to try to find out.
...

If Apple does keep device key records, they could be given to law
enforcement for a faster brute-force session off-device. It is pretty
much impractical to break a six-character passcode on the device
itself, but may be entirely practical offline using specialized
systems. So to me it seems like it might be possible for Apple to help
[a law enforcement official], but not directly, if they really store
these hardware keys, but again, nobody knows if they do that or not,
[Charlie] Miller said.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] PINS and [Short] Passwords

2012-04-04 Thread Adam Back

Surely one can't think of the limitations (the requirement for cooperation
from the OS to test the PIN) as if they were cryptographic limitations...

Apple probably supplies such a service themselves to law enforcement, as a
private Apple-approved ready-to-go app.

Adam

On Wed, Apr 04, 2012 at 03:45:09PM -0400, Jeffrey Walton wrote:

Hi All,

Older iOS devices used a 4 digit PIN code, which was next to no
protection. Newer iOS allow passcodes which consist of a full
(fuller?) alphabet.

Assuming a weak password policy (for example, 4 or 6 characters) are
there any real benefits over PINs?

What is the state of the art for mobile password cracking on iOS and Android?

Jeff

PS, I am aware of XRY software
(http://www.forbes.com/sites/andygreenberg/2012/03/27/heres-how-law-enforcement-cracks-your-iphones-security-code-video/)
and its limitation
(http://9to5mac.com/2012/04/02/xrys-two-minute-iphone-passcode-exploit-debunked/)
(thanks LW).

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Key escrow 2012

2012-03-30 Thread Adam Back

As I recall people were calling the PGP ADK feature corporate access to
keys, which the worry was, was only policy + config away from government
access to keys.

I guess the sentiment still stands, and with some justification; people are
still worried about law enforcement access mechanisms for internet &
telecoms equipment and protocols being used in places like Syria, Iran, etc,
which is a quite similar scenario.

And as we all know adding key recovery and TTPs etc is a risk, cf
The Risks of Key Recovery, Key Escrow, and Trusted Third-Party Encryption
by Abelson, Anderson, Bellovin, Benaloh, Blaze, Diffie, Gilmore, Neumann,
Rivest, Schiller & Schneier.

http://www.crypto.com/papers/key_study.pdf

Not sure that we lost the crypto wars.  US companies export full-strength
crypto these days, and neither the US nor most other western countries have
mandatory GAK.  Seems like a win to me :)

Adam

On Fri, Mar 30, 2012 at 12:24:47PM +1100, ianG wrote:

On 30/03/12 09:38 AM, Jon Callas wrote:


Also, there wasn't a PGP system. The PGP additional decryption key is really what we'd 
call a data leak prevention hook today, but that term didn't exist then. Certainly, 
lots of cypherpunks called it that at the time, but the government types who were talking up the 
concept blasted it as merely a way to mock (using that very word) the concept.




And therein lies another story!  Which always seems to end:  and then 
we lost the crypto wars.  I treat it as a great learning experience.




iang
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)

2012-03-23 Thread Adam Back

You know, PFS, while a good idea (and IMNSHO all non-PFS ciphersuites should
be deprecated etc), just ensures the communicating parties delete the
key-negotiation ephemeral private keys after use.

Which does nothing intrinsic to prevent a massive-computation-powered
1024-bit discrete log on stored PFS traffic, other than to the limited
extent of creating a fresh public key to attack on each session.
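The mechanics can be sketched with a toy ephemeral Diffie-Hellman exchange (illustration only: the 61-bit Mersenne prime here is hopelessly small for real use, where you would want a 2048-bit-plus group or elliptic curves):

```python
import secrets, hashlib

# toy parameters, illustration only: 61-bit Mersenne prime, generator 2
p, g = 2**61 - 1, 2

def session_key() -> bytes:
    a = secrets.randbelow(p - 2) + 1        # Alice's ephemeral secret
    b = secrets.randbelow(p - 2) + 1        # Bob's ephemeral secret
    A, B = pow(g, a, p), pow(g, b, p)       # public values on the wire
    k_alice = hashlib.sha256(str(pow(B, a, p)).encode()).digest()
    k_bob = hashlib.sha256(str(pow(A, b, p)).encode()).digest()
    assert k_alice == k_bob
    del a, b            # "PFS": the ephemeral secrets are erased after use;
    return k_alice      # an eavesdropper keeps only A and B

k1, k2 = session_key(), session_key()
assert k1 != k2         # a fresh discrete-log target per session
```

Deleting a and b is all PFS guarantees; an adversary who records A and B and can later compute discrete logs in the group still recovers every session key, which is Adam's point.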

A secondary point if you are concerned about after-the-fact computational
power overtaking your stored ciphertext (PFS or not) is to avoid using
shared public crypto parameters for discrete log based cryptosystems.

Shared public parameters are also bad because there can be some reusable
precomputation or shared parameter specific optimized hardware.  The flip
side is the parameter generation for the EC DL cryptosystems is so complex
that it will probably be safer for most purposes to use a standardized
parameter set, than to generate your own, if your library even supports it.

Adam

On Fri, Mar 23, 2012 at 08:42:29AM -0400, d...@geer.org wrote:


jd...@lsuhsc.edu writes, in part:

 Even if the intercepted communication is AES encrypted and unbroken
 today, all that stored data will be cracked some day. Then it too
 can be data-mined.

What percentage of acquirable/storable traffic do you
suppose actually exhibits perfect forward secrecy?

--dan


...those who control the past control the future.
  --  George Orwell
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-03-05 Thread Adam Back

Further the fact that the entropy seeding is so bad that some
implementations are generating literally the same p value (but seemingly
different q values) I would think you could view the fact that this can be
detected and efficiently exploited via batch GCD as an indication of an even
bigger problem.

Namely, if the seeding is that bad, you could outright compute all possible
values of p even for cases where p was not shared, by running through the
evidently small (by cryptographic standards) number of possible PRNG
states...

Then you might be looking at more than 1% or whatever the number is that
literally collide in specific p value.  Assuming p is more vulnerable than
q, you could then use the same batch GCD to test.
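The shared-prime detection itself is elementary (a sketch with toy 5-digit primes; the real attack uses a product-tree batch GCD to scale to millions of moduli):

```python
import math
from itertools import combinations

# toy "device keys": bad entropy means two moduli reuse the prime 10007
moduli = [10007 * 10009, 10007 * 10037, 10039 * 10061]

shared = {}
for n1, n2 in combinations(moduli, 2):
    g = math.gcd(n1, n2)
    if 1 < g < min(n1, n2):
        # a shared prime factors both moduli on the spot
        shared[(n1, n2)] = (g, n1 // g, n2 // g)

assert shared == {(10007 * 10009, 10007 * 10037): (10007, 10009, 10037)}
```

Pairwise GCD is quadratic in the number of keys; Bernstein-style batch GCD with product and remainder trees is what makes scanning an internet-wide key corpus feasible.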

Adam

On Thu, Feb 16, 2012 at 02:47:21PM -0600, Nico Williams wrote:

On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:

Are you thinking this is because it causes the entropy estimate in the RNG to 
be higher than it really is? Last time I checked OpenSSL it didn't block 
requests for numbers in cases of low entropy estimates anyway, so line 3 
wouldn't reduce security for that reason.


I  am thinking this because in low entropy cases where multiple boxes generate 
the same first prime adding that additional entropy before the second prime is 
generated means they are likely to generate a different second prime leading to 
the GCD attack.


I'd thought that you were going to say that so many devices sharing
the same key instead of one prime would be better on account of the
problem being more noticeable.  Otherwise I don't see the difference
between one low-entropy case and another -- both are catastrophic
failures.

Nico
--
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography

___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Adam Back
I was also pondering how the implementers could have arrived at this
situation, towards evaluating Stephen Farrell's draft idea to have
a service that double-checks at key gen time (that none of the p, q
values are reused).  http://www.ietf.org/id/draft-farrell-kc-00.txt

(Which I don't think is nearly robust enough to rely on, but it doesn't
hurt as a sanity check that will catch a small percentage of entropy
offenders).

So then what *could* have gone wrong?

1. (most generous) they had an RNG that accounted for entropy
estimation, waited until it had the target amount (e.g. 128 bits
minimum) and then generated p, q, BUT the entropy was overestimated;
it was MUCH worse than estimated

2. they didn't bother estimating entropy; the system clock was not set,
or the device didn't have one and/or they didn't use it, or they used
memory state after boot from ROM, or something predictable enough to
produce the collision rate.  (aka incompetence)

3. your idea -- maybe -- more incompetent things have happened :)

Occam's razor suggests cryptographic incompetence: the number one reason
deployed systems have crypto fails.  Who needs to hire crypto people;
the developer can hack it together, how hard can it be, etc.  There's a
psychological theory of why this kind of thing happens in general,
the Dunning-Kruger effect [1].  But maybe 1 happened.

Adam

[1] http://en.wikipedia.org/wiki/Dunning–Kruger_effect

On 18 February 2012 07:57, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Adam Back a...@cypherspace.org writes:

Further the fact that the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values)
I would think you could view the fact that this can be detected and
efficiently exploited via batch GCD as an indication of an even bigger
problem.

 Do we know that this is accidental rather than deliberate?  A cute
 optimisation for keygen would be to only randomly generate one half of the
 {p,q} pair.  It's plenty of randomness after all, surely you don't really need
 both to be generated randomly, only one will do, and it'll halve the keygen
 time...

 Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-18 Thread Adam Back
I missed a favorite of mine that I've personally found multiple times
in deployed systems from small (10k users) to large (mil plus users),
without naming the guilty:

4. The RNG used was rand(3), and while there were multiple entropy
additions, they were fed using multiple invocations of srand(3).

That's a double fail at the least, and commonly a triple or quadruple fail also:

a. rand(3)'s internal state is tiny (effectively 32 bits, the size of the seed);

b. they even misunderstood the srand(3) API: calling srand multiple
times resets the state fully each time, i.e. sometimes with less entropy
on subsequent calls;

c. commonly the entropy was weak and not measured anyway;

d. the entropy was added at random points during the security-critical
phase, so that even if there were 128 bits, it was released in
externally observable or testable ways in sub-32-bit lumps.
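Failure mode (b) is easy to reproduce (a sketch in Python, whose random.seed has the same replace-not-mix semantics as C's srand(3)):

```python
import random

# re-seeding REPLACES the whole PRNG state; it does not mix new
# entropy into the existing state
random.seed(12345)     # initial "entropy"
random.seed(7)         # later, weaker "entropy" is "added"...
a = random.random()

random.seed(7)         # ...so an attacker guessing only the last seed
b = random.random()    # reproduces the stream exactly

assert a == b
```

Mixing correctly requires an API designed for it (e.g. feeding a pool that hashes new input into existing state), which is exactly what these implementations lacked.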

I guess the developers were uber-confident in their competence while
hacking that together :)  Dunning-Kruger effect at work!

Sometimes they may even have competent math cryptographers, but ones who
can't program and don't look at code, and the developers never asked
about how to generate randomness.

I'm wondering if that is quite plausible for this case... the effect
would be rather like the one observed.

Adam

On 18 February 2012 10:40, Adam Back a...@cypherspace.org wrote:
 I also was pondering as to how the implementers could have arrived at
 this situation towards evaluating Stephen Farrell's draft idea to have
 a service that double checks at key gen time (that none of the p, q
 values are reused).  http://www.ietf.org/id/draft-farrell-kc-00.txt

 (Which I dont think is nearly robust enough for relying on, but doesnt
 hurt as a sanity check that will catch a small percentage of entropy
 offenders).

 So then what *could* have gone wrong?

 1. (most generous) they had a rng that accounted for entropy
 estimation, waited until it had the target amount (eg 128-bits
 minimum) and then generated p, q BUT there was an overestimate of the
 entropy, it was MUCH worse than estimated

 2. they didnt bother estimating entropy, the system clock was not set
 or it didnt have one and/or they didnt use it or used memory state
 after boot from rom or something predictable enough to show the
 collision rate.  (aka incompetence)

 3. your idea -- maybe -- more incompetent things have happened :)

 Occam's razor suggests cryptographic incompetence.. number one reason
 deployed systems have crypto fails.  Who needs to hire crypto people,
 the developer can hack it together, how hard can it be etc.  There's a
 psychological theory of why this kind of thing happens in general -
 the Dunning-Kruger effect.  But maybe 1 happened.

 Adam

 [1] http://en.wikipedia.org/wiki/Dunning–Kruger_effect

 On 18 February 2012 07:57, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Adam Back a...@cypherspace.org writes:

Further the fact that the entropy seeding is so bad that some implementations
are generating literally the same p value (but seemingly different q values)
I would think you could view the fact that this can be detected and
efficiently exploited via batch GCD as an indication of an even bigger
problem.

 Do we know that this is accidental rather than deliberate?  A cute
 optimisation for keygen would be to only randomly generate one half of the
 {p,q} pair.  It's plenty of randomness after all, surely you don't really 
 need
 both to be generated randomly, only one will do, and it'll halve the keygen
 time...

 Peter.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-17 Thread Adam Back
Further the fact that the entropy seeding is so bad that some
implementations are generating literally the same p value (but seemingly
different q values) I would think you could view the fact that this can be
detected and efficiently exploited via batch GCD as an indication of an even
bigger problem.

Namely if the seeding is that bad you could outright compute all possible
values of p even for cases where p was not shared, by running through the
evidently small by cryptographic standards number of possible PRNG states...

Then you might be looking at more than 1% or whatever the number is that
literally collide in specific p value.  Assuming p is more vulnerable than
q, you could then use the same batch GCD to test.

Adam


On 16 February 2012 21:47, Nico Williams n...@cryptonector.com wrote:
 On Thu, Feb 16, 2012 at 12:28 PM, Jeffrey Schiller j...@qyv.net wrote:
 Are you thinking this is because it causes the entropy estimate in the RNG 
 to be higher than it really is? Last time I checked OpenSSL it didn't block 
 requests for numbers in cases of low entropy estimates anyway, so line 3 
 wouldn't reduce security for that reason.

 I  am thinking this because in low entropy cases where multiple boxes 
 generate the same first prime adding that additional entropy before the 
 second prime is generated means they are likely to generate a different 
 second prime leading to the GCD attack.

 I'd thought that you were going to say that so many devices sharing
 the same key instead of one prime would be better on account of the
 problem being more noticeable.  Otherwise I don't see the difference
 between one low-entropy case and another -- both are catastrophic
 failures.

 Nico
 --


Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Adam Back

Well, I am not sure how they can hope to go very far underground.  Any and
all users on their internal network could easily detect and anonymously
report the MITM cert for some public web site without any significant risk
of it being tracked back to them.  Game over.  So removal of one CA from a
major browser like Mozilla would pretty much end this practice, if it is true
that any CAs other than Trustwave actually did this...

Adam

On Tue, Feb 14, 2012 at 11:40:06AM +0100, Ralph Holz wrote:

Ian,

Actually, we thought about asking Mozilla directly and in public: how
many such CAs are known to them? I'd have thought that some would have
disclosed themselves to Mozilla after the communication of the past few
weeks. Your mail makes it seem as if that was not the case, or not to a
satisfying degree. Which makes me support Marsh Ray's one-strike
proposal even more strongly: issuing a death sentence to a CA who has
disclosed is counter-productive. It will drive the others deeper into
hiding.

You know, I can't help but think of the resemblance to the real-world
death penalty for humans - AFAICT it does not seem to deter criminals.

Ralph

On 02/14/2012 03:31 AM, ianG wrote:

Hi all,

Kathleen at Mozilla has reported that she is having trouble dealing with
Trustwave question because she doesn't know how many other CAs have
issued sub-roots that do MITMs.

Zero, one, a few or many?

I've sent a private email out to those who might have had some direct
exposure.  If there are any others that might have some info, feel free
to provide evidence to kwil...@mozilla.com or to me if you want it
suitably anonymised.

If possible, the name of the CA, and the approximate circumstance.  Also
how convinced you are that it was a cert issued without the knowledge of
the owner.  Or any information really...

Obviously we all want to know who and how many ... but right now is not
the time to repeat demands for full disclosure.  Right now, vendors need
to decide whether they are dropping CAs or not.

iang



--
Ralph Holz
Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/
PGP: A805 D19C E23E 6BBB E0C4  86DC 520E 0C83 69B0 03EF








Re: [cryptography] how many MITM-enabling sub-roots chain up to public-facing CAs ?

2012-02-14 Thread Adam Back

My point is this - say you are the CEO of a CA.  Do you want to bet your
entire company on no one ever detecting or reporting the MITM sub-CA that
you issued?  I wouldn't do it.  All it takes is one savvy or curious guy in a
10,000-person company.

Consequently, if there are any other CAs that have done this, they now know
Mozilla and presumably other browsers are on to them, and they need to revoke
any MITM sub-CA certs and stop doing it, or they risk their CA going
bankrupt like DigiNotar.

Adam

On Tue, Feb 14, 2012 at 03:51:16PM +0100, Ralph Holz wrote:

If all users used a tool like Crossbear that does automatic reporting,
yes. But tools like that are a recent development (and so is
Convergence, even though it was predated by Perspectives).

More importantly, however, how capable do you judge users to be? How
wide-spread do you expect such tools to become? Most users wouldn't know
what to look for in the beginning, and they would much less care.

Following your argument, in fact, we should have a large DB of MITM
certs and incidents already.  We don't - but not because CAs would not
have issued MITM certs for sub-CAs, surely?

No, CAs would try to hide the fact that they have issued certs that are
good for MITM on a corporate network.  Some big CAs -- too big to fail, even,
maybe, and what about them? -- have not yet publicly stated that they
have never issued such certs.  I think giving them a chance at amnesty is
a better strategy.

Ralph

--
Ralph Holz
Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/
PGP: A805 D19C E23E 6BBB E0C4  86DC 520E 0C83 69B0 03EF






Re: [cryptography] reports of T-Mobile actively blocking crypto

2012-01-11 Thread Adam Back

You know, I also noticed mail-sending problems when I was in the UK a month
or two ago.  I am in transit via Heathrow right now, and now I have no
problem.  This is pay-as-you-go T-Mobile.  So maybe they saw the PR problem
brewing and stopped whatever they were doing.

One gotcha (though I am sure it wasn't my problem before) is that around that
time they introduced a splash page to give you more pricing options; before,
a single price plan was chosen automatically at time of top-up and applied
automatically to your connections.  The result is that now you are not
actually connected until you click through the splash page in a browser and
choose a plan.

(Off topic, but the T-Mobile pay-as-you-go data is pretty convenient if you
are in the UK occasionally: 2gbp for 1 day, 4gbp/3 days, 7gbp/7 days,
15gbp/30 days.  Can't seem to find anything decent like that in the US.)

Adam

On Tue, Jan 10, 2012 at 01:59:09PM +, Joss Wright wrote:

On Mon, Jan 09, 2012 at 09:14:54PM -0500, Steven Bellovin wrote:

https://grepular.com/Punching_through_The_Great_Firewall_of_TMobile

I know nothing more of this, including whether or not it's accurate

--Steve Bellovin, https://www.cs.columbia.edu/~smb



Re: [cryptography] Password non-similarity?

2012-01-02 Thread Adam Back
On 2 January 2012 03:01, ianG i...@iang.org wrote:
 When I was a rough raw teenager doing this, I needed around 2 weeks to
 pick up 5 letters from someone typing like he was electrified.  The other 3
 were crunched in 4 hours on a vax780.

 how many samples? (distinct shoulder surf events)


 About 1 a day, say 10, without making it obvious.

The trick to counteracting shoulder surfing is to touch type and hold the
shoulder surfer's gaze, so you know they are not looking at your
key presses.  My computer teacher in high school used to do that, I
noticed.


Separately and relatedly, I was thinking of having a go at designing a
human-computable challenge-response for occasions when you know or believe
your typing is being observed.  E.g. the human remembers single-digit numeric
coefficients to 8 simultaneous equations mod 10 (16 digits):

r1 = a.x1+b.x2 mod 10
r2 = c.x3+d.x4 mod 10
...
r8 = o.x15+p.x16 mod 10

The computer generates x1 - x16 at random between -9 and +9.  Now a shoulder
surfer sees fewer than 8 challenges responded to, and they have only 1
equation for each pair of unknowns.  The challenges are single-use.

The response (what is typed to log in) is r1 .. r8, an 8-digit number.

That was just the rough idea; no calculations done yet.  Maybe one can reduce
the number of terms and safely allow more than one use with a bit of
tinkering.
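The rough idea can be put in code as follows (a toy sketch with the parameters as stated - 16 memorised digits, 8 two-term equations mod 10; `respond` stands in for both the human's mental arithmetic and the verifier's recomputation):

```python
import random

# The 16 memorised single-digit coefficients: a, b, c, d, ... p.
secret = [random.randrange(10) for _ in range(16)]

def challenge():
    # Computer generates x1 .. x16 at random between -9 and +9.
    return [random.randint(-9, 9) for _ in range(16)]

def respond(secret, xs):
    # r_i = secret[2i]*x[2i] + secret[2i+1]*x[2i+1] mod 10:
    # each response digit mixes one pair of challenge values.
    return [(secret[2 * i] * xs[2 * i] +
             secret[2 * i + 1] * xs[2 * i + 1]) % 10
            for i in range(8)]

xs = challenge()          # displayed on screen
r = respond(secret, xs)   # the 8 digits the user types
```

A shoulder surfer who records one (challenge, response) pair learns only one linear equation per pair of unknown coefficients, which is the property the post relies on.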

I was thinking it might be interesting for encrypted file systems
also.  Normally you log in with your passphrase when you are confident
you are not being shoulder surfed and no public video surveillance is in
place (e.g. an airport).  But this way you have a second login mechanism,
with a limited number of logins, that is safe to use.  The challenges, and
the disk key encrypted with a salted, iterated hash of the challenge
response, can be stored separately, one per login, and over-written after
use, preventing hostile reuse.  After login they can be replaced with new
ones.

Adam


[cryptography] implementation of NIST SP-108 KDFs?

2011-12-28 Thread Adam Back

As there are no NIST KATs / test vectors for the KDFs defined in NIST SP
800-108, I wonder if anyone is aware of any open-source implementations of
them to use for cross-testing?
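In the absence of official test vectors, a reference sketch is still useful for cross-testing. The following is an assumed counter-mode construction with HMAC-SHA256 as the PRF; SP 800-108 leaves the exact field encodings (counter width and position, length field) to the implementer, so these choices must be matched to whatever implementation you are comparing against:

```python
import hashlib
import hmac

def kdf_ctr_hmac_sha256(k_in, label, context, length):
    """KDF in counter mode with HMAC-SHA256 as the PRF:
    K(i) = HMAC(k_in, [i]_32 || label || 0x00 || context || [L]_32),
    assuming a 32-bit big-endian counter before the fixed input data
    and a 32-bit big-endian output-length-in-bits field after it."""
    l_bits = (length * 8).to_bytes(4, "big")
    out = b""
    i = 1
    while len(out) < length:
        data = i.to_bytes(4, "big") + label + b"\x00" + context + l_bits
        out += hmac.new(k_in, data, hashlib.sha256).digest()
        i += 1
    return out[:length]

okm = kdf_ctr_hmac_sha256(b"\x00" * 32, b"label", b"context", 40)
```

Two independent implementations agreeing on outputs for the same (key, label, context, length) inputs is exactly the cross-test being asked about.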

Adam


Re: [cryptography] How are expired code-signing certs revoked? (nonrepudiation)

2011-12-22 Thread Adam Back

Stefan Brands' credentials [1] have an anti-lending feature whereby you have
to know all of the private components in order to make a signature with them.

My proposal, related to what you said, was to put a high-value ecash coin in
as one of the private components.  Now they have a direct financial incentive -
if they get hacked and their private keys stolen, they lose $1m untraceably.

Now that's quite reassuring - and it encapsulates a smart contract where they
get an automatic fine, or a good-behavior bond.  I think you could put a
bitcoin in there instead of a high-value Brands-based ecash coin.  Then you
could even tell that it wasn't collected by looking in the spend list.

Adam

[1] http://www.cypherspace.org/credlib/ - a library implementing Brands
credentials; it has pointers to the U-Prove spec, Brands' thesis in PDF form,
etc.

On Thu, Dec 22, 2011 at 07:17:21AM +, John Case wrote:


On Wed, 7 Dec 2011, Jon Callas wrote:

Nonrepudiation is a somewhat daft belief. Let me give a 
gedankenexperiment. Suppose Alice phones up Bob and says, Hey, 
Bob, I just noticed that you have a digital nature from me. Well, 
ummm, I didn't do it. I have no idea how that could have happened, 
but it wasn't me. Nonrepudiation is the belief that the 
probability that Alice is telling the truth is less than 2^{-128}, 
assuming a 3K RSA key or 256-bit ECDSA key either with SHA-256. 
Moreover, if that signature was made with an ECDSA-521 bit key and 
SHA-512, then the probability she's telling the truth goes down to 
2^{-256}.


I don't know about you, but I think that the chance that Alice was 
hacked is greater than 1 in 2^128. In fact, I'm willing to believe 
that the probability that somehow space aliens, or Alice has an 
unknown evil twin, or some mad scientist has invented a cloning ray 
is greater than one in 2^128. Ironically, as the key size goes up, 
then Alice gets even better excuses. If we used a 1k-bit ECDSA key 
and a 1024-bit hash, then new reasonable excuses for Alice suggest 
themselves, like that perhaps she *considered* signing but didn't 
in this universe, but in a nearby universe (under the many-worlds 
interpretation of quantum mechanics, which all the cool kids 
believe in this week) she did, and that signature from a nearby 
universe somehow leaked over.



This is silly - it assumes that there are only two interpretations of
her statement:


- a true collision (something arbitrary computes to her digital 
signature, which she did not actually invoke) which is indeed as 
astronomically unlikely as you propose.


- another unlikely event whose probability happens to be higher than 
the collision.


But of course there is a much simpler, far more likely explanation, 
and that is that she is lying.


However ... this did get me to thinking ...

Can't this problem be solved by forcing Alice to tie her signing key
to some other function(s)[1] that she would have a vested interest in
protecting AND an attacker would have a vested interest in exploiting?


I'm thinking along the lines of:

I know Alice didn't get hacked because I see her bank account didn't 
get emptied, or I see that her ecommerce site did not disappear.


I know Alice didn't get hacked because the bitcoin wallet that we 
protected with her signing key still has X bitcoins in it, where X is 
the value I perceived our comms/transactions to be worth.


Or whatever.



Re: [cryptography] airgaps in CAs

2011-12-13 Thread Adam Back

Presumably these sub-CA certs have pathLenConstraint = 0?  (Meaning they are
only authorized to issue site certificates, not further sub-CA
certificates.)

Otherwise it could run away from them... one compromise and the attacker
gets a sub-sub-CA, rather than a site certificate.
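What pathLenConstraint = 0 enforces during chain validation can be illustrated with a toy checker (simplified field names, not a real X.509 validator):

```python
# Each issuing CA cert in the chain is modelled as (is_ca, path_len),
# root first; path_len = 0 means the cert may sign end-entity certs
# but no further sub-CA certs, path_len = None means no limit.
def path_len_ok(ca_chain):
    for i, (is_ca, path_len) in enumerate(ca_chain):
        if not is_ca:
            return False                # a non-CA cert issued a cert
        below = len(ca_chain) - 1 - i   # CA certs issued beneath this one
        if path_len is not None and below > path_len:
            return False
    return True

# root -> sub-CA (pathLen=0) -> site cert: allowed
ok = path_len_ok([(True, None), (True, 0)])
# a compromised pathLen=0 sub-CA mints a sub-sub-CA: rejected
bad = path_len_ok([(True, None), (True, 0), (True, None)])
```

This is exactly the "run away from them" scenario: without the constraint, the third chain would validate.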


For these sub-CA certs with no name constraints, is there some other form
of operationally imposed name constraint - i.e. does the issuing software
impose a choice of owned domains, so that the day-to-day cert issuer role is
not able to issue certs for non-owned domains?

(With an admin role required to add further valid domains?)


I have to say it doesn't seem that much of an operational headache (as
against the justification for removing name constraints).  The only
operational task is to pay an admin fee to the CA, click on a few links in
emails to the acquired company's domain registration addresses, and import
the new cert?

Adam

On Mon, Dec 12, 2011 at 06:21:41PM -0800, Arshad Noor wrote:

On 12/9/2011 12:27 AM, Adam Back wrote:


Do the air gapped private PKI root certs (and if applicable their
non-airgapped sub-CA certs they authorize) have the critical name
constraint
extension eg .foocorp.com meaning it is only valid for creating certs for
*.foocorp.com?



The early ones did.  However, we stopped putting in the constraint as
we became aware that it created some operational headaches when
companies merged with or acquired other companies and needed certificates
under the domain name of the merged/acquired company (to preserve
legacy applications and customers), which was different from the
domain names in the constraint.

Secondly, the constraint is perceived as protecting the TTP CAs more
than the Subject; and since the TTPs did not mandate it in their CPs,
there was no reason to include it.  (I have already heard that one TTP
CA is rethinking this and is considering mandating it on all new and
renewed certs.)


(I am presuming these private PKI certs are sub-CA certs certified by a CA
listed in browsers.)


In some cases, that is correct.  Others are closed PKIs - self-signed
and only for internal use (example: as in multiple components of
bio-technology products that strongly authenticate to each other before
enabling the product's use).

Arshad Noor
StrongAuth, Inc.



Re: [cryptography] airgaps in CAs

2011-12-09 Thread Adam Back

Hi Arshad

Do the air gapped private PKI root certs (and if applicable their
non-airgapped sub-CA certs they authorize) have the critical name constraint
extension eg .foocorp.com meaning it is only valid for creating certs for
*.foocorp.com?

(I am presuming these private PKI certs are sub-CA certs certified by a CA
listed in browsers.)
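A toy sketch of the dNSName name-constraint matching being asked about (simplified; real RFC 5280 processing also covers excluded subtrees, IP addresses, and directory names):

```python
# A permitted-subtree entry of ".foocorp.com" matches any host under
# foocorp.com; an entry of "foocorp.com" also matches the bare domain.
# (Leading-dot handling varies between implementations - this is one
# common convention, not a full RFC 5280 matcher.)
def dns_name_permitted(name, permitted_subtrees):
    name = name.lower()
    for sub in permitted_subtrees:
        sub = sub.lower()
        if sub.startswith("."):
            if name.endswith(sub):
                return True
        elif name == sub or name.endswith("." + sub):
            return True
    return False

allowed = dns_name_permitted("www.foocorp.com", [".foocorp.com"])
blocked = dns_name_permitted("gmail.com", [".foocorp.com"])
```

With such a critical extension in the sub-CA cert, a cert it issues for a non-owned domain fails validation in the browser rather than depending on CA policy alone.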

Adam

On Thu, Dec 08, 2011 at 10:04:05AM -0800, Arshad Noor wrote:

I am aware of at least one public CA - still in business - that
fits this description.

Every private PKI we have setup since 1999 (more than a dozen, of
which a few were for the largest companies in the world) has had
the Root CA on a non-networked machine with commensurate controls
to protect the CA.

Arshad Noor
StrongAuth, Inc.

On 12/08/2011 06:54 AM, Eugen Leitl wrote:


Is anyone aware of a CA that actually maintains its signing
secrets on secured, airgapped machines, with transfers batched and
done purely by sneakernet?



Re: [cryptography] Another CA hacked, it seems.

2011-12-08 Thread Adam Back

Did they successfully hack the CA functionality, or just a web site housing
network design documents for various Dutch government entities?  From what
survives Google Translate of the original Dutch, it appears to be the latter,
no?

And if Kerckhoffs' principle was followed, what does it matter if some
network design docs were leaked?  You would hope they don't contain router
passwords or such things.

I'd hesitate to call that "a CA hacked" even if the web site belonged to
someone who operates a CA.


Is there more detail?

Adam

On Thu, Dec 08, 2011 at 03:26:08PM +0100, Ralph Holz wrote:

As I said, at this rate we shall have statistically meaningful large
numbers of CA hacks by 2013:

http://translate.google.com/translate?sl=autotl=enjs=nprev=_thl=enie=UTF-8layout=2eotf=1u=http%3A%2F%2Fwebwereld.nl%2Fnieuws%2F108815%2Fweer-certificatenleverancier-overheid-gehackt.htmlact=url

Ralph



Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-12-06 Thread Adam Back

Yes, Peter said the same, BUT do you think they have a valid cert chain?  Or
is it signed by a self-signed company-internal CA, with that internal CA
added to the corporate install that you mentioned?  That's the cut-off of
acceptability for me - a full publicly valid cert chain on other people's
domains for MitM is very bad.  An internal cert chain via adding a cert to
the browser - a corporation can go for it; it's their network, their
equipment to install software on!

(Bearing in mind it's the corporate intention to keep other people off their
network with firewalls, network auth etc.)  One claim by Lucky, if I recall,
is that the new trend of bring-your-own-device (iPhone, Android, iPad etc.)
starts to cause a conflict - it becomes complicated for the corporation to
install certs into all those browsers.  They no longer control the OS/app
install.

I think that's true - but in effect, if your environment is that security
conscious, you probably should not be allowing BYOD anyway - who knows what
malware is on it, and bypassing your egress is completely _trivial_ with
software, or even just configuration of software.  And anyway, since when
does your minor inconvenience in installing certs authorize you or CAs to
subvert the SSL guarantee and other people's security?  Even people who have
internal CAs for certification SHOULD NOT be abusing them for MitM.

Adam

On Tue, Dec 06, 2011 at 10:52:43AM +, Florian Weimer wrote:

* Adam Back:


Are there really any CAs which issue sub-CA for deep packet inspection aka
doing MitM and issue certs on the fly for everything going through them:
gmail, hotmail, online banking etc.


Such CAs do exist, but to my knowledge, they are enterprise-internal CAs
which are installed on corporate devices, presumably along with other
security software.  Even from a vendor point of view, this additional
installation step is desirable because it fits well with a per-client
licensing scheme, so I'm not sure what the benefit would be to get a
certificate leading to one of the public roots.

--
Florian Weimerfwei...@bfk.de
BFK edv-consulting GmbH   http://www.bfk.de/
Kriegsstraße 100  tel: +49-721-96201-1
D-76133 Karlsruhe fax: +49-721-96201-99



Re: [cryptography] really sub-CAs for MitM deep packet inspectors? (Re: Auditable CAs)

2011-12-02 Thread Adam Back

Well, I was aware of RA arrangements where you do your own RA and on the CA
side they limit you to issuing certs belonging to you; if I recall, Thawte
was selling those.  (They pre-vet your ownership of some domains -
foocorp.com, foocorpinc.com etc. - and then you can issue www.foocorp.com,
*.foocorp.com ... however you don't get a CA private key, you get web
integration with the RA so you don't have to do the email verification dance
for each cert you create.)

Handing over a signed sub-CA is a much higher level of risk, unless perhaps
it has a constraint on the domain names of certs it can sign baked into it,
which is possible.

To hand over a blank-cheque sub-CA cert that could sign gmail.com is
somewhat dangerous.  But you notice that GeoTrust require it to be in a
hardware token, with some audits blah blah, AND more importantly that you
agree not to create certs for domains you don't own.

The start of the thread was that Greg and maybe others claim they've seen a
cert in the wild doing MitM on domains they definitionally do NOT own.

(The Windows 2k box sounds scary for sure, and you had better hope that's
not a general unrestricted sub-CA cert - but even then you could admit it's
similar to the security practices of DigiNotar.)

Secure as the weakest link, and the weakest link just keeps getting lower.
It would be interesting to know if there really are CAs lax enough to issue
a sub-CA cert to a Windows box with no hardware container for the private
key.  (Not that it makes that much difference... hack the RA and the private
key doesn't matter so much.)

The real question again is: can we catch a Boingo or corp LAN or government
using a MitM sub-CA cert?  Then we'll know which CA is complicit in issuing
it, and we can delist them.

Adam

On Fri, Dec 02, 2011 at 07:07:19PM +1300, Peter Gutmann wrote:

Ben Laurie b...@links.org writes:


They appear to actually be selling sub-RA functionality, but very hard to
tell from the press release.


OK, so it does appear that people seem genuinely unaware of both the fact that
this goes on, and the scale at which it happens.  Here's how it works:

1. Your company or organisation is concerned about the fact that when people
go to their site (even if it's an internal, company-only one), they get scary
warnings.

2. Your IT people go to a commercial CA and say we would like to buy the
ability to issue padlocks ourselves rather than having to buy them all off
you.

3. The CA goes through an extensive consulting exercise (billed to the
company), after which they sell the company a padlock-issuing license, also
billed to the company.  The company is expected to keep records for how many
padlocks they issue, and pay the CA a further fee based on this.

4. Security is done via the honour system, the CA assumes the company won't do
anything bad with their padlock-issuing capability (or at least I've never
seen any evidence of a CA doing any checking apart from for the fact that
they're not getting short-changed).

This is why in the past I've repeatedly referred to unknown numbers of
unknown private-label CAs, we have absolutely no idea how many of these
private-label CAs are out there or who they are or who controls them, but
they're probably in the tens, if not hundreds, of thousands, and many are
little more than a Windows server on a corporate LAN somewhere (and I mean
that literally, it was odd to sit in front of a Windows 2000 box built from
spare parts located in what used to be some sort of supplies closet and think
I can issue certs that chain to $famous_ca_name from this thing :-).

Going through the process is like getting a BS 7799 FIPS 140 certification,
you pay the company doing the work to get you through the process, and you
keep paying them until eventually you pass.  The only difference is that while
I've heard of rare cases of companies failing BS 7799, I've never heard of
anyone failing to get a padlock-issuing license.

Are people really not aware of this?  I thought it was common knowledge.  If
it isn't, I'll have to adapt a writeup I've done on it, which assumes that
this is common knowledge.

Peter.



[cryptography] if MitM via sub-CA is going on, need a name-and-shame catalog (Re: really sub-CAs for MitM deep packet inspectors?)

2011-12-02 Thread Adam Back

Now we're getting somewhere.  If this is going on, even the policy-enforcement
aspect of CAs is broken...  CAs are subverting their own
certification practice statements.  The actions taken by the user of the
sub-CA cert are probably also illegal in the US & Europe, where there are
expectations of privacy in workplaces (and obviously public places).

More below:

On Fri, Dec 02, 2011 at 11:02:14PM +1300, Peter Gutmann wrote:

Adam Back a...@cypherspace.org writes:


Start of the thread was that Greg and maybe others claim they've seen a cert
in the wild doing MitM on domains they definitionally do NOT own.


It's not just a claim, I've seen them too.  For example I have a cert issued
for google.com from such a MITM proxy.  


a public MitM proxy?  Or a corporate LAN.


I was asked by the contributor not to reveal any details on it because it
contains the name and other info on the intermediate CA that issued it, but
it's a cert for google.com used for deep packet inspection on a MITM proxy. 


That intermediate CA needs publishing, and the CA that issued it.  The SSL
Observatory ought to take an interest in finding, cataloguing and publishing
all of these - public, corporate and government/law-enforcement.  It
breaks a clear expectation of security and privacy that the user, even a very
sophisticated user, has about the privacy of their communications.


The real question again is can we catch a boingo or corp lan or government
using a MitM sub-CA cert, and then we'll know which CA is complicit in issuing
it, and delist them.


Given that some of the biggest CAs around sell private-label CA certs, you'd
end up shutting down half the Internet if you did so.


There is an important difference between:

1. a private-label sub-CA (where the holder has signed an agreement not to
issue certs for domains they do not own - I know it's policy only, there is
no crypto-enforced mechanism, but that's the same bar as the main CAs
themselves);

2. corporate LAN SSL MitM (at least the corporation probably has a contract
with all users of the LAN waiving their privacy).  Probably even then it's
illegal re expectation of privacy in the workplace in most contexts in the
US & Europe;

3. public-provider SSL MitM - if your ISP, wifi hotspot or 3G data provider
is doing this to you, paid or free, that's illegal IMO.  Heads should roll
up the CA tree;

4. government SSL MitM - we need to know which CAs have issued MitM sub-CAs
for places like Iran, Syria, pre-revolution Egypt etc.  If the CA isn't owned
by their local government, or a local company that they leant on, heads need
to roll.  Similarly, if US and European governments and law enforcement have
been up to this, we need to know.

Obviously the most interesting ones are 3 & 4.  But Peter says he has
evidence that 2 (LAN MitM) is going on, in the name of deep packet
inspection, I guess on corporate LANs - and employees themselves should be
aware of that.

Adam


Re: [cryptography] if MitM via sub-CA is going on, need a name-and-shame catalog (Re: really sub-CAs for MitM deep packet inspectors?)

2011-12-02 Thread Adam Back

On Sat, Dec 03, 2011 at 01:00:14AM +1300, Peter Gutmann wrote:
I was asked not to reveal details and I won't, 


Of course, I would do the same if so asked.  But there are lots of people on
the list who have not obtained information indirectly, with confidentiality
assurances offered, and for them remailers exist.


but in any case I don't know whether it would achieve much.  For the case
of a public CA doing it, you'd see that CA X is involved, ...


personally I'd like to know who is doing this and at what scale.


I guess if you're running into this sort of thing for the first time then
you'd be out for blood, but if you've been aware of it going on for more
than a decade then it's just business as usual for commercial PKI.  I'm
completely unfazed by it; it's pretty much what you'd expect.

I do not think it's what you'd expect.  A CA should issue certificates only
to the owners of the domains being certified.  It should NOT issue sub-CA
certificates to third parties who will then issue certs for domains they
don't own - not even on the fly inside a packet-inspection box.

If someone wants to inspect packets on a corporate LAN, they can issue their
own self-signed cert and install that in their users' browsers in their OS
install image.

Then if I go on their LAN with my own equipment, I'll get a warning.

I think it's unacceptable to have CAs issuing such certs.


It breaks a clear expectation of security and privacy that the user, even a
very sophisticated user, has about the privacy of their communications.


Not on a corporate LAN.  IANAL but AFAIK your employer's allowed to run that
in whatever way they want.


No.  Also IANAL, but there were several cases where employees' expectation
of privacy was upheld, even in the US.  Certainly you can't do that legally
in the EU either.


3. public provider SSL MitM - if your ISP, wifi hotspot, 3g data prov, is
doing this to you, paid or free, thats illegal IMO.  Heads should roll up
the CA tree.


I think this is where we differ in our outlook.  This (and yeah, I'd still
like to see the certs for one of these from a public location :-) is business
as usual for commercial PKI.  


I don't view this as business as usual.  If I am on a public hotspot, 3G or
my own DSL/cable, I ABSOLUTELY do NOT expect the ISP to be getting inside my
SSL connection to my bank, my gmail account etc.  Whether I paid or not.
For any reason at all (not to do advert analysis, not to do anti-virus, not
to re-write pages etc.).  I use airport/hotel wifi a lot and I've never seen
it, and I am suspicious enough to use Cert Patrol etc.


Remember the link to the SonicWall docs I posted
a few days ago,
http://www.sonicwall.com/downloads/SonicOS_Enhanced_5.6_DPI-SSL_Feature_Module.pdf?
It's an advertised feature of the hardware (not just from SonicWall, other
vendors do it too), d'you think people are going to buy that and then not make
use of it?  So you just build your defences around the fact that it's broken
and then you won't run into problems.


Well, yes, I know the hardware exists.  I even helped design and implement a
software-only MitM at ZKS.  Before you get alarmed: the cert it created was
known only to the user, generated on the machine, and its purpose was to
protect the user via a cookie manager protecting their privacy.

If you read the SonicWall stuff, it is fairly clear that the model they talk
about, at least, is one where you do not have a sub-CA key.  You generate a
self-signed CA key and install it in your corporate LAN browsers' trusted-CA
dbs.  Clearly it would also work with such a cert.

Similarly, their doco about server SSL shows they don't expect you to have a
proper sub-CA key.  (In the scenario where the web server for public access
is behind the SonicWall) you are expected to import your server SSL certs,
mapped per IP address, into the box.  (Otherwise the public would see
self-signed MitM warnings when they browse the site - or the sub-CA cert.)


Oh, another place where this happens is WAP gateways, where they MITM
everything so they can rewrite the content to save bandwidth and make things
work on mobile devices.  So, is that bad, or good, or both, or neither?


Well, people were complaining about the WAP gap a long time ago.  I thought
this was more about the phones at the time not being CPU-powerful enough to
terminate SSL.  The idea that a gap exists somewhere where your traffic is
decrypted was viewed as a significant security limitation, and people were
pleased to see it go away with phones fast enough to terminate their own SSL.

That is bad.  Are you saying there is anyone doing SSL MitM for
stream-compression reasons?  Who?

Adam
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] if MitM via sub-CA is going on, need a name-and-shame catalog (Re: really sub-CAs for MitM deep packet inspectors?)

2011-12-02 Thread Adam Back

I wonder what that even means.  *.com issued by a sub-CA?  That private key
is a massive risk if so!  I wonder if a *.com cert is even valid according to
browsers.  Or *, that would be funny.

Adam

On Sat, Dec 03, 2011 at 02:24:53AM +1300, Peter Gutmann wrote:

Adam Back a...@cypherspace.org writes:


[WAP wildcard certs]

That is bad.  Are you saying there is anyone doing SSL mitm for stream
compression reasons?  Who?


The use of wildcard certs in WAP gateways came up from the SSL Observatory
work... hmm, there's at least a mention of it in An Observatory for the
SSLiverse.

Peter.




Re: [cryptography] trustable self-signed certs in a P2P environment (freedombox)

2011-11-30 Thread Adam Back

It's rather common for people with load balancers and lots of servers serving
the same domain to have multiple certs.

Same for certs changing to a new CA before expiry (probably switched to a
new CA when adding more servers to the load-balanced web server farm).

I installed Cert Patrol and the popups about this are frequent.  Any
solution that hopes for easy interim deployment needs to work with this.

Adam
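A minimal sketch of the fingerprint-caching idea Eugen asks about below
(hypothetical names; real tools like Cert Patrol or Perspectives are far more
elaborate), keeping a *set* of known fingerprints per host so the multiple
load-balanced certs per domain mentioned above don't cause constant alarms:

```python
import hashlib

def check_cert(known, host, der_cert_bytes):
    """Return 'new-host', 'known', or 'changed' for this host/cert pair.

    known maps host -> set of SHA-256 fingerprints previously seen.
    A set per host tolerates server farms serving several certs for one
    domain; 'changed' is only a warning, since legitimate reissue and a
    MitM look identical from the client side.
    """
    fp = hashlib.sha256(der_cert_bytes).hexdigest()
    seen = known.setdefault(host, set())
    if not seen:
        seen.add(fp)
        return "new-host"   # first contact: pin it (trust on first use)
    if fp in seen:
        return "known"
    seen.add(fp)            # remember it, but flag it for the user
    return "changed"

cache = {}
assert check_cert(cache, "example.com", b"cert-A") == "new-host"
assert check_cert(cache, "example.com", b"cert-A") == "known"
assert check_cert(cache, "example.com", b"cert-B") == "changed"  # new LB cert, or a MitM
assert check_cert(cache, "example.com", b"cert-B") == "known"
```

In a real client you would feed it the DER cert from the TLS handshake and
persist the cache to disk; this only illustrates the bookkeeping.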

On Wed, Nov 30, 2011 at 12:05:29PM -0800, Peter Eckersley wrote:

Perspectives and Convergence are one effort to do this (what key do other
people see on this server?).  MonkeySphere is another (which humans in a web
of trust will vouch that this is the right key for this server?).

Perspectives/Convergence suffer from the problem that there is no way to tell
the difference between the server was reinstalled and now has a new key and
the whole world sees an attack in progress.  The former is more common but
the second can also occurr.

MonkeySphere has the problem that the web of trust has to be enormous before
it's likely that you can build a chain to the admins of all of the websites
you visit.

On Wed, Nov 30, 2011 at 01:30:03PM +0100, Eugen Leitl wrote:


I presume many here are aware of the Eben Moglen-started
FreedomBox initiative, which sets out to build a Debian
distro for plug computers and similar, which will package
many existing tools for the end result of an end-user
owned and operated, anonymizing and censorship-resistant
infrastructure.

One of the problems I did not see well-addressed yet is
infrastructure for a cert trust network, which uses social
graph information (FreedomBox is supposed to package a P2P
alternative to Facebook & Co) for cert fingerprint validation.

Is anyone aware of existing code which caches SSL cert
fingerprints and alerts when one suddenly changes, informing
of a potential MITM in progress?

Thanks.

--
Eugen* Leitl leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE


--
Peter Eckersleyp...@eff.org
Technology Projects Director  Tel  +1 415 436 9333 x131
Electronic Frontier FoundationFax  +1 415 436 9993



[cryptography] ECDSA - patent free?

2011-11-09 Thread Adam Back
Anyone have informed opinions on whether ECDSA is patent free?

Any suggestions on an EC-capable crypto library that implements things without
tripping over any Certicom-claimed optimizations?

(Someone pointed out to me recently that the Red Hat-shipped OpenSSL is devoid
of ECC, which is kind of a nuisance!)

Suite B pushing the use of EC, you would think, would increase the interest in
having clarity on the EC patent situation.

Adam


[cryptography] ssh-keys only and EKE for web too (Re: preventing protocol failings)

2011-07-13 Thread Adam Back

You know this is why you should use ssh-keys and disable password
authentication.  First thing I do when someone gives me an ssh account.

ssh-keys is the EKE(*) equivalent for ssh.  EKE for web login is decades
overdue and if implemented and deployed properly in the browser and server
could pretty much wipe out phishing attacks on passwords.

We have source code for Apache and Mozilla, and maybe we could persuade
Google; perhaps Microsoft and Apple could be shamed into following if that
was done.

Of course one would have to disable some things (basic auth?) and do some
education - never enter passwords outside of the browser's verifiably-local
authentication dialog - but how else are we going to get progress?  This is
2011, and the solution has been known for nearly 20 years - it's about time,
eh?  Maybe you could even tell the browser your passwords so it could detect
and prevent users typing them into other contexts.

(*) The aspect of EKE-like protocols such as SRP that fixes the phishing
problem is that you don't send your password to the server, the
authentication token is not offline-grindable (even by the server), and the
authentication token is bound to the domain name - so login to the wrong
server does not result in the phishing server learning your password.

Adam
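None of this is the real EKE/SRP math, but the domain-binding property alone
can be illustrated in a few lines (illustrative names; a real PAKE adds the
zero-knowledge exchange and a non-grindable server-side verifier on top):

```python
import hashlib
import hmac

def domain_token(password, domain):
    """Derive a per-domain authentication token.

    The raw password never leaves the client; each server only ever sees
    a value bound to its own domain name, so a phishing site at the wrong
    domain learns nothing reusable at the real one.  (In practice you
    would use a slow KDF like scrypt; HMAC-SHA256 keeps the sketch
    dependency-free.)
    """
    return hmac.new(password.encode(), domain.encode(), hashlib.sha256).hexdigest()

real = domain_token("hunter2", "bank.example")
phish = domain_token("hunter2", "bank-example.evil")
assert real != phish  # token captured by the phisher is useless at the bank
```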


I can second that with an observation made by several users of the
German Research Network (DFN), in December 2009. Someone had registered
a long list of typo domains, i.e. domains like tu-munchen.de instead of
tu-muenchen.de, and then installed an SSH daemon that would respond on
all subdomains.

Some users (including a colleague and myself) noticed that they suddenly
got a host-key-mismatch warning when accessing their machines via SSH -
and found that they had mistyped the host name *and still got an SSH
connection*. Neither my colleague nor me had entered our passwords yet,
but that was only because we were sensitive to host key changes at that
moment because we had re-installed the machines just a few days before
the event.

The server that delivered the typo domains was located in South Africa,
BTW.  I don't even know if legal prosecution is possible, and I don't
think anyone attempted it.  The DFN reacted in a robust way by blocking
access to the typo domains in their DNS.  Not a really good way, but
probably effective for most users.

The question, after all, is how often do you really read the SSH
warnings?  How often do you just carry on, retry, or press accept?  What
if you're the admin who encounters this maybe 2-3 times a day?

(Also, Ubuntu, I believe, has been known to change host keys without
warning when doing a major update of openssh.)

Ralph

--
Dipl.-Inform. Ralph Holz
I8: Network Architectures and Services
Technische Universität München
http://www.net.in.tum.de/de/mitarbeiter/holz/










Re: [cryptography] Bitcoin observation

2011-07-08 Thread Adam Back

I thought I already said this in another message, but perhaps it didn't get
to the list.  Apart from the fact that they have some kind of script which
trivially allows you to set the conditions of validity to something
impossible to satisfy, e.g. the 0 = 1 condition that Seth Schoen described,
the key in the signature is actually a hash of a public key, so what I said
in my other message is simply: set the hash of the key to be all 0s.

Adam
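A toy illustration of why an all-zeros key hash makes an output unspendable
in practice (simplified: real Bitcoin uses RIPEMD160(SHA256(pubkey)) and a
script interpreter; plain SHA-256 stands in here):

```python
import hashlib

def can_spend(pubkey_bytes, required_hash):
    """An output pays 'whoever presents a pubkey hashing to required_hash'.

    Setting required_hash to all zero bytes means spending requires a
    preimage of the all-zeros hash - nobody knows one, so the coins are
    effectively destroyed (though not *provably* so, unlike an explicitly
    unsatisfiable script condition like 0 = 1).
    """
    return hashlib.sha256(pubkey_bytes).digest() == required_hash

burn_target = b"\x00" * 32
assert not can_spend(b"\x02" + b"\x11" * 32, burn_target)  # no known key matches
```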

On Fri, Jul 08, 2011 at 01:13:12PM +0300, lodewijk andré de la porte wrote:

I'm aware of the basic functionality of private-public key encryption.  Brute
forcing possible private keys should eventually result in a specific public
key (seeing as there's a limited set of private keys).  I think it might
be possible to have public keys that no private key maps to; I'm not sure,
however, and it would also be hard to prove experimentally, seeing how the
universe of private keys is quite large.
Also note that this kind of brute force attack isn't going to be feasible in
the near future.  (However, by 2100 it's likely an easy trick they teach in
high school or its equivalent.)

Lewis


