Re: [cryptography] NIST Workshop on Elliptic Curve Cryptography Standards

2015-05-11 Thread Tom Ritter
On 11 May 2015 at 20:13,  d...@deadhat.com wrote:
 There is also the Lightweight Crypto Workshop at NIST. This heavily
 overlaps with the ECC thing, because the right options for ECC curves are
 also the right options for lightweight crypto.

 I'm attending the lightweight Crypto Workshop, but not the ECC Workshop. I
 don't have bandwidth for both.

On the lightweight side, I get the impression that block ciphers are
also a big topic, but that there isn't a ton of work being done
there... besides the NSA ciphers, SIMON and SPECK. John Kelsey
mentioned these at RWC. The NSA came to NIST and said "Check out these
ciphers!" and NIST said "Those look cool, but please publish them for
academic review so we're not favoring you in any way."  So they did.
But now the onus is on the community to analyze them and either poke
holes in them or present something better.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Underhanded Crypto

2015-01-28 Thread Tom Ritter
On Jan 28, 2015 5:00 PM, d...@geer.org wrote:

 https://underhandedcrypto.com/rules/

"We will keep submissions secret until they have been judged. Once the
contest is over, all submissions will be published. Winners will be
announced on December 30, 2014."


 Did this complete?  If it did, my searching has been inept as
 I can't find it.  Sorry for the noise if noise it is.

I don't think so; if it has, I owe someone a prize. I think they extended
the judging timeframe to the end of January.

https://twitter.com/undercrypto

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] STARTTLS for HTTP

2014-08-19 Thread Tom Ritter
On 18 August 2014 23:29, Tony Arcieri basc...@gmail.com wrote:
 Anyone know why this hasn't gained adoption?

 http://tools.ietf.org/html/rfc2817

 I've been watching various efforts at widespread opportunistic encryption,
 like TCPINC and STARTTLS in SMTP. It's made me wonder why it isn't used for
 HTTP.

What's the point?  Anything that speaks HTTP also speaks HTTPS, so
there's no need for the "if you support it, I have TLS available" dance.
Just use any of a multitude of redirect mechanisms for your webserver to
kick people onto HTTPS.
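
To be concrete, a toy sketch of what I mean by "just redirect" - in
practice you'd do this in your web server config rather than in Python,
and example.com is just a placeholder:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            # Send any plaintext HTTP request over to the HTTPS origin.
            self.send_response(301)
            self.send_header("Location", "https://example.com" + self.path)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 80), RedirectToHTTPS).serve_forever()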

 Opportunistic encryption could be completely transparent. We don't need any
 external facing UI changes for users (although perhaps plaintext HTTP on
 port 80 could show a broken lock). Instead, if the server and client
 mutually support it, TLS with an unauthenticated key exchange is used.

I didn't read the draft word for word, but I don't see anything in it
that indicates the client MUST NOT validate the server certificate or
MUST use anonymous ciphersuites.  Indeed it seems to say the opposite.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] JackPair Voice Encryption Dongle

2014-08-18 Thread Tom Ritter
https://www.kickstarter.com/projects/620001568/jackpair-safeguard-your-phone-conversation
https://www.youtube.com/watch?v=rh6yF79FkAA

DH with a 'Pairing Code' (a la ZRTP) to prevent MITM. Light on exact
details, but they say it will be open source.

"we think 10-12 digits [for a pairing code] is probably good enough,
and more friendly for human reading"

"We are using Diffie-Hellman at this point, and working on Elliptic
curve Diffie-Hellman."

"Synchronous stream cipher is used, with XOR'ed key-stream resulted
from pseudo random number generator using OTSK as seed, and periodic
marker flag for re-synchronization."

"JackPair uses audio codec from Codec2, which has reasonable good
sound quality at 1.2kbps. We have tested JackPair on top of GSM AMR
4.75 (Adaptive Multi-Rate, 4.75kbps) and HR (Half-Rate, 6.5kbps)."

"Unlike traditional fax-modem technologies, our modem is designed from
scratch to fight off the optimization done by GSM codec, including
memory-less codec, voice activity detection (VAD), automatic gain
control (AGC) etc. Basically we have to use synthesized voice to make
mobile phones & media servers believe our signal is human voice, not
just modulated waves."
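
For what it's worth, here's a minimal sketch of the XOR-keystream
construction they describe. Their actual PRNG is unspecified, so SHA-256
in counter mode stands in for it, and otsk is a placeholder for their
one-time session key:

    import hashlib

    def keystream(otsk, nbytes):
        # Stand-in PRNG: SHA-256 over (key || counter), concatenated.
        out, ctr = b"", 0
        while len(out) < nbytes:
            out += hashlib.sha256(otsk + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:nbytes]

    def xor_stream(otsk, data):
        # Synchronous stream cipher: same call encrypts and decrypts.
        ks = keystream(otsk, len(data))
        return bytes(a ^ b for a, b in zip(data, ks))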

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Weak random data XOR good enough random data = better random data?

2014-07-28 Thread Tom Ritter
You're talking about two different things here.

As others have said, if you XOR good random with 'not very good but
non-malicious random', you won't reduce the entropy - as long as the two
sources are independent.  (And as Seth said, if you XOR good random with
malicious random (e.g. a trojaned RDRAND instruction) you're in bad shape.)
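
A minimal sketch of that combining step, with os.urandom as the good
source and Python's (non-cryptographic) random module standing in for the
weak-but-honest one:

    import os, random

    good = os.urandom(32)                                   # strong OS CSPRNG output
    weak = bytes(random.getrandbits(8) for _ in range(32))  # weak but non-malicious source
    # XOR: the result is at least as unpredictable as the stronger input,
    # provided the two sources are independent.
    combined = bytes(a ^ b for a, b in zip(good, weak))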

But that's not what you asked.

On Mon, Jul 28, 2014 at 11:23 AM, Lodewijk andré de la porte
l...@odewijk.nl wrote:
 That way the user can still verify
 that I didn't mess with the randomness, no MITM attacks can mess with the
 randomness, but given a good transport layer I can still supplement usually
 bad randomness.

What this _sounds like_ to me, is that you want to try and make a good
faith effort to users that you can't deduce the randomness their
browser generates. You ship them high quality RNG output, and then
generate some randomness (probably Math.random() based?) and the
output should be unguessable by you.

But it's not. Math.random()-based random is guessable in 2^X work, for
varying values of X: maybe between 20 and 60? (I'm estimating from
recollections of papers.)  Math.random()-seeded algorithms are also
guessable - once seeded, an algorithm doesn't make more entropy.

It sounds like what you want is a way to generate randomness a user
can trust, in a browser lacking crypto.getRandomValues.  That's hard
to impossible - it's why crypto.getRandomValues was made.  I believe
state of the art prior to crypto.gRV was using mouse movements and
other server-unpredictable events.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Client certificates, Tor-exit nodes and renegotiation

2014-03-14 Thread Tom Ritter
On 14 March 2014 08:40, Guido Witmond gu...@witmond.nl wrote:
 Dear all,

 I have a question regarding TLS, client certificates and Tor Exit nodes.

 Am I correct in my assumption that when a client connects to a
 TLS-server, both the server and client certificate are passed in
 clear-text (clear enough) to the other end before the certificates are
 validated and the secured connection gets established?

 If so, does it mean that using client certificates over Tor allows every
 exit node and system on-route to the server to learn both the
 client-certificate and the end-point, defeating the purpose of Tor?

 Is TLS-renegotiation, where the client connects anonymously to the
 server, validates the server certificate, sets up the secured connection
 and only then offers to send the client certificate, sufficient to make
 client certificates safe to use over Tor?

 Or are there more pitfalls to expect with client certificates and Tor?



This might be more appropriate for tor-users or tor-dev, but I'll give
it a shot.

Yes - sending client certificates over Tor will de-anonymize you in the
same way that sending your real name or username over HTTP over Tor
will de-anonymize you.  Personally I consider this a flaw of TLS (which
does not protect the client certificate from either a passive or active
adversary), not of Tor.  There were some proposals to move client
certificates later into the handshake and protect them against a
passive and/or active adversary (depending on the proposal) - but they
did not get much traction, and then Snowden happened and everyone is
focused on TLS 1.3.

A nit: when you say "every system on-route to the server", I assume
you mean between the exit node and the HTTPS endpoint, in which case
yes. If you mean every Tor intermediate node, then no.

Using TLS-renegotiation to send the client certificate inside an
already-server-authenticated channel seems like it would work to me -
I have not tried doing it with any library.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Commercialized Attack Hardware on SmartPhones

2014-03-02 Thread Tom Ritter
Hey all, wondering if anyone knows of any commercialized hardware
(e.g. developed into a product, not just a research paper) that
conducts attacks on powered-on, Full Disk Encrypted Android/iPhone
phones that _isn't_ PIN guessing?

So a powered-off FDE-ed iPhone or Android can be attacked by brute
force with no limiting factor.  A good example of this type of
software is Elcomsoft [0] - they brute force the passphrase.

A powered-on FDE-ed iPhone or Android can also be attacked by manual
or automated PIN entry - on the iPhone this can introduce a lockout,
but not on Android.  Assuming they can't see your smudges and guess
the PIN/Swipe/password of course.  I'm not sure if I know of a
commercialized solution to this that does it electronically, but a
friend of mine built a robot. [1]

If you have a strong passphrase, though, things are looking good.  But
what about Cold Boot or DMA?

I don't believe you can do a DMA attack against most Android phones -
it's just a USB port.  But what about the HDMI-mini port?  And is the
iPhone Thunderbolt/Lightning connector hooked up to DMA?

As far as cold boot, I'm aware of the FROST paper[2], but that isn't a
commercialized offering, nor does it seem reliable or robust enough
for law enforcement needs.  Chip-off attacks are very unlikely.  AFAIK
iPhone jailbreaks require you to unlock your phone for technical
reasons, so those aren't possible without an unlocked phone (although
I'm not positive about that.)

Does anyone know about anything in this space? Where an 'ordinary' law
enforcement agency (e.g. the NYPD, not the NSA) could shortcut a
strong passphrase on a phone technically? (e.g. not beating it out of
someone?)

-tom

[0] http://www.elcomsoft.com/eift.html#passcode
[1] http://boingboing.net/2013/07/26/pin-punching-200-robot-can-br.html
[2] https://www1.informatik.uni-erlangen.de/frost
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Fwd: Re: Commercialized Attack Hardware on SmartPhones

2014-03-02 Thread Tom Ritter
 -- Forwarded message --
 From: shawn wilson
 How about a dictionary and rules. Even if you choose an alphanumeric
strong pass, you're kinda limited to the phone's keyboard - you're not
going to want to switch case or between letters and special too often.
Also, IIRC Android limits length to 15 chars. I also don't think the screen
lock can be different than the boot pass (so everything I said above should
hold true).

 Basically what I'm saying is use hashcat.

In regular use I agree completely. But my threat model (what I'm
preparing for) is 'prepared use' - you're knowingly crossing a border or
attending a protest, want/need your phone, and are willing to have a
painful password for a short bit.
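
To put rough numbers on that tradeoff (my figures, purely illustrative):
even under the 15-character cap quoted above, a password that mixes cases
and digits is a very different target from a phone-friendly
lowercase-only one:

    # Candidate counts for passwords of length 1..15 (illustrative only).
    lowercase_only = sum(26 ** n for n in range(1, 16))
    mixed_alnum    = sum(62 ** n for n in range(1, 16))
    print("%.1e lowercase-only vs %.1e mixed-case+digits" % (lowercase_only, mixed_alnum))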

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Commercialized Attack Hardware on SmartPhones

2014-03-02 Thread Tom Ritter
On Mar 2, 2014 11:47 AM, Kevin kevinsisco61...@gmail.com wrote:
 Tom:
 Perhaps I am in the dark about this, but I'm sure attacking android is
quite simple as mobile security is fairly new.  I have to wonder why you
are asking?

If it's simple, surely there are product descriptions, manuals, commercial
offerings, leaked documents, tutorials, etc?  I mean, we have all that for
testing mobile apps, web apps, disk forensics, those portable power
machines that let you move a server without powering it down, etc...

I'm asking specifically because I want to understand the risk between a
powered on vs a powered off phone that was seized by police.  I understand
the password complexity issues, the tools for forensic acquisition of
unlocked phones, the tower and IMSI catching interception/tracking
concerns, how better it is to have no phone or a burner phone etc. It's
just the info on this particular corner is lacking.

(The info on iPhone data recovery via jailbreaks is also a mess but I'm
just going to have to try and test that stuff; its documentation is not
very good.)

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] [Cryptography] TLS2

2013-09-30 Thread Tom Ritter
On 30 September 2013 07:07, Ralph Holz h...@net.in.tum.de wrote:
 Hi Ben,

 Boy, are you out of
 date: http://en.wikipedia.org/wiki/Server_Name_Indication.

 I am not so sure many servers support it, though. My latest data,
 unfortunately, is not evaluated yet. But in 2011 the difference between
 switching on SNI and connecting without it, was pretty meagre across the
 Alexa range. Granted, many of those hosts may not be VHosts.

 Does Google have better data on that?

I think you're testing that wrong. The major websites run one website
at multiple IPs - not multiple websites at a single IP.  So connecting
with/without SNI will usually get you the same result.

You want to test the Alexa 2,000,000 - 3,000,000 sites and see if you
get a different result - hit shared hosting sites, where multiple
sites run on a single IP.
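
Something like this sketch is the test I mean (not Ralph's measurement
code; example.com is a placeholder) - grab the leaf cert with and without
SNI and compare:

    import hashlib, socket, ssl

    def leaf_cert_hash(host, use_sni=True, port=443):
        ctx = ssl.create_default_context()
        ctx.check_hostname = False      # we only want the cert, not validation
        ctx.verify_mode = ssl.CERT_NONE
        raw = socket.create_connection((host, port), timeout=10)
        # server_hostname=None means no SNI is sent in the ClientHello.
        tls = ctx.wrap_socket(raw, server_hostname=host if use_sni else None)
        try:
            der = tls.getpeercert(binary_form=True)
            return hashlib.sha256(der).hexdigest()
        finally:
            tls.close()

    host = "example.com"
    print(leaf_cert_hash(host, use_sni=True) == leaf_cert_hash(host, use_sni=False))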

As far as software support, there are a few clients where support
isn't there (most notably Java 1.7 and anything on Windows XP), but
server support is there.[0]

-tom

[0] https://en.wikipedia.org/wiki/Server_Name_Indication
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Paypal phish using EV certificate

2013-08-13 Thread Tom Ritter
On 13 August 2013 07:00, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Erwann Abalea eaba...@gmail.com writes:

Looks like paypal-communication.com is a legit domain owned by Paypal, Inc.

 Even though, according to the second article I referenced, Paypal said it was
 a phishing site and said they'd take it down?

When sites have a phishing domain that contains their name taken down,
isn't the domain actually transferred to them, because of the trademark
claim?  Perhaps it went into a domain pool, and someone unaware of its
provenance reused it.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Potential funding for crypto-related projects

2013-07-01 Thread Tom Ritter
On 1 July 2013 05:04, Ben Laurie b...@links.org wrote:
 On 1 July 2013 01:55, Jacob Appelbaum ja...@appelbaum.net wrote:
 So then - what do you suggest to someone who wants to leak a document to
 a press agency that has a GlobaLeaks interface?

 I would suggest: don't use GlobalLeaks, use anonymous remailers.
 Bottom line: Tor is weak against powerful adversaries because it is
 low latency. High latency mixes are a lot safer.

 GlobalLeaks should have an email API, IMO.

Having looked a lot at the current remailer network, and a bit at
GlobaLeaks - I'm going to wade in and disagree here. (Although this
thread has gotten woefully off topic after I've bumped it. =/)  Ben: I
love mix networks. I've been learning everything I can about them, and
have been researching them voraciously for a couple years.[0]  But IMO
the theoretical gains of high latency *today* are weaker than the
actual gains of low latency *today*.

Virtually all remailer use is Mixmaster, not Mixminion.  If you want
to use anything but a CLI on Linux, you're talking Mixmaster.  So I'm
assuming you mean that.  Mixmaster uses a very, very recognizable SMTP
envelope that often goes out with no TLS, let alone PFS.  There are
also precious few people actually using it.  And finally, if you look
at the public attacks on remailers (the unfortunate bombing threats of
last summer) and Tor (the Jeremy Hammond case), you see that the Feds
are willing to go on fishing expeditions for remailers, but less so for
Tor.  Tor was traffic confirmation; remailers were fishing.[1]

Compare to GlobaLeaks.  Tor Hidden Service, Tor network.  The two
biggest threats are Traffic Correlation and the recent attacks on
Hidden Services.

Assume a Globally Passive Adversary logging all SMTP envelopes
(because... they are. So don't assume, know.).  Now assume a leak
arrives over email.  Light up all the nodes that sent a message via
Mixmaster within a couple of days of it, and you'll get at most a
couple hundred.  Now dim everyone with a prior history of sending
Mixmaster messages.  You'll be left with a couple - the first-time
users.  That's few enough to investigate them all using traditional
methods.
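
A toy of that light-up/dim intersection, with made-up names:

    sent_near_leak  = {"alice", "bob", "carol", "dave"}   # SMTP envelopes in the window
    prior_mix_users = {"alice", "carol"}                   # long-time Mixmaster users
    suspects = sent_near_leak - prior_mix_users
    print(suspects)   # {'bob', 'dave'} - few enough to investigate traditionally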

Now you *do* have to assume a GPA who's logging all Tor traffic.  It's
possible.  Some would even say it's probable.  But we've seen no
evidence.  Do the same light-up.  You get hundreds if not thousands
of nodes.  Too many to investigate traditionally.  And to do Traffic
Confirmation, you need to identify the Hidden Service.  And there's
the issue that it's not trivial to do traffic confirmation.

Oh, and there's also the little problem that sending anything over 10,236
bytes via Mixmaster splits the message into multiple messages that all
emanate from your machine - which makes it wildly probable that some
won't arrive, and also makes you stand out drastically as the crazy
person trying to send anything other than text through Mixmaster.

I'm not saying GlobaLeaks+Tor is safe.  I'm saying I think our current
remailer network is wildly unsafe.  (Now what I think about fixing
it... that's a whole other story, for a whole other time.)

-tom

[0] https://crypto.is/blog
http://defcon.org/html/defcon-21/dc-21-speakers.html#Ritter
[1] If you don't like my last argument, fine, ignore it, and work with
the others.
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Information-theoretic cryptography for the masses

2013-06-25 Thread Tom Ritter
From a high-level view this looks like it provides similar features to
OTR + OTR's SMP, which works pretty well.

Well, actually, I have to say it works 'okay' because in practice I
have to run SMP a couple of times with my partner until we hit upon
the identical punctuation, capitalization, and question to which we
both have the same unambiguous answer.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Compression Attack on SSL

2012-09-11 Thread Tom Ritter
This comes from the same school of "attacker-controlled requests
inside of an SSL tunnel to reveal some other portion of the data going
over SSL" style attacks, BEAST being the other big example.  Many
people (including me) *think* this is the new CRIME attack Thai and
Juliano have announced [0].

The short of it is: by sending guesses of what you think a session
cookie is, you can observe how the data compresses if you can observe
packet sizes.  Guess all 16-64 possible characters; the single
character that compresses the smallest is the correct guess.  Move to
the next.  I've confirmed (outside of SSL) that this theory does work
in practice and allows you to guess out a portion of a message (the
cookie in this case).  Porting it to SSL using RC4 should be easy (no
padding); for AES it would be trickier because of the block padding,
but if you line it up to a boundary I'm sure it's possible.
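
Here's a toy of that guessing loop, outside SSL and with a made-up
cookie.  A real attack needs alignment and padding tricks to break ties
reliably; this only shows the shape:

    import zlib

    SECRET = "sessionid=7f3a9c2d"                 # unknown to the attacker
    ALPHABET = "0123456789abcdef"

    def compressed_len(guess):
        # The attacker controls part of the request; the secret rides alongside it.
        req = "GET / HTTP/1.1\r\nCookie: %s\r\nX-Guess: %s\r\n\r\n" % (SECRET, guess)
        return len(zlib.compress(req.encode()))

    recovered = "sessionid="
    for _ in range(8):
        # The guess that extends the longest match tends to compress smallest.
        recovered += min(ALPHABET, key=lambda c: compressed_len(recovered + c))
    print(recovered)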

Thomas Pornin has a good writeup here:
http://security.stackexchange.com/a/19914  And some circumstantial
evidence that this is CRIME: this commit to Chrome[1] by the right
person, the fact that Tor is not vulnerable to the attack[2] (it
chunks data into 512 byte blocks), the hints given that it affects all
versions of TLS, the "global warming" comment, and the fact that
CRIME begins with a C for Compression ;)

I have a few random unanswered questions:
 - When did FF disable this? I went looking in the diffs but couldn't find it =/
 - Is there any way to fix this without wholesale disabling compression?
 - Does this have implications for SPDY? (Which also compresses)

-tom

[0] http://www.ekoparty.org//2012/thai-duong.php
[1] https://chromiumcodereview.appspot.com/10825183
[2] https://twitter.com/nickm_tor/status/243460419501559808
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] RSA OAEP (Was: Explaining crypto to engineers)

2012-03-08 Thread Tom Ritter
On 26 February 2012 14:21, Ondrej Mikle ondrej.mi...@nic.cz wrote:
 I've just found an article about the OAEP padding oracle (that I couldn't
 recall before):

 http://ritter.vg/blog-mangers_oracle.html

 Reportedly there is no major implementation that would suffer from error
 side-channel, although there is an interesting experiment with timing 
 side-channel.

Hey, that's me! An error-based side channel does seem to be prevented in
all libraries, and I got a good timing side channel out of libgcrypt -
but smaller timing side channels may be present in others.  Pascal
Junod shows that even OpenSSL, which tried to prevent this attack
specifically, may be vulnerable to a timing side channel on local or
embedded systems, because it uses a very short branch:
http://crypto.junod.info/hashdays10_talk.pdf (around slide 59).

On 26 February 2012 22:25, Peter Gutmann pgut...@cs.auckland.ac.nz wrote:
 Ondrej Mikle ondrej.mi...@nic.cz writes:

I've just found an article about the OAEP padding oracle (that I couldn't
recall before):

 There's another one that was published about a year ago that looks at things
 like side-channel attacks via the integer-to-octet-string conversion
 primitives and other really low-bandwidth channels, I think it was "Manger's
 Attack Revisited".  At the time I was thinking of doing a writeup on
 generalised defences (via randomisation) against this sort of thing because,
 as Revisited points out, you're always going to get timing channels somewhere
 if you look hard enough and a generalised defence would be better than the
 penetrate-and-patch approach to stopping timing channels.

Interesting, this appears to be it:
http://www.cdc.informatik.tu-darmstadt.de/reports/reports/mangers_attack_revisited.pdf

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Certificate Transparency: working code

2012-03-02 Thread Tom Ritter
On 1 March 2012 13:14, Thierry Moreau thierry.mor...@connotech.com wrote:
 May I ask a (maybe stupid) question?

 ... audit proofs will be valid indefinitely ...

 Then what remains of the scheme reputation once Mallory managed to inject a
 fraudulent certificate in whatever is being audited (It's called a log but
 I understand it as a grow-only repository)?

At the risk of expounding on something I didn't author while the
authors are present: CT doesn't address revocation (yet).  According
to the original doc, revocation will still be needed.  It posed an
idea similar to DNSSEC's Proof of Nonexistence, where the CA would
publish a list of all revoked certs, sorted, updated every so often.
The server would then present, or the client somehow obtain, this
list.  If the cert in question isn't in the list at the point where it
would have to be (because the list is sorted), it's still valid.
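
A toy sketch of that sorted-list idea (the structure and names are mine,
not from the CT doc): to show a cert is not revoked, exhibit the two
adjacent revoked serials it falls between.

    import bisect

    revoked = sorted([0x01f3, 0x0a00, 0x1b2c, 0x9fff])   # hypothetical revoked serial numbers

    def nonrevocation_proof(serial):
        i = bisect.bisect_left(revoked, serial)
        if i < len(revoked) and revoked[i] == serial:
            return None                                    # revoked - no proof possible
        lo = revoked[i - 1] if i > 0 else None
        hi = revoked[i] if i < len(revoked) else None
        return (lo, hi)                                    # serial falls strictly between these

    print(nonrevocation_proof(0x1234))                     # (2560, 6956): not in the list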

I don't know if this idea has changed; it was published before the
browser-pushed CRLs that Chrome is moving to were announced.  But yes,
revocation still needs to be addressed, somehow.  Auditing the log is
designed for finding the certificates that need revoking, hopefully
very quickly.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Duplicate primes in lots of RSA moduli

2012-02-15 Thread Tom Ritter
On 15 February 2012 11:56, Ben Laurie b...@links.org wrote:
 I did this years ago for PGP keys. Easy: take all the keys, do
 pairwise GCD. Took 24 hours on my laptop for all the PGP keys on
 keyservers at the time. I'm trying to remember when this was, but I
 did it during PETS at Toronto, so that should narrow it down. With
 Matthias XXX (I've forgotten his surname!).

I mentioned this a few months ago; you had said you did it at PETS 2004. [0,1]

Something I found strange in their paper was this quote:

"PGP keys have no expiration dates or hashes. All public keys were
further analysed as described below." (bottom of page 4)

PGP keys *may* have no expiration date, but they can have one, and
anecdotally most I've seen do.  Likewise, nearly all keys have a
self-signed UID associated with them, and that signature uses a hash
algorithm.
-tom

[0] Original Thread:
http://lists.randombit.net/pipermail/cryptography/2011-September/001301.html
[1] Your Reply:
http://lists.randombit.net/pipermail/cryptography/2011-September/001305.html
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Non-governmental exploitation of crypto flaws?

2011-11-27 Thread Tom Ritter
On 27 November 2011 20:10, Steven Bellovin s...@cs.columbia.edu wrote:
 Does anyone know of any (verifiable) examples of non-government enemies
 exploiting flaws in cryptography?  I'm looking for real-world attacks on
 short key lengths, bad ciphers, faulty protocols, etc., by parties other
 than governments and militaries.  I'm not interested in academic attacks

The Padding Oracle attack enabled real-world attacks on both common
(DotNetNuke) and proprietary .Net and JSF web applications, as well as
CAPTCHAs.  Based on emails I've seen, this was widely exploited
online.

The BEAST attack on TLS was demonstrated practically, but wasn't
exploited widely AFAIK; the same goes for the MD5-colliding
CA cert.

The console hacking scene may have more examples besides the PS3 break
mentioned by Marsh.  The Xbox 360 was rooted using a glitch attack to make
a hash comparison fail:
http://www.free60.org/Reset_Glitch_Hack
This may not be what you're looking for, but inducing a fault to
bypass a cryptographic check is at least on the same street.

Several encrypted hard drives have crappy implementations.  This one:
http://www.h-online.com/security/features/Cracking-budget-encryption-746225.html
was broken after discovering its encryption was just a matrix
multiplication.  I'd say this is actually farther from crypto than the
fault attack.

The Debian Weak Key bug produced many exploitable scenarios, although
I'm not sure if there are public tales of one being actively
exploited.

There was also a presentation in the last three years about practical
crypto attacks on web applications.   I believe it had two examples,
one of which was a crappy RNG in the password reset mechanism of a
popular web framework.  I can't for the life of me find it after
searching for 30 minutes though.  (There was another recently I
believe around a timing attack on string comparisons but that's not
really crypto.)

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Auditable CAs

2011-11-27 Thread Tom Ritter
So my biggest question is: what defines a "publicly visible"
certificate?  Of course every certificate Gmail uses would be
public... but what about the cert that corresponds to the new product
Google is launching that's in beta for a few users?  That cert should
be published... but then that lets the cat out of the bag.  (Isn't
this almost the same problem DNSSEC has?)  I'm confused about whether
people opt in, or opt out, or opt anything.

 Similarly it might be possible to allow an intermediate CA to create
 private certificates within a subdomain - in this case the intermediate CA 
 certificate would have to be logged
 along with which domain it could create subdomains in, so that mis-issues 
 can still be detected.
 For example, an X.509 extension specifying the permitted domains could be 
 included in the certificate.

Wouldn't this be more easily done with NameConstraints?

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


[cryptography] Running a keyserver is valuable OR pairwise attacks on public keys

2011-09-08 Thread Tom Ritter
A long time ago I read an account on a website of a test done in the 90s
on public RSA keys.  A keyserver operator was politely asked for the
entire database of public keys, and he complied (I think it was the MIT
keyserver and the researchers were at MIT, but I don't recall.)

The public keys were all analyzed and compared efficiently pairwise
(computing the GCD I believe) to see if by some small chance a
factorization would occur.  And it did - I recall the website saying it
was a very strange scenario with one of the keys not actually being
correctly semiprime and having several small factors.
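
The check itself is simple - here's a toy with tiny made-up moduli; any
shared prime falls straight out of the GCD, no factoring needed (the real
work in the anecdote, and in the paper below, is doing this at scale):

    from math import gcd
    from itertools import combinations

    moduli = [3233, 3599, 2021, 3127]      # toy "RSA" moduli: 53*61, 59*61, 43*47, 53*59
    for a, b in combinations(moduli, 2):
        g = gcd(a, b)
        if g > 1:
            # A shared prime lets you factor both moduli immediately.
            print("shared factor %d between %d and %d" % (g, a, b))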

I was never able to find the website giving this account again.

But the idea seems to be coming back with this paper:
http://eprint.iacr.org/2011/477.pdf

"NTRU is a fast public key cryptosystem presented
in 1996 by Hoffstein, Pipher and Silverman. It operates in
the ring of truncated polynomials. In NTRU, a public key
is a polynomial defined by the combination of two private
polynomials. In this paper, we consider NTRU with two
different public keys defined by different private keys. We
present a lattice-based attack to recover the private keys
assuming that the public keys share polynomials with a suitable
number of common coefficients."

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Minimally Sufficient Cryptosystem

2011-07-05 Thread Tom Ritter
 Perhaps anybody else that was there or is familiar with Shamir's work along
 this line might comment.

I was in Boston last Friday as well - Jean-Philippe is correct, the
second half of the talk was on the Even-Mansour system, and Adi talked
about his SLIDEX attack.  He may have expanded on it a little bit, I
think he had some more developments - but I don't recall exactly. I
got more out of the first half of the talk on GOST.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Digitally-signed malware

2011-06-22 Thread Tom Ritter
 What happens if the bad guy just strips the signature? What are the
 circumstances under which an OS or user+OS will refuse to run code that just
 isn't signed at all?

In the case of Microsoft ClickOnce, the Install Dialog is changed from
"Publisher: Discount Bob's Software & Hanggliding" to "Publisher:
Unknown Publisher" and the icon from a yellow shield to a red shield.
I took a look at Man-in-the-Middling ClickOnce deployments last
summer: stripped the signature, decompiled to IL, injected code, and
recompiled, all as part of a transparent proxy.
http://seclists.org/bugtraq/2010/Jul/164

A similar project is Evilgrade:
http://blog.infobytesec.com/2010/10/evilgrade-20-update-explotation.html
although that's a framework for targeting different applications, each
one possibly behaving differently.

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography


Re: [cryptography] Mobile Devices and Location Information as Entropy?

2011-04-02 Thread Tom Ritter
 At most, I would think you'd only be able to collect a few bits.

Agreed, I think using anything but the lowest bits would be dangerous.
But most smartphones (especially ones with GPS sensors) have other
sensors that would be better contributors of entropy, and aren't
monitorable by any remote adversary: acceleration, orientation,
microphone, camera, probably some others.  You may also be able to get
some bits from the antenna and WiFi signal strengths as well.
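
If you did want to use those sources, I'd pool them through a hash
rather than use raw readings directly - a rough sketch, with invented
reading names and values:

    import hashlib

    def pool_entropy(readings):
        # Hash all (name, value) pairs into one pool; don't use raw readings as-is.
        h = hashlib.sha256()
        for name, value in readings:
            h.update(name.encode())
            h.update(repr(value).encode())
        return h.digest()   # mix this into your CSPRNG as additional seed material

    seed = pool_entropy([("accel", (0.12, -9.78, 0.33)),
                         ("mic_rms", 417),
                         ("wifi_rssi", [-61, -73, -80])])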

But most phone APIs already provide a random number generator they
say is cryptographically sound.  Java's SecureRandom on Android,
SecRandomCopyBytes on iOS, net.rim.device.api.crypto.RandomSource on
Blackberry, System.Security.Cryptography.RNGCryptoServiceProvider on
Windows, and CreateRandomL on Symbian.  Is there a particular reason
you distrust or can't use one of those?

-tom
___
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography