Re: smartcards, electronic ballots

2001-02-04 Thread Ed Gerck



David Honig wrote:

 First of all, that's not "privacy", that's "anonymity".
 
 We have voter registration precisely so that we know who the voters
 are!  We are not changing voter registration
 
  Ed Gerck wrote:
 4. Fail-safe privacy in universal verifiability. If the
encrypted ballots are successfully attacked, even with
 court order, the voter’s name must not be revealed. In

 On Keeping Votes Secret

 If you give people a paper receipt with their votes on it
 (as WAS's scheme mentions) then their votes can be bought or blackmailed.
 Now, this may be an acceptable *tradeoff* (trust gained from paper trail
 vs. increased susceptibility to coercion), that's not for me to decide.

The law does not allow it, and for good reasons, as you mention.  Also, proposals
to print the vote usually advance it as the "silver bullet" solution.  This is a
fatal mistake because, to increase reliability in communications, it is much better
to have a number of independent channels than one "strong" channel (Shannon's
tenth theorem).
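
To make the redundancy argument concrete, here is a minimal sketch in Python, with
assumed, illustrative numbers: several independent channels combined by majority vote
beat any single channel, without any channel individually being made "stronger".

from math import comb

def majority_ok(q, n):
    # Probability that a strict majority of n independent channels is
    # correct, given per-channel reliability q.
    return sum(comb(n, k) * q**k * (1 - q)**(n - k)
               for k in range(n // 2 + 1, n + 1))

q = 0.95                      # assumed per-channel reliability
for n in (1, 3, 5):
    print(n, round(majority_ok(q, n), 6))
# 1 0.95 -> 3 0.99275 -> 5 0.998842: three modest channels already beat
# one channel.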

 One potential solution is to make the 'receipts' readily forgeable --something
 anyone could print up at home, on ordinary commercial blank paper.  Such
 ready counterfeiting would deter vote buying and blackmail.

Not really. The buyer might be waiting outside the precinct, the seller might not
be able to fake a receipt (think of the "digital divide" issues in just having access
to a computer), the election official might also be in collusion, etc.

 On Banning Video Cameras From Voting Places

 The voting apparatus may keep a serial record of each vote, in order, for
 auditing purposes.

No, it MUST not.  See the FEC standards on voting. The FEC standards also
demand "storage allocation scrambling" in order to avoid even a serial order
of storage.

 This is also mentioned in WAS's legislative text.

which is a misconception, albeit a common one.

  Now,
 if an evil vote buyer had someone recording who entered which booth
 and also had access to the audit records, the correlation lets them
 buy or blackmail votes.  Note that this requires only *one* conspirator if
 that conspirator is a poll worker with a concealed camera.

Yes, this is one of the reasons. It could also be the election official.

Cheers,

Ed Gerck





Re: smartcards, electronic ballots

2001-02-04 Thread Ed Gerck



William Allen Simpson wrote:

 -BEGIN PGP SIGNED MESSAGE-

 I'm sorry for the second message, but I could not let the egregious
 error pass uncorrected:

:-) egregious ...

 Ed Gerck wrote:
  The law does not allow it, and for good reasons as you mention.
 ...
   The voting apparatus may keep a serial record of each vote, in order, for
   auditing purposes.
 
  No, it MUST not.  See the FEC standards on voting. The FEC standards also
  demand "storage alocation scrambling" in order to avoid even a serial order
  of storage.
 
   This is also mentioned in WAS's legislative text.
 
  which is a misconception, albeit a common one
 
 Mr Gerck would do well to precisely specify the "law" which does not
 allow this?

California Election Code, for example.  In the US, there is NO federal jurisdiction over
election code -- as became clear to Joe Doe after Florida. Please also read about it in
Eva Waskell's article in The Bell, page 7, November 2000 issue, and in Jim Hurd's
article in The Bell, page 6, July 2000 issue (both issues available at www.thebell.net
in the archives section).

 Mr Gerck would also do well to specify which FEC "standards" have the
 force and effect of law?

None -- and I never said so.  They are voluntary standards, but 40+ states have
decided to follow them and incorporate them in their laws.

 As to the matter of "law", the Congress is granted the power to set
 standards for its own election (Const Article I, Sections 4 and 5).
 The FEC isn't mentioned.

Indeed, this is what Article I, Section 4 says: "The times, places, and manner of
holding elections for Senators and Representatives shall be prescribed in each State
by the Legislature thereof; but Congress may at any time by law make or alter such
Regulations, except as to the Places of chusing Senators."

Thus, each individual state has exercised its right to administer elections in a manner
reflecting that state's political, social and cultural make-up.  Although the
Constitution clearly gives Congress the authority to make or alter such state
regulations, Congress has been very reluctant to do so. However, Congress has
intervened in state election procedures when, for example, it gave women the right
to vote and when it passed the Voting Rights Act. Nonetheless, states' rights have
taken precedence when it comes to conducting elections.

(sections above by Eva Waskell, ibid.)

 But the FEC proposed standards don't even consider networks, database
 replication with offsite storage, and as mentioned earlier, cryptographic security.

Read the new drafts, already past their first public meetings.  Read also the state
documents.

Cheers,

Ed Gerck





Re: smartcards, electronic ballots

2001-02-03 Thread Ed Gerck



William Allen Simpson wrote:

 And in the same vein, I forwarded Ed Gerck's list of published
 'requirements' to Lynn.  She intends to use them as a perfect example
 of what we DO NOT want!

See below, before you set yourself to re-invent the wheel.

 Ed Gerck wrote:
  1. Sixteen requirements for voting. The requirements are technologically
  neutral and can be applied to paper, electronic or Internet systems.  There
  is an extensive discussion of alternatives, before the requirements are
  summarized. Available at http://www.thebell.net/archives/thebell1.7.pdf ,
  page 3.
 
 There are some requirements that are nearly identical to those that
 we've selected.

The 16 requirements include many that are either a recommended standard by the FEC
or are being considered for recommended standards.  I did not re-invent the wheel.

  And I like the kudos to IETF, and open systems.

 However, the first half dozen are based on the bad presumption that:

 1. Fail-safe voter privacy. Define: “voter privacy is the
 inability to know who the voter is.” Assure voter privacy
 even if everything fails and everyone colludes.

 First of all, that's not "privacy", that's "anonymity".

Just for you. See the technical papers in http://www.safevote.com/information.htm,
especially  ftp://ftp.inf.ethz.ch/pub/publications/papers/ti/isc/wwwisc/HirSak00.pdf
and its references. See Gennaro's paper quoted at the end, as well.

Further, see also my posting here of Oct/99, in which I wrote:

The current useful voting properties as proposed by Fujioka,
Okamoto and Ohta, 1992, and  Benaloh and Tuinstra, 1994, are:

1. Completeness: All valid votes are counted correctly, if all participants are honest.

2. Robustness: Dishonest voters, other participants or outsiders cannot disturb or
disrupt an election.

3. Privacy: The votes are cast anonymously.

4. Unreusability: Every voter can vote only once.

5. Eligibility: Only legitimate voters can vote.

6. Fairness: A voter casts his vote independently and is not influenced (e.g., by
publishing intermediate results of the election, or by copying and casting the
encrypted vote slip of another voter as his own vote).

7. Verifiability: The tally cannot be forged, as it can be verified by every voter.
Verifiability is local if a voter can only check whether his own vote is counted
correctly; it is universal if it can be verified that all votes are counted correctly.

8. Receipt-freeness: A voter cannot prove to a coercer how he has voted. As a result,
verifiable vote buying is impossible.



 We have voter registration precisely so that we know who the voters
 are!  We are not changing voter registration

You are mixing apples with speedboats. The 16 requirements apply specifically to
voting, as it says. Of course, in voter registration the election officials must know
who the voter is (and more -- where the voter lives, etc.).

BTW, there are other requirements being discussed specifically for voter registration,
and here privacy will also be a BIG issue.  One that is being infringed today by
third-party voter registration services that transfer the voter data to the state but
keep copies, copies which they are legally allowed to share with their 'affiliates'
(read: anyone that signs a contract with them).

 4. Fail-safe privacy in universal verifiability. If the
 encrypted ballots are successfully attacked, even with
 court order, the voter’s name must not be revealed. In
 addition, the system must provide for “information-theoretic
 privacy” (i.e., privacy which cannot be broken
 by computation, even with unbounded time and
 resources) in contrast to systems that would only provide
 for “computational privacy” (i.e., privacy which could be
 broken by computation, given time and resources).

 I cannot believe any security analyst worth his salt could 'specify'
 such a requirement.  When I specified computational infeasibility of
 100 years, the Science staff came back and asked how NIST would test
 that?  We reduced it to 10 years, something that might be achievable.

You are, again, mistaken. See the classical paper by Rosario Gennaro and others,
at  http://www.research.ibm.com/security/election.ps BTW, this is their remark on
this (and, voter privacy):

  Privacy of an individual vote is assured against any reasonably sized coalition of
  parties (not including the voter herself). That is, unless the number of colluding
  parties exceeds a certain threshold, different ballots are indistinguishable
  irrespective of the contained votes. We say that information-theoretic privacy is
  achieved when the ballots are indistinguishable independent of any cryptographic
  assumption; otherwise we will say that computational privacy is achieved.
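
To make the distinction concrete, here is a minimal sketch in Python, assuming a bare
additive secret-sharing scheme (an illustration only -- Gennaro et al. use verifiable
secret sharing with threshold reconstruction, not this toy): privacy holds against
unbounded computation because partial views are uniformly random.

import secrets

def share(v, n, m):
    # Split v into n additive shares mod m; any n-1 shares are jointly
    # uniform and carry no information about v.
    shares = [secrets.randbelow(m) for _ in range(n - 1)]
    shares.append((v - sum(shares)) % m)
    return shares

def reconstruct(shares, m):
    return sum(shares) % m

m, vote = 2**61, 1            # toy parameters
s = share(vote, 5, m)
assert reconstruct(s, m) == vote
# No amount of computation recovers the vote from only 4 of the 5 shares.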

BTW,  my replies above might also indicate that the US election process would be much
improved if proper attention is given to what must not be

Re: electronic ballots

2001-02-01 Thread Ed Gerck



William Allen Simpson wrote:

 -BEGIN PGP SIGNED MESSAGE-

 I've been working with Congresswoman Lynn Rivers on language for
 electronic ballots.  My intent is to specify the security sensitive
 information, and encourage widespread implementation in a competitive
 environment.  We'd like feedback.

I suggest you take a look at:

1. Sixteen requirements for voting. The requirements are technologically neutral
and can be applied to paper, electronic or Internet systems.  There is an extensive
discussion of alternatives, before the requirements are summarized. Available at
http://www.thebell.net/archives/thebell1.7.pdf , page 3.

2. Talk to Assemblyman Kevin Shelley (D, CA) , who proposed an Online Voting
Modernization Act this January. Contact through [EMAIL PROTECTED]

3. Talk to Assemblyman John Longville (D, CA), who chaired the California
Legislative Hearing on Elections this Jan/16-17.

4. My testimony to the California Legislative Hearing on Elections, available in a
verbatim copy from the tapes, at
http://www.mail-archive.com/tech@ivta.org/msg00104.html

Cheers,

Ed Gerck





Election Technology Expo -- Jan 16, 2001

2001-01-12 Thread Ed Gerck


[Perry -- this may interest cryptography]

List:

The Expo was announced right before Christmas by the SoS, and The Bell
immediately carried the announcement on the website www.thebell.net and in
the December edition.  In case you missed it, the Expo is next week.

The California Secretary of State is sponsoring the Election Technology
Expo. The Expo will be at the Hyatt Regency in Sacramento, January 16,
from 9:00 to 3:00. It opens at 8:00 for registration. There will also be
a series of panels in the morning.

For information, contact

Bruce McDannold [EMAIL PROTECTED]


Cheers,

Ed Gerck




Re: snake-oil projects -- 2 University Presidents Will Try to Improve Voting

2000-12-19 Thread Ed Gerck



Derek Atkins wrote:

 It's not snake oil if you can possibly produce it.  There are plenty
 of "electronic voting" (read: NOT internet voting) systems that are
 "foolproof, secure, simple to operate", so the question is whether you
 can make it affordable.  This is not selling a product, it's selling a
 project goal.  Therefore, it is not snake oil.

Derek:

The idea of more people working on voting projects and products
is excellent and useful, IMO. What is less than useful is promising
what cannot be delivered. Clearly, promising to "give everyone
a record of their vote, so they know exactly what they have done at the
polls" as Baltimore's words did -- and that is why you will like the system
I produce -- is snake oil of good quality, IMO.

I know you are at MIT, but please do not feel offended -- just help
correct it, if you can.

Cheers,

Ed Gerck

 Ed Gerck [EMAIL PROTECTED] writes:

  In http://www.nytimes.com/2000/12/15/politics/15MIT.html
 
 "The idea," Dr. Baltimore said, "is to produce a system that is 
foolproof, secure,
simple to operate and affordable so that it can be in every precinct in 
America. The
system should also give everyone a record of their vote, so they know 
exactly
what they have done at the polls."
 
  and which allows the voter to prove how he voted (and cash in), might
  someone add.
 
  I guess snake-oil projects are getting good company.  Also, they all seem to
  like to promise "foolproof, secure, simple to operate and affordable".





snake-oil projects -- 2 University Presidents Will Try to Improve Voting

2000-12-17 Thread Ed Gerck


In http://www.nytimes.com/2000/12/15/politics/15MIT.html

   "The idea," Dr. Baltimore said, "is to produce a system that is foolproof, 
secure,
  simple to operate and affordable so that it can be in every precinct in 
America. The
  system should also give everyone a record of their vote, so they know exactly
  what they have done at the polls."

and which allows the voter to prove how he voted (and cash in), might
someone add.

I guess snake-oil projects are getting good company.  Also, they all seem to
like to promise "foolproof, secure, simple to operate and affordable".

Cheers,

Ed Gerck





Internet voting attack test

2000-11-01 Thread Ed Gerck


The purpose of the information released in this page is to help hackers and security
specialists attack the Internet Shadow Election test in Contra Costa County. This test
is an official test of Internet voting contracted with the state of California.

http://www.safevote.com/tech.htm

Cheers,

Ed Gerck





Re: Non-Repudiation in the Digital Environment (was Re: First Monday August 2000)

2000-10-18 Thread Ed Gerck

Tony,

Your examples were so bad!

;-) of course, I meant "good" as in that new IBM commercial where the IBM guy says that
the IBM laptop is "bad" ;-)

I appreciate your comments and, yes, very often society uses contrary words to
mean something else.

But if we step aside a bit from the usefulness or not of dumbed-down soundbites
or current slang in technical documents that should be precise, I see this
"identity theft" discussion mainly as a counterexample to those that like to require
a legal context for every word -- whereas we do not even have a worldwide legal context.
As we saw, lawyers and lawmakers are oftentimes the first ones to use the term
"identity theft" -- which simply is not a theft, it is impersonation.  Of course, I
continue to hope that we in crypto don't have to use "identity theft" as well. But
should they continue to use it?

Some lawyers don't think so, including Mac Norton in this list who wrote:

 Speaking as a lawyer, one of "they," they should not continue to use
 it.  Identity theft might be accomplishable in some scenario, one in which
 I somehow induced amnesia in you, for example, but otherwise the use of
 the term to cover what you rightly point out is simply impersonation does a
 disservice to my profession as well as yours.

I also think that using "identity theft" for what actually is impersonation
is a disservice to our profession. In the same way, I think we need to
make sure lay people understand that non-repudiation in the technical realm
is not an absolute authentication or undeniable proof.  If we can do only this --
deny that non-repudiation means undeniable proof -- it will already be very useful.
Then, we may be able to apply the concept of non-repudiation as we feel the need
for it in protocols -- and note that we did not invent it, rather we discovered it.
Authentication is not sufficient to describe validity.

Cheers,

Ed Gerck




Re: Non-Repudiation in the Digital Environment (was Re: First Monday August 2000)

2000-10-07 Thread Ed Gerck



"Arnold G. Reinhold" wrote:

 In public-key cryptography "Non-Repudiation" means that the
 probability that a particular result could have been produced without
 access to the secret key is vanishingly small, subject to the
 assumption that the underlying public-key problem is difficult.  If
 that property had been called "the key binding property" or "condition
 Z," or some other mathese name, we would all be able to look at this
 notion more objectively. "Non-repudiation" has too powerful an
 association with the real world.

Your definition is not standard. The Handbook of Applied Cryptography by Menezes
et al. defines non-repudiation as a service that prevents the denial of an act.  The
same is the current definition in PKIX, as well as in X.509.  This does not mean,
however, as some may suppose, that the act cannot be denied -- for example,
it can be denied by a counter-authentication that presents an accepted proof.

Thus, non-repudiation is not a stronger authentication -- nor a longer-lived
authentication.  Authentication is an assertion that something is true. Non-
repudiation is a negation that something is false. Neither is absolute.  And
they are quite different when non-boolean variables (i.e., real-world variables)
are used. They are complementary concepts and *both* need to be used, or
we lose expressive power in protocols, contracts, etc.

Cheers,

Ed Gerck



 To transfer the cryptographic meaning of "non-repudiation" to a legal
 presumption against repudiation requires legislative acceptance of four
 things:

 1. the mathematically unproven assumptions in public key cryptography

 2. the binding of a particular public key to a person

 3. the ability of an ordinary individual to keep a private key secret

 4. holding the individual responsible for failure to do so.

 As for 1, note that at the moment there is not even consensus as to
 the long term security of, say, a 1024-bit RSA key. As to 2, read
 the Verisign certification practice statement. As to 4, note that in
 the US we do not presently hold individuals responsible for loss of a
 credit card.

 The most problematic assumption is 3. McCullagh lists a couple of
 attacks, but there are many more. Here is my incomplete list:

 1. Planting a program on the user's computer to capture their keyring
 and passphrase.

 2. Replacing the users copy of the cryptographic program with a
 doctored version

 3. Planting a bug in their keyboard to capture key strokes

 4.* Using a microTV camera to capture passwords and PIN numbers

 5.* Substituting documents. (You think you are buying a pizza but you
 are actually signing a deed to your house.)

 6. Public/private key pairs generated by a third party whose security
 is less than perfect

 7. Poor or deliberately weak random number generation at key creation

 8.* Algorithm substitution (e.g. multiprime) that weakens security to
 reduce computation times

 9. Guessable passphrases and PINs

 10.* Allowing someone else to use your key (does the president of
 World Wide Widget really hold the key token, or does he give it to
 his secretary?)

 11.* Con artist techniques ("I'm an field agent from CyberSec --
 here's my ID card -- and we'd like your help in tracking down child
 pornography dealers on the Internet. We'll need your key token and
 PIN. ")

 12.* Finding ways to penetrate "tamper proof" mechanisms, e.g. power
 fluctuation attacks.

 McCullagh believes that "trusted systems," which he defines as "at
 least B1 (TCSEC)/E3 (ITSEC)/ or even possibly B2 (TCSEC)/E4 (ITSEC),"
 can provide a basis for non-repudiation in the legal sense.  He is
 under the apprehension that "A trusted computing system performs in
 accordance with its documented specification and will prevent any
 unauthorised activity."  Since Mr. McCullagh's background is in law,
 let me provide an equivalent statement: "Laws reflect the public's
 consensus of what is right and wrong and the judicial system fairly
 and accurately enforces those laws." Both are statements of a lofty
 goal, not a reality that anyone has been able to achieve.

 Well designed cryptographic tokens can counter some of the attacks I
 listed, but not all. The ones I marked with an asterisk are still
 applicable and there is still the problem of verifying and auditing
 the token manufacturer, a lucrative target for organized crime.

 I can't address the legal arguments he makes since he is in
 Australia, but my understanding of the recently enacted electronic
 signature law in the US is that it attempts to put electronic
 signatures on exactly the same legal footing as paper signatures. It
 has no special status for PKC signatures. Clicking an http "I Accept"
 button is just as valid, as I understand the law.

 The term "non-repudiation" should be retired.  The best that one can
 say about public key signature systems for use 

Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-12 Thread Ed Gerck



lcs Mixmaster Remailer wrote:

 This is in contrast to the practice in the X.509 PKI, where a root CA
 has the ability to delegate trust as far as it wishes.

This is not correct. In X.509 it is the verifier that defines how a delegation
is accepted and to how many levels, irrespective of what was signed.

The contrast is not true for PGP either.  A signer in PGP may sign
any number of keys that may have a transitive relationship to one
another's signatures, as far as the signer wishes -- what the verifier
does (as in X.509) is another story.


 If your browser
 trusts Verisign, and Verisign trusts someone else, you automatically
 trust that other party.

Depends on the browser.  This is not a requirement or feature of X.509,
though the two are often confused. For an example where it is not, see Apache.
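
A minimal sketch of this verifier-side control, with a hypothetical minimal
certificate record (real X.509 processing adds signature, validity and constraint
checks): the verifier, not the root CA, decides how deep a delegation chain it accepts.

from dataclasses import dataclass

@dataclass
class Certificate:          # hypothetical minimal record, not real X.509
    subject: str
    issuer: str

def accept_chain(chain, trusted_roots, max_depth):
    # Local verifier policy: the chain must end at a root *we* chose to
    # trust AND be no longer than the depth *we* chose to allow.
    if len(chain) > max_depth:
        return False        # verifier refuses deep delegation outright
    if chain[-1].issuer not in trusted_roots:
        return False
    return all(chain[i].issuer == chain[i + 1].subject
               for i in range(len(chain) - 1))

chain = [Certificate("server", "SubCA"), Certificate("SubCA", "Root")]
print(accept_chain(chain, {"Root"}, max_depth=2))   # True
print(accept_chain(chain, {"Root"}, max_depth=1))   # False: same chain,
                                                    # rejected by local policy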


Cheers,

Ed Gerck





Re: Secrets & Lies, a comment

2000-09-05 Thread Ed Gerck



[EMAIL PROTECTED] wrote:

 Ed says,

  The solution is to use a multifold of links, arranged in time and space
  such that rather than making the impossible assumption that "no part
  will fail at any time," we can design a system where up to M parts can
  fail at any time provided that not all M parts fail at the same time --
  where M can be the entire number of parts.

 This sounds like `proactive security`, as defined in several cryptographic
 works. You may want to check it out at http://www.hrl.il.ibm.com/proactive

But you make the assumption that "most systems are secure most of the time,"
which I do not find necessary.

BTW, one of the earliest examples of the security design I mentioned can be
found in the Hindu governments of the Mogul period, which are known to have used
at least three parallel reporting channels to survey their provinces with some degree
of reliability, notwithstanding the additional effort.

Cheers,

Ed Gerck





Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-05 Thread Ed Gerck

Ed Gerck wrote:
 Even though the web-of-trust seems to be a pretty good part of PGP,
 IMO it is actually its Achilles heel.

I agree with most comments, but they seem to deal more with symptoms. Let
me just clarify/justify the above and why I think it is actually the root
cause of the problems.

PGP is based on an “introducer-model” which depends on
the integrity of a chain of authenticators, the users
themselves. The users and their keys are referred from one
user to the other, as in a friendship circle, forming an
authentication ring, modeled as a list or “web-of-trust”.
The web-of-trust model has some problems, to wit:

1. At the end, you may not know very well the last person who
entered the ring ... but you hope that someone else in the ring
does!

2. You may have different rings with “contact points”
which guarantee the referrals. However, no user can know for
sure if everyone in his authentication ring has a valid entry.

3. Let's use the term “chain” to denote such connected rings, which
can also, of course, have multiple connections. The reader should
notice further that the maintenance of this chain -- changing,
adding or deleting data -- is done by the authenticators themselves
in a happenstance pattern.

4. There is no guarantee if and when the chain is up-to-date.

5. Everyone familiar with the classical problem (or need) of
file-locking in a multi-user environment will recognize that
there is no “file-locking” mechanism here.

6. PGP does not scale well in size (because of the aforementioned
asynchronous maintenance difficulties of the web of trust)
or in time (because of the same maintenance problems reflected
in the handling of revocation certificates -- the CRL counterpart
for PGP certificates).

So, while PGP enforces a "hard" trust policy with "trust is
intransitive" to set up entries in the web of trust, it uses a
"soft" policy to upkeep entries, without discussing their
validity/gauge or allowing for time factors and lack of synch.

This is not a dismissive treatment of PGP! One of the benefits
of PGP is that it can interoperate with a CA fully-trusted by
all parties in a domain (such as an internal CA in a company)
that is willing to guarantee certificates as a trusted introducer.
Better tools would certainly be necessary for central administration
of PGP trust parameters in a corporate system, but the flexibility of
PGP makes it a good example of a quasi-decentralized system.

Because there is no entity responsible if (or when)
something goes wrong – not even the user – the use of PGP
in a commercial situation is difficult and may not
adequately protect the business interests involved.

But again, within a circle of close friends or clients this is not
important.

Cheers,

Ed Gerck





Re: reflecting on PGP, keyservers, and the Web of Trust

2000-09-01 Thread Ed Gerck



Greg Rose wrote:

 I was an early adopter of PGP, and put a lot of effort into advancing the
 Web of Trust. I use PGP actively on a daily basis. Nevertheless, I have
 been disillusioned for some time, and today's fun prodded me into writing
 this. Here is a list of things which I consider to be problems with "the
 PGP Scene":

I discussed these problems (and others, listed in http://www.mcg.org.br/cert.htm)
with the PGP management during two week-long visits that a former Director and
their security architect made to me while I was in Brazil in 1997/8.  Some
of the problems I mentioned have been solved; others have remained. Some solutions
are indicated in the cert.htm paper, including the question of central administration
with its pros and cons. I think that PGP is a fine program for communication within a
small circle of friends but, beyond that (which was the initial goal anyway), PGP does
not have the capabilities to do the job.  However, PGP could be used as a component
in a system that would provide for a wider usage scope -- which, however, would require
IMO a radical re-design of the web-of-trust. Even though the web-of-trust seems to be
a pretty good part of PGP, IMO it is actually its Achilles heel.

BTW, many lawyers like to use PGP and it is a good usage niche.  Here, in the
North Bay Area of SF, PGP is not uncommon in such small-group business users.

Cheers,

Ed Gerck





quantitative levels of trust, Re: Secrets & Lies, a comment

2000-09-01 Thread Ed Gerck



David Honig wrote:

 At 04:45 PM 8/30/00 -0700, Ed Gerck wrote:
 about whether they work.  So, understanding the mathematical
 properties of trust (trust not as an emotion but as something
 essentially communicable), how trust can provide an answer

 Hmm, the flow of trust.

 There are no such things as holes, just missing electrons.

(note: holes have "mass" and it turns out to be negative, so that
it is not just a missing electron but it interacts with electrons and
other holes)

 I wonder if its not trust, but anti-trust ('secret' information) that flows.
 Each 'trusted' node must be a diode and you can ask what if it breaks
 down.

Anti-trust or the complement of trust exists as well -- it is when you knowingly
refuse to trust, when you distrust (I call it cotrust). This is distinct from lack of
trust (i.e., neutral trust, I call it atrust) when you don't know whether you could
trust or not.  And there is also unknown trust (I call it ignorance), when you don't
know you should assign a trust decision and so you don't even choose one of the
former three possibilities (trust, cotrust or atrust) -- being totally blind as to even
the need to choose.

These four types of trust have mathematical counterparts in software and can be
likewise used to "tag" information with a "validity label", providing for a reliance
metric used in four-level logic calculations.

The degrees of trust can thus be expanded beyond the simple "trust or no trust"
dilemma, into a set of four trust values which can be shown to be ordered from a
least value (unknown trust) to a highest value (trust).  This metric can be further
subdivided, producing 64 degrees of trust -- again, ordered from least value to
highest value.  And, so on, to highest orders still.  Thus, we can have quantitative
levels of trust --  they are not ordinal numbers,  they are cardinal numbers.

Again, software is able to use these values in order to properly process information
according to a reliance metric, avoiding the pitfalls of simpler systems that
just consider "trust" or "no trust" as the end-all be-all of trust decisions.
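
A minimal sketch of such a four-level metric in Python. The text above fixes only the
two endpoints of the ordering (unknown trust lowest, trust highest); the middle ranks
and the weakest-link combination rule are assumptions for illustration.

from enum import IntEnum

class Trust(IntEnum):
    IGNORANCE = 0   # unknown trust: unaware a trust decision is even needed
    ATRUST    = 1   # neutral trust: aware, but no basis to trust or distrust
    COTRUST   = 2   # distrust: a known decision not to trust
    TRUST     = 3   # trust: a known decision to trust

def combine(*values):
    # Weakest-link rule (an assumption): reliance on a chain is bounded
    # by its least-reliable link.
    return Trust(min(values))

print(combine(Trust.TRUST, Trust.ATRUST).name)   # ATRUST

# Each level could be subdivided (e.g., 16 sub-degrees per level, giving
# the 64 degrees mentioned above) without changing the ordering.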

This approach thus provides for a series of nested approximations (as higher orders
of trust are introduced) that solves the problem of dealing with incomplete
information, which solution can be optimized for accuracy, reliability and cost.  Of
course, we cannot have at the same time (in general) 100% accuracy, 100% reliability
and zero cost -- but the approach allows solution spaces to be found, some of which
may have acceptable values for accuracy, reliability and cost (and other estimators,
such as delay time). Note that, in general, information is always incomplete -- so,
this approach is already in use even though just intuitively.  Information is
incomplete either because we simply do not have it or because it was
deleted/changed/inserted by a fraudster/bug and we do not know it.

Dealing with incomplete information is therefore the real security issue here.

I note that findings similar to mine were made in the field of relational databases
some 20 years ago already, when a need was felt to deal with incomplete information.
There, the so-called "null-theory" model also allows four levels of reliance to be
defined, levels which can be assigned to the four levels of trust I mention above.

Cheers,

Ed Gerck






Secrets & Lies, a comment

2000-08-31 Thread Ed Gerck


List:

I welcome Bruce's new book on the limits of cryptography in providing
security.  But saying that "no computer is secure, any network can be
hacked" is a sweeping overstatement, as false as saying the opposite.

The point is that while it is a good role to be positioned with
an ambulance at the bottom of the cliff and then rescue (for
a price) those that fall down the cliff, or to sell insurance to
those that fall, it is just as meritorious to make it very, very
hard, if not entirely impossible, to fall down that same cliff --
even though that would stop the business down there.

The main problem I see with Bruce's statement is that it is
not true.  The basis for the solution is simple: recognize
that cryptography is about keys and locks, whereas trust is
about whether they work.  So, understanding the mathematical
properties of trust (trust not as an emotion but as something
essentially communicable), how trust can provide an answer
to that which we cannot measure, and how we can induce trust
over networks of networks using machine-human protocols will
go much further than denying that a solution exists.

More specifically, what would such a solution be? In this case,
we need to change paradigms and avoid the "Fort Knox Syndrome"
so widely seen in the Internet security community -- make it
stronger! But in this model the entire chain can still be compromised
by failure of one weak link -- even if that link is made stronger.  The
solution is to use a multifold of links, arranged in time and space
such that rather than making the impossible assumption that "no part
will fail at any time," we can design a system where up to M parts can
fail at any time provided that not all M parts fail at the same time --
where M can be the entire number of parts.

Further, rather than seeking "infinite protection" at one point (which is
clearly impossible) we set up a system where a measure of protection
as large as desired can be attained by using an open-ended number
M of points, each one individually affording some "finite" protection
and contributing to higher-orders of integrity.
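
A minimal sketch of the quantitative intuition, with assumed per-part failure numbers
and assuming independent failures: the design fails only when all parts fail at once.

def system_failure(p, m_parts):
    # Probability that all m independent parts fail at the same time.
    return p ** m_parts

p = 0.1                       # assumed per-part failure probability
for m_parts in (1, 2, 4, 8):
    print(m_parts, f"{system_failure(p, m_parts):.0e}")
# 1e-01, 1e-02, 1e-04, 1e-08: each added "finite" safeguard multiplies
# the protection; no single part needs to be infinitely strong.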

Some of these principles are gaining public exposure in products
and open protocols designed by myself at Safevote, Inc. and are for
example discussed in the article "From Voting to Internet Voting" in
the May 2000 issue of The Bell, with copy at
http://thebell.net/archives/thebell1.1.pdf , as well as in the paper
"Overview of Certification Systems" with copy at
 http://thebell.net/papers/certover.pdf
There are also pending patents on this technology. Open discussions
at the IVTA (Internet Voting Technology Alliance) at ivta.org will
surely deal with these principles more and more.

Please see my comments from the viewpoint of understanding what
needs to be done in terms of raising awareness about the
difficulties -- kudos to Bruce! However, denying that a solution
exists is IMO not intellectually fair, nor in accordance with what
we have already learned.

Cheers,

Ed Gerck




Newsletter on Internet voting, privacy and security issues

2000-04-13 Thread Ed Gerck

List:

I would like to extend an invitation to the list to read "The Bell" 
-- the first newsletter entirely dedicated to Internet voting and 
collaborative decision-making in Internet protocols. Electronic and 
hard copy subscriptions are available at www.thebell.net.

Safevote [1] is sponsoring the newsletter as an open forum.  Regarding 
the name "The Bell", the idea was to use the image of a bell because 
mission bells were used in colonial California for telling time, 
announcing events, and for passing on news from one city to another. 
This newsletter intends to serve similar purposes and is also a public 
service open and free for all who may want it.

With monthly 16-page issues, "The Bell" will focus on privacy,
security and technology used in Internet voting.

In Internet tradition, the newsletter is available free over the 
Internet (PDF format), or at an annual cost of $30 for first class 
mail delivery in hard copy.   The first issue contains an extensive 
marketing study of the year 2000 U.S. public elections market - a 
study that will be serialized in upcoming issues.  It also includes 
an article on today's public voting systems used in the United States 
by election expert Roy G. Saltman and an essay by myself outlining the
need for a strong separation between identification and authentication
in order to ensure fair Internet voting protocols.

Internet voting and its potential impact on society will increasingly
call upon us to understand and keep abreast of the latest developments 
in various fields of work. As a developer in this market, I see a 
widening gap between the 100-year-old voting technologies in use today 
and what Internet voting needs to take into account. The Bell is
dedicated to helping fill this gap -- perhaps with your help as well.

Cheers,

Ed Gerck

[1] Safevote (www.safevote.com) is a founding member of the Internet 
Voting Technology Alliance (www.ivta.org) and develops OEM (Original 
Equipment Manufacturer) systems for Internet voting, polling, public 
elections, bidding, consensus assessment and other Internet decision-
making applications. The Bell is edited at Safevote. The May 2000 issue 
is now available at the "The Bell" Web site, www.thebell.net




Announcement ivta.org

2000-02-13 Thread Ed Gerck


List:

  Announcement ivta.org

Internet voting is a case where privacy must be protected, so that
arguments to justify losing voter privacy in the good name of security
are simply not possible.  This firmly posits security as a protection
of privacy -- not as an enemy of privacy -- in the problem-solving
assumptions to be considered.

In this context, an international team of experts and companies is calling for
open discussions on Internet voting technology.  A public founding
assembly will take place February 28 in Washington D.C. at 9 a.m.

Details at http://www.ivta.org

Cheers,

Ed Gerck



Yet Another Most Secure And Encrypted Service

2000-02-08 Thread Ed Gerck


List:

Looks like superlatives in security are not a scarce commodity --
from "super" to "supra" to "most" secure, it is all an easy question
of how many bits one has, or so it still seems in this YAMSAES:

---
http://messages.yahoo.com/bbs?action=mboard=4688083tid=mailsid=4688083mid=13969
 2/6/00 8:28 pm

 Supra Secure Mail (SSM), the highest encryption technology available.

 SSM technology provides the most secure and encrypted Email service transaction
 technology in the world. This breakthrough developed exclusively by SupraNet AG
 will quickly become the most accepted encryption standard for business to business
 (B2B) communication and safe financial transactions. The protocol for security gives
 the user the ability to have a 448 bit key which exceeds any available encryption in
 the world. It is the first key with over 400 bit encryption technology which is a
 worldwide milestone in the development of security and privacy software compared
 to the current standard of 128 to 256 bit keys.

 www.supra.net


To set the record once and for all, we should create the (n+1)-key -- recursively
defined as a key which always has one bit more than the highest bit of any key in
existence, even of itself.  Without any regard, of course, to the protocol it uses.

Cheers,

Ed Gerck




Re: prove me wrong, go to jail

2000-01-27 Thread Ed Gerck


Ted Lemon wrote:

 Amateurs in the crypto world seem to get bitten by this fairly
 frequently - read the recent transcripts to the New York preliminary
 injunction on the DeCSS case for supporting evidence.  If you're out
 to prove a point, and you're riding the fine edge of legality and
 civil disobedience in doing it, it helps to make sure that you keep
 your nose clean and stay focused on what you're really trying to do,
 rather than, e.g., venting your anger or trying to get people who
 didn't ask you to help them to pay for your "help."

Yes, this is also my opinion -- "one may question a guy's right
to charge for advice that was not requested, but why should he
provide it for free?".   And there are also, IMO, other sides to the
issue (public trust, public gullibility, unchecked fraud, indirect government
responsibility, regulation, etc.), which is why I suggested we
could reflect on how security risks should be handled differently.  In
fact, if there were a pre-defined reward for those that find holes
in today's increasingly electronic and "secure" systems, then companies
could rely on that reward both as a payment cap and as a way to separate
reward from extortion.  I can imagine a company writing, for the benefit
of all:

 We support open assessment of risks -- if you find a security fault
 in our systems, please tell us first so that we can fix it first.  We commit
 ourselves to making public all such communications after a solution
 is found, so that publication will not compromise the system further. We
 also reward any recognized security fault called to our attention, up to
 US$ 1,000 from a minimum of US$ 50 -- value to be defined by us in
 relationship to known faults and to its relevance.  To be eligible for
 the reward, we must be the first and only ones to be informed about it. The
 company reserves the right to consider legal measures to the full extent
 of the law if a fault is discovered or a reward is pursued by illegal actions.

Of course, the above is not perfect and is probably too short to
satisfy all the legal ins and outs, but the idea is to use the reward
mechanism in a positive way to counter what I may call a "tendency"
and its potential bad effects, while preserving the good ones -- especially
to enhance security in a quasi-public review process.

Comments?

Ed Gerck




Truth-In-Advertising proposal, was Re: prove me wrong, go to jail

2000-01-27 Thread Ed Gerck



Ted Lemon wrote:

 Ed Gerck wrote [reinserted for context]:

 In fact, if there were a pre-defined reward for those that find holes
 in today's increasingly electronic and "secure" systems, then companies
 could rely on that reward both as a payment cap and as a way to separate
 reward from extortion.  I can imagine a company writing, for the benefit
 of all:
 
  We support open assessment of risks -- if you find a security fault
  in our systems, please tell us first so that we can fix it first.  We commit
  ourselves to making public all such communications after a solution
  is found, so that publication will not compromise the system further. We
  also reward any recognized security fault called to our attention, up to
  US$ 1,000 from a minimum of US$ 50 -- value to be defined by us in
  relationship to known faults and to its relevance.  To be eligible for
  the reward, we must be the first and only ones to be informed about it. The
  company reserves the right to consider legal measures to the full extent
  of the law if a fault is discovered or a reward is pursued by illegal actions.
 
 Of course, the above is not perfect and is probably too short to
 satisfy all the legal ins and outs, but the idea is to use the reward
 mechanism in a positive way to counter what I may call a "tendency"
 and its potential bad effects, while preserving the good ones -- especially
 to enhance security in a quasi-public review process.
 
  Comments?

 My impression of banks is that as long as they can quantify the potential
 loss, they can just set the margins to allow for a reasonable profit
 over the loss.   That way, they don't have to worry about security
 unless a cost/benefit analysis shows that additional security will
 produce a significant profit.

Almost verbatim from a comment by a friend who used to be a lawyer
for banks in the City of London (N. Bohm): such experience as I have of
the attitudes of banks causes me to believe that, unless constrained by
law or otherwise persuaded by powerful social forces, banks will not be
willing to trust their customers to take the necessary precautions, but
will expect them to take the risk of failing to take the precautions -- and
also to take the costs of the bank's own failed precautions.

  In order for your scheme to work, you'd have to
 convince *someone* that auditing the bank will drop the margin by more
 than the cost of doing the audit, and indeed by enough more that it's
 an attractive prospect.

No, I don't think we need a YACA -- Yet Another Centralized Authority.
We can simply include a provision for a "cool-off" limit:

  Cool-Off Limit: if we do not act on and make public the comment provided
 in secret within a cool-off time limit of 30 business days of proven receipt,
 we agree that the comment can be made public by the proponent -- regardless
 of our future use of and reward for the comment.

In other words, in the absence of mandated standards and YACAs, our
approach to this issue would be to try to provide a credible Truth-In-Advertising
label, even though the mechanisms necessary to provide the independent
verification of that label may be somewhat weak or missing to date.

Given a worldwide Internet, with no worldwide uniformity or government,
maintaining the independence of such verification channels may be the best
way to provide such Truth-In-Advertising -- even if not all channels are
equally efficient/reliable/fair to all.

Cheers,

Ed Gerck





desirable properties of secure voting

1999-10-11 Thread Ed Gerck

List:

In reference to the recent discussions on voting, I am
preparing a list of desirable properties of voting, as a
secure protocol. Of course, it may not be desirable or even
possible for a particular election process to include *all*
of them -- the objective is just to have a list of choices.

I include below the properties that I have found in the literature.
I feel however that some properties are missing, such as a
"complementary" property when voting is mandatory (in order to
use the absentee ballot to help detect the occurrence of false
votes in the actual ballots for each voting section).  My list is
also not yet complete, and I am further looking into a voting
model which would allow one to include other properties in
an analytical (formal) way.  So, I would like to ask for the list's help
in adding to the known properties already proposed and listed below,
or improving upon them, or discussing/criticizing them.

This will also help, IMO, distinguish between those commercial
initiatives in voting products/services that comply with accepted
metrics (as given by known properties) or propose new and useful
metrics, and those that define self-serving or impossible metrics --
a toy machine-checkable form of the list is sketched right after it,
below.  Or, next, we might have the "Disappearing Vote, Inc."  ;-)

The current useful voting properties as proposed by Fujioka,
Okamoto and Ohta, 1992, and  Benaloh and Tuinstra, 1994, are:

1. Completeness: All valid votes are counted correctly, if all participants are honest.

2. Robustness: Dishonest voters, other participants or outsiders cannot disturb or
disrupt an election.

3. Privacy: The votes are cast anonymously.

4. Unreusability: Every voter can vote only once.

5. Eligibility: Only legitimate voters can vote.

6. Fairness: A voter casts his vote independently and is not influenced (e.g., by
publishing intermediate results of the election, or by copying and casting the
encrypted vote slip of another voter as his own vote).

7. Verifiability: The tally cannot be forged, as it can be verified by every voter.
Verifiability is local if a voter can only check whether his own vote is counted
correctly; it is universal if it can be verified that all votes are counted correctly.

8. Receipt-freeness: A voter cannot prove to a coercer how he has voted. As a result,
verifiable vote buying is impossible.
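
As the toy machine-checkable form mentioned above (a Python sketch; names abbreviate
the properties, and the sample system being assessed is hypothetical):

PROPERTIES = [
    "completeness", "robustness", "privacy", "unreusability",
    "eligibility", "fairness", "verifiability", "receipt_freeness",
]

def assess(claimed):
    # Report which accepted properties a proposed system does NOT claim.
    return [p for p in PROPERTIES if p not in claimed]

hypothetical_system = {"completeness", "eligibility", "unreusability"}
print(assess(hypothetical_system))
# ['robustness', 'privacy', 'fairness', 'verifiability', 'receipt_freeness']
# -- the gaps are where the hard questions live.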

Cheers,

Ed Gerck




Re: snake-oil voting?

1999-09-27 Thread Ed Gerck



Anonymous wrote:

 There is a wide variation in the amount of validation done at polling
 places.  In the local region none of this is done; you are asked to sign,
 but your signature is not checked.  No ID is required, and observers
 from political parties are not present.

In California, the situation regarding validation is different and improving
security-wise, see http://www.ss.ca.gov/elections/elections_q.htm with:

In late 1995, the Secretary of State was authorized by the Legislature and Governor
to begin development of our first-ever statewide voter registration database. By
building this cumulative database and eliminating many of the duplicate or erroneous
registrations, known as "deadwood", currently on the 58 counties' voter rolls, the
state and counties can reduce election costs and take another step toward prevention
of fraudulent voting. For the first time, county elections officials will be able to
maintain their voter registration files with the assistance of other elections offices
throughout the state, as well as interfacing with the Department of Motor Vehicles
and the Bureau of Vital Statistics. Duplicate registrations can be cancelled, persons
who have died can be removed from voter rolls, and cross-county registrations can be
updated once the CALVOTER database is in place.

Of relevance here is that cryptographic protocols may have better security support if
registration data is reliable and can be verified through more than one channel (e.g.,
using DMV data).

 It seems clear that the system is primarily oriented towards preventing
 fraud by election officials and those involved in setting up the
 electronic voting.

I can't see VoteHere providing that, as I explained before -- the system
is closer to "One Name, Any Vote" than to what it claims to be,
"One Person, One Vote".  There is no way you can verify whether a vote
with my name was just stuffed into the ballot box, for example.  But if everyone
would verify, and if everyone would have just one name, and if everyone
would be 100% honest, and if everyone would tell all the others what
they verified, then it would work ;-) -- but then no protocol is necessary,
or even possible, given the sheer size of the messages involved.

Cheers,

Ed Gerck





Re: snake-oil voting?

1999-09-24 Thread Ed Gerck



Anonymous wrote:

 Ed Gerck wrote:
 Did any of you see this
 http://www.votehere.net/content/Products.asp#InternetVotingSystems
 
 that proposes to authenticate the voter by asking for his/her/its SSN#?

 It looked like the idea for this part was to prevent double voting,
 plus make sure that only authorized people could vote.  It wasn't
 necessarily SSN, it could be name/address/date of birth or whatever.
 Similar to what is done when you go and vote in person.

The disconnect here is that it does not make sure that only authorized *people*
can vote -- but that an authorized he/she/it can vote.   Thus, I find that this is
not similar to when I go vote in *person*, when election officials will not allow
bots or dogs to vote ;-)  Here, anything can get an authorization, not just anyone.

And someone could easily have a directory of "voters" (real or made-up) and
automatically proceed to obtain authorization and vote with each one of these
"voters". There is nothing to prevent bulk voting commanded by one person.

 There was also this idea of what they earnestly called a VERN, Voter
 Encrypted Registration Number, which would be distributed in advance
 to people who were authorized to vote.  You'd provide your VERN along
 with your authenticating info (DOB/SSN/whatever) to prove that you were
 authorized.

Again, one is misled by the assumption that it would be distributed to people.
The VERN voting is similar to what we see in majordomo, for example, when a nonce
is sent to a *virtual* subscriber and must be mailed back from the same requesting
email address to confirm list subscription -- i.e., similar to casting a vote in
VoteHere.  But bots can also subscribe using majordomo.  So, since the VERN is
requested by and provided along with virtual info, there is no verification of the
voter's identity even as a person (i.e., not a bot), neither when the VERN is sent
to the presumed voter nor when the VERN is used.

 Any voting system ultimately relies on real world proof like this.

But there is no real-world proof here; everything happens entirely in the virtual
world.

 The real point of the protocol is to keep people from finding out HOW
 each person voted, while assuring that the vote count is correct.  There
 has been a lot of work on crypto protocols for secure voting and this
 appears to be what they have implemented.

I see no protocol; I see a table of names and nonces.  Each one can see
their name, but no one can verify whether two or more names may (wrongly)
correspond to one person, or whether a nonce listed is the correct one for a name.
So, "One Person, One Vote" as declared by VoteHere is more likely
"One Name, One Vote".

And what is to prevent populating the table with names/nonces?  If the absentee
ratio is large, there is considerable room to populate the table and still have less
than a given number of voters (assuming that the total number of voters is known,
which is not true for USENET or the Internet -- we don't even know how
many hosts the Internet has, let alone users, and known host statistics are
only relative to in-addr.arpa registration).

 This looks like a good system although it would be nice to see more
 details.  It certainly sounds better than alternatives.

What alternatives do you mean?

  With current
 Usenet votes everyone gets to see how you voted.  With this VoteHere
 system you could be assured that your vote was correct (because it would
 match the encryption you sent in),

But, you could not be sure whether any other vote is correct.

 nobody else could see how you voted,

This is not what the site says -- it says: "..decrypted by a simultaneous coordination
of election officials and observers, to obtain and/or audit the election results.",
which means that such a group can decrypt the tally result, but does not mean that it
cannot decrypt *one* entry from the name/nonce table.  Since the table is public, if
voting nonces can be decrypted one by one, then any vote can be identified.

To avoid this, the encryption method used to create the nonces would have to be
one-way with a trapdoor for the tally but with no trapdoor for any single nonce.
Let us suppose that this is true.  But since a tally is the sum of two or more nonces,
if I have *one* known vote (my own) then I can know any other vote in a sum of two
nonces.  And, knowing two votes, I can know any vote in a sum of three votes, and so
on.  I can continue the process and eventually learn how everyone voted, starting
only from my own vote -- even under the assumption that the encryption method used
to create the nonces has a trapdoor for the tally but none for any single nonce.
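
A toy worked example of the subtraction attack just described, assuming an additively
homomorphic tally over yes/no votes (all numbers made up):

def published_tally(votes):
    return sum(votes)          # what the officials jointly decrypt

votes = [1, 0]                 # 1 = yes, 0 = no; two voters: me and one other
my_vote = votes[0]
other_vote = published_tally(votes) - my_vote
print(other_vote)              # 0 -- the other voter's choice, exposed
# With k known votes, any tally over k+1 votes leaks the remaining vote,
# so knowledge cascades through overlapping partial tallies.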

 and yet you could be sure that the vote total was correct (by running the
 sum operation on the encrypted data, and verifying that the decryption
 of this is the claimed sum).

Given the information in the site, I cannot see how you deducted this. But,
even if what you say is true and I missed it elsewhere, 

snake-oil voting?

1999-09-23 Thread Ed Gerck


List:

Did any of you see this
http://www.votehere.net/content/Products.asp#InternetVotingSystems

that proposes to authenticate the voter by asking for his/her/its SSN#? And, by the
contents of ... an email msg sent to him/her/it?

Besides confusing authentication with identification, VoteHere also confuses the
problem of non-repudiation (which the PKIX WG has been struggling with for some
years), as they declare to have solved it as well:

 "...also prevents voters from later denying that they cast a ballot."

And, as customary in these cases, by declaring to use very strong keys:

 "Every voted ballot is encrypted using 1024-bit public-key encryption."

that, presumably to them and to the public, must be self-secure. But the "best claim"
is right at the beginning, when they postulate the VoteHere system, as commented
above, to be a "universally verifiable election system" with their own following
definition:

4.  Universally Verifiable Elections - secure, efficient, and maintains the voter's
privacy. Furthermore, anyone can verify that the election was conducted fairly,
without compromising voters' privacy.

Comments?

Cheers,

Ed Gerck





Demise of H.R. 1714 and its lessons for Internet voting

1999-01-02 Thread Ed Gerck


California – http://www.votesite.com/CIVI.PDF

This initiative by the Attorney General of California aims to
make California safe for Internet voting by creating an ad hoc
validity for Internet voting while vacating current laws
(including the California Constitution) and even theoretically
possible laws that could impede the use of Internet voting in
California.

The initiative, like the now failed federal initiative H.R. 1714
for digital signatures, seems however to have been drafted
far too broadly.  The provisions miss several requirements
in the “Voting System Standards” of the Federal Election
Commission in regard to registration, authentication,
auditing, delivery and processing.

The initiative seems to intend to provide security by
criminalizing fraud, an approach that has not been
very successful in reducing fraud -- as is well known in the
very field of public elections -- and which runs counter to
the current perception of a steady increase in fraud
schemes in spite of laws (http://www.mcg.org.br/dtrep.htm).

The initiative thus uses the concepts of digital signatures but
is actually counter to the idea behind digital signatures (viz.
to make fraud technically impossible).  The initiative centrally
uses terms that are as yet technically undefined, for example in
Section 16956, which requires Internet voting to “Provide support
for non-repudiation of all electronic electoral transactions
(including voter registration, the signing of petitions, and
the casting of ballots) between and among voters, elections
officials, and electoral jurisdictions.” – notwithstanding the
fact that “non-repudiation” and its supporting services remain
utterly unresolved matters in the technical discussions of the
IETF PKIX Working Group, as well as in other security groups.
Indeed, various Internet technical groups are making progress in
resolving these issues -- but one should not require the unknown
in a public initiative, since the scope can easily become
unreasonably broad.

And unreasonably broad scope was one of the main reasons
for the recent fall of H.R. 1714 (the “electronic signature” bill
in the House of Representatives) – for example, Commerce
Department General Counsel Andrew J. Pincus declared
(Washington Post 10/29/99) that  “unscrupulous people”
would be able to use the broad bill to their advantage by
preying on online consumers, leading to a loss of consumer
confidence in the Internet.  In other words, flawed public
initiatives in Internet voting risk discrediting Internet
voting technology in general, even before the technology
has a chance to mature, and may decrease public trust in
the very market that the law seems to be trying to create –
whereas law should merely regulate it.

Perhaps this is the answer that Internet stakeholders can
help provide to the Attorney General of California and to
the authors of similar initiatives: it is not the lack of laws
that is preventing Americans from keeping pace with the
Information Age and Internet voting, but rather the lack
of technology – which is a market opportunity for whoever
arrives first.

However, I note that these comments are not intended as a
dismissive treatment of the California Internet voting
initiative, which can be very useful as a prototype for a
“wish list” of Internet voting properties, especially when
seen in conjunction with other requirements (e.g., the
FEC’s “Voting System Standards”) and with what is provided
by current cryptographic protocols.

Comments are welcome.

Cheers,

Ed Gerck




On leaving the 56-bit key length limitation

1998-12-30 Thread Ed Gerck


List:

After reading gazillions of messages on the "weak cryptography" that
looms over some of us due to the recent 56-bit symmetric key-length
limitations of the Wassenaar Arrangement, it is perhaps time to thank
the various Jonahs for the wake-up call and take a different look
at life in Nineveh. And, like the Ninevites, I intend to show
below that the solution lies in our hands and we can indeed leave the
56-bit key length limitation -- not live with it.

1. First, I wish to point out that Theoretically-Secure Cryptographic
Systems (hereafter TSCS) do not depend on key-length for secrecy --
in their design region. In fact, Shannon already showed 50 years ago
that a TSCS does not depend on key-length when one works within the
system's "unicity distance". So, the factor we need to work on in
order to protect our privacy is "unicity distance" -- not key length.
And, as we all know, Wassenaar does not impose a limit on maximum
"unicity distance". So, we are free in regard to what matters to us,
our privacy, and we can leave the key length limitation.
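
For reference, a quick worked computation of the standard formula (my sketch,
added here for concreteness): the unicity distance is U = H(K)/D, where H(K)
is the key entropy in bits and D is the plaintext redundancy in bits per
character (for English, roughly log2(26) minus an entropy of about 1.5
bits/char -- the 1.5 figure is an assumed textbook estimate):

    # Unicity distance U = H(K) / D  (Shannon).
    # H(K): key entropy in bits; D: plaintext redundancy, bits/char.

    from math import log2

    def unicity_distance(key_bits, redundancy_bits_per_char):
        return key_bits / redundancy_bits_per_char

    d_english = log2(26) - 1.5               # ~3.2 bits/char for plain English
    print(unicity_distance(56, d_english))   # ~17.5 characters for a 56-bit key
    print(unicity_distance(56, 0.01))        # 5600 chars once redundancy is driven down

Within roughly the first 17 characters, even a computationally unbounded
attacker cannot single out the key from ciphertext alone; lowering D (e.g., by
compressing before encrypting) stretches that design region, which is what
item 5 below counts on.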

2. Even though one usually thinks only of "one-time pads" and XOR
encoding when talking about theoretically secure systems, that is
just the simplest case. Almost any cryptographic system (DES, IDEA,
RSA, RC4, RC5, Twofish, etc.) is theoretically secure if used within
its "unicity distance" -- even simple XOR encoding.

3. It is interesting to note that a TSCS cannot be attacked by
exhaustive key-search -- thus denying the very "brute-force attack"
that looms over protocols under key length limitations. So, I can
even safely use 56-bit DES (notwithstanding fast and cheap hardware
key-searching devices) within a TSCS's design region.

4. A TSCS is secure against any attacker, including one that is
computationally unbounded in time, space, and resources -- which may
well represent the broad class of attackers one faces in an open
environment, as compared with private resources.

5. Even though current versions of protocols such as SSL/TLS, S/MIME
and PGP ignore the issue of "unicity distance" (and indeed
compromise security for short key lengths)...  this can IMO be
improved quite competitively and fast, to provide commercially useful
message lengths even under the current key-length limitations. For
example, would a user feel limited if I said that 56-Kbyte messages
are allowed for each 56-bit session key -- with theoretically
unbreakable security?
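
As an illustration only (my sketch, not any deployed SSL/TLS or PGP mechanism):
a protocol could bound the material under any one key by re-keying every L
bytes, so each session key only ever sees its design region's worth of data.
The toy keystream below (SHA-256 in counter mode) is an assumed stand-in for
the real cipher, and key distribution is out of scope:

    import hashlib, os

    CHUNK = 56 * 1024              # e.g. 56 Kbytes per 56-bit session key

    def fresh_key():
        return os.urandom(7)       # 56 bits of session key material

    def keystream(key, n):
        # Toy keystream: SHA-256 in counter mode (assumption, not a
        # recommendation -- it only stands in for the actual cipher).
        out, ctr = b"", 0
        while len(out) < n:
            out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:n]

    def encrypt_message(data):
        # Re-key every CHUNK bytes so that no single key encrypts more
        # plaintext than its design region allows.
        pieces = []
        for i in range(0, len(data), CHUNK):
            key = fresh_key()
            chunk = data[i:i + CHUNK]
            ct = bytes(a ^ b for a, b in zip(chunk, keystream(key, len(chunk))))
            pieces.append((key, ct))
        return pieces

The point is only the key schedule: the user sees one long message, while each
key stays inside the region where the unicity-distance argument applies.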

6. The final word on cryptographic strength is thus not to be found
in enforced export controls on key length. It is to be found on our
own drawing boards, in terms of a system's "unicity distance" and the
design issues derived from it -- which are feasible to deal with and
lie in our hands.

7. To reach the heart of the matter, one must devise ways to thwart
automatic surveillance decoding -- which goes beyond merely making it
harder or theoretically impossible to decipher the messages, as dealt
with by TSCSs. The objective here is to make decryption either
ambiguous or ambiguously related to the key, even if successful (say,
by collusion, forced escrow, etc.). The attacker would then have
difficulty detecting that a key does NOT work, that it DOES work, and
what the decrypted message is, from a possible list of choices. To
contrast, in DES a given key either produces garbage or readable text
-- too easy for an attacker. One simple suggestion is to reverse one
or two random bits in each encrypted block of a TSCS (in a block's
"salt window", defined for example by the key itself) so that
automatic decoders would have to be much augmented to cover the
enlarged search space, and search time would also increase for every
tried key (the legitimate user would do all this rather quickly,
since he has the right key); a sketch follows below. Another
suggestion is to develop TSCSs that can provide ambiguous yet
meaningful decryptions -- among which the user's system can choose
based on some trusted information provided out-of-band. This would
also allow the user to always repudiate a message, whether one sent
as his own or one received by him, protecting his privacy if needed.
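
Here is a minimal sketch of the first suggestion (my illustration; the
window rule -- hash of key and block index -- is an assumed example, not
part of the original proposal):

    # "Salt window" sketch: after encrypting a 64-bit block, flip one
    # bit at a key-derived position. The legitimate receiver knows the
    # key and undoes the flip cheaply; an automatic key-search must try
    # every candidate position for every candidate key, multiplying its
    # work by the block size in bits (more with two flips).

    import hashlib

    BLOCK_BITS = 64

    def flip_position(key: bytes, block_index: int) -> int:
        # Key-derived position inside the block's "salt window".
        h = hashlib.sha256(key + block_index.to_bytes(8, "big")).digest()
        return h[0] % BLOCK_BITS

    def salt(block: int, key: bytes, block_index: int) -> int:
        return block ^ (1 << flip_position(key, block_index))

    unsalt = salt   # flipping the same bit again undoes the flip

The receiver applies unsalt with the true key before normal decryption; an
attacker lacking the key must treat all 64 positions as possible for every
key tried, and can no longer reject a wrong key just because the trial
decryption looks like garbage.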


To close: in order to extract the full benefit from such an approach
to security, as commented in the seven items above, I believe that one
must first revisit the concept of "unicity distance" -- since it is
usually regarded more as a "proof-of-concept" definition than as a
practical tool. This is IMO due to a series of unfortunate
historical facts -- beginning with the name, since it is not a
"distance" (i.e., it is not a metric function that provides a
distance).

BTW, on leaving the 56-bit key length limitation we may well bid
farewell to security systems which do not take the message's
statistics into account and which perfunctorily pad bits -- a funny
flaw, since the attackers of such systems always tend to do
othe