Re: [EMAIL PROTECTED]: Skype security evaluation]

2005-10-24 Thread Joseph Ashwood
- Original Message - 
Subject: [Tom Berson Skype Security Evaluation]


Tom Berson's conclusion is incorrect. One need only look at the publicly
available information. I couldn't find an immediate reference directly on the
Skype website, but Skype uses 1024-bit RSA keys, and the coverage of breaking
1024-bit RSA has been substantial. In the end, the security is flawed. Of
course I told them this years ago, when I said that 1024-bit RSA should be
retired in favor of larger keys, and several other people told them the same.

   Joe





Re: SHA1 broken?

2005-02-22 Thread Joseph Ashwood
- Original Message - 
From: Dave Howe [EMAIL PROTECTED]
Subject: Re: SHA1 broken?


  Indeed so. However, the argument "in 1998, an FPGA machine broke a DES
key in 72 hours, therefore TODAY..." assumes that (a) the problems are
comparable, and (b) that Moore's law has been applied to FPGAs as well as
CPUs.
That misreads my statements and misses a very large portion where I
specifically stated that the new machine would need to be custom instead of
semi-custom. The proposed system was not based on FPGAs; instead it would need
to be based on ASICs engineered with modern technology, much more along the
lines of a DSP. The primary gains actually come from the larger wafers in use
now, along with transistor shrinkage. Combined, these have approximately kept
the cost in line with Moore's law, and the benefits of custom engineering
account for the rest. As for the exact details of how I did the calculations:
I assumed Moore's law for speed, and an additional 4x improvement from custom
chips instead of off-the-shelf parts. To verify the calculations I also redid
them assuming DSPs that should be capable of processing the data (specifically
from TI), and I came to a cost within a couple of orders of magnitude, although
the power consumption would be substantially higher.
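
For concreteness, here is a minimal back-of-the-envelope sketch of this kind of
scaling estimate. The doubling period and the custom-silicon gain are
assumptions (the exact factors used in the original calculation are not
stated), so the output carries the same "give or take a couple of orders of
magnitude" margin attached to the ~2^70 figure elsewhere in this thread.

import math

# Assumed inputs; illustrative only, not the original figures.
work_1998_log2 = 55      # ~2^55 DES operations in 72 hours in 1998
years = 7                # 1998 -> 2005
doubling_years = 1.5     # assumed Moore's-law doubling period
budget_factor = 2        # $250,000 semi-custom -> $500,000 custom
custom_gain = 4          # assumed gain of full-custom ASICs over semi-custom

scale = 2 ** (years / doubling_years) * budget_factor * custom_gain
estimate_log2 = work_1998_log2 + math.log2(scale)
print(f"Estimated work in the same 72 hours: ~2^{estimate_log2:.1f}")
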
   Joe 



Re: SHA1 broken?

2005-02-18 Thread Joseph Ashwood
- Original Message - 
From: Dave Howe [EMAIL PROTECTED]
Sent: Thursday, February 17, 2005 2:49 AM
Subject: Re: SHA1 broken?


Joseph Ashwood wrote:
  I believe you are incorrect in this statement. It is a matter of public
record that RSA Security's DES Challenge II was broken in 72 hours by a
$250,000 semi-custom machine; for the sake of solidity, let's assume they used
2^55 work to break it. Moving to a completely custom design, bumping the cost
up to $500,000, and moving forward 7 years delivers ~2^70 work in 72 hours
(give or take a couple of orders of magnitude). This puts 2^69 work well
within the realm of realizable breaks, assuming your attackers are smallish
businesses; if your attackers are large businesses with substantial resources,
the break can be assumed in minutes if not seconds.

2^69 is completely breakable.
   Joe
  It's fine assuming that Moore's law will hold forever, but without that you
can't really extrapolate a future tech curve. With *today's* technology, you
would have to spend an appreciable fraction of the national budget to get a
one-per-year break, and that doesn't mean anything that has been hashed with
SHA-1 can be considered breakable (though it would allow you to, for example,
forge a digital signature given an existing one).
  This of course assumes that the break doesn't match the criteria from the
previous breaks by the same team - i.e., that you *can* create a collision,
but you have little or no control over the plaintext for the colliding
elements - there is no way to know, as the paper hasn't been published yet.
I believe you substantially misunderstood my statements: 2^69 work is doable
_now_. 2^55 work was performed in 72 hours in 1998; scaling forward the 7 years
to the present (and hence through known data) leads to a situation where 2^69
work is achievable today in a reasonable timeframe (3 days), assuming
reasonable quantities of available money ($500,000 US). There is no guessing
about what the future holds for this; the 2^69 work is NOW.


- Original Message - 
From: Trei, Peter [EMAIL PROTECTED]
To: Dave Howe [EMAIL PROTECTED]; Cypherpunks 
[EMAIL PROTECTED]; Cryptography cryptography@metzdowd.com


Actually, the final challenge was solved in 23 hours, about
1/3 Deep Crack, and 2/3 Distributed.net. They were lucky, finding
the key after only 24% of the keyspace had been searched.
More recently, RC5-64 was solved about a year ago. It took
d.net 4 *years*.
2^69 remains non-trivial.
What you're missing is that Deep Crack was already a year old at the time it
was used for this; I was assuming that the most recent technologies would be
used, so the 1998 point for Deep Crack was the critical point. Also, if you
check the real statistics for RC5-64 you will find that Distributed.net
suffered from a major lack of optimization on the workhorse of the DES cracking
effort (the DEC Alpha processor), even to the point where running the x86 code
in emulation was faster than the native code. Since an Alpha processor had been
the breaking force for DES Challenge I and a factor of ~1/3 for III, this
crippled the performance, with the Alphas running at only ~2% of their optimal
speed and the x86 systems at only about 50%. Based on just this, 2^64 should
have taken only 1.5 years. Add in that virtually the entire Alpha community
pulled out because we had better things to do with our processors (IIRC the
same systems rendered Titanic), leaving Distributed.net effectively sucked dry
of workhorse systems, and a timeframe of 4-6 months is more likely, without any
custom hardware and with rather sad software optimization. Assuming the new
attacks can be pipelined (the biggest problem with the RC5-64 optimizations was
pipeline breaking), it is entirely possible to use modern technology along with
a GaAs substrate to produce chips in the 10-20 GHz range, or about 10x the
speed available to Distributed.net. Add targeted hardware, deep pipelining, and
massive multiprocessing to the mix, and my numbers still hold, give or take a
few orders of magnitude (the 8% of III done by Deep Crack in 23 hours is only a
little over 2 orders of magnitude off, so within acceptable bounds).

2^69 is achievable. It may not be pretty, and it certainly isn't kind to the
security of the vast majority of secure infrastructure, but it is achievable,
and while the cost bounds may have to be shifted, that is achievable as well.

It is still my view that everyone needs to keep a close eye on their hashes
and make sure the numbers add up correctly; it is simply my view now that SHA-1
needs to be put out to pasture, and that the rest of the SHA line needs to be
heavily reconsidered because of its close relation to SHA-1.

The biggest unknown surrounding this is the actual amount of work necessary to
perform the 2^69 operations; if the workload is all XORs, then the costs and
timeframe I gave are reasonably pessimistic.

Re: SHA1 broken?

2005-02-18 Thread Joseph Ashwood
- Original Message - 
From: Joseph Ashwood [EMAIL PROTECTED]
Sent: Friday, February 18, 2005 3:11 AM

[the attack is reasonable]
Reading through the summary, I found a bit of information that means my
estimates of the workload have to be re-evaluated. Page 1: "Based on our
estimation, we expect that real collisions of SHA1 reduced to 70-steps can be
found using today's supercomputers." This is a very important statement for
estimating the real workload. Assuming there is an implicit "in one year" in
there, and assuming BlueGene (slot 1 on the Top 500 list), this represents
22937.6 GHz*years, or slightly over 2^69 clock cycles; I am obviously still
using gigahertz because the available information gives us nothing better to
work from. This clearly indicates that the operations used for the workload
span multiple processor clocks. Performing a gross estimation based on pure
guesswork, I'm guessing that my numbers are actually off by a factor of between
50 and 500; this factor will likely work cleanly into either adjusting the
timeframe or the production cost.
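
As a quick sanity check of that figure (assuming the 22937.6 GHz comes from a
BlueGene/L-class machine of 32,768 processors at 0.7 GHz, which matches the
number exactly), the cycle count works out as follows:

import math

aggregate_ghz = 32768 * 0.7            # assumed BlueGene/L configuration -> 22937.6 GHz
seconds_per_year = 365.25 * 24 * 3600
cycles = aggregate_ghz * 1e9 * seconds_per_year
print(f"{aggregate_ghz:.1f} GHz*years = 2^{math.log2(cycles):.1f} clock cycles")
# prints roughly 2^69.3, i.e. "slightly over 2^69"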

My suggestion, though, to switch away from SHA-1 as soon as reasonable, and to
be prepared to switch hashes very quickly in the future, remains the same. The
march of processor progress is not going to halt, and the advance of
cryptographic attacks will not halt either, which will inevitably squeeze SHA-1
to broken. I would actually argue that the 2^80 strength it should have is
reason enough to begin its retirement; 2^80 has been "strong enough" for a
decade in spite of the march of technology. Under the processor speed
enhancements of the last decade we should already have increased the keylength,
adding at least 5 bits and preferably 8 to our necessary protection profile, to
accommodate dual-core chips running at 20 times the speed, for a total of 40
times the prior speed (I was going to use SPEC data for a better calculation,
but I couldn't immediately find figures for a Pentium Pro 200).
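
The 5-to-8-bit figure follows directly from that assumed 40x speedup; as a
one-line check under the same assumption:

import math
speedup = 2 * 20   # dual core times 20x per-core speed, as assumed above
print(f"log2({speedup}) = {math.log2(speedup):.2f} extra bits of work factor")
# about 5.3 bits, hence "at least 5 bits, preferably 8" with some margin
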
   Joe 




Re: SHA1 broken?

2005-02-17 Thread Joseph Ashwood
- Original Message - 
From: James A. Donald [EMAIL PROTECTED]
Subject: Re: SHA1 broken?


2^69 is damn near unbreakable.
I believe you are incorrect in this statement. It is a matter of public record
that RSA Security's DES Challenge II was broken in 72 hours by a $250,000
semi-custom machine; for the sake of solidity, let's assume they used 2^55 work
to break it. Moving to a completely custom design, bumping the cost up to
$500,000, and moving forward 7 years delivers ~2^70 work in 72 hours (give or
take a couple of orders of magnitude). This puts 2^69 work well within the
realm of realizable breaks, assuming your attackers are smallish businesses; if
your attackers are large businesses with substantial resources, the break can
be assumed in minutes if not seconds.

2^69 is completely breakable.
   Joe 





Re: Dell to Add Security Chip to PCs

2005-02-04 Thread Joseph Ashwood
- Original Message - 
From: Shawn K. Quinn [EMAIL PROTECTED]
Subject: Re: Dell to Add Security Chip to PCs


Isn't it possible to emulate the TCPA chip in software, using one's own
RSA key, and thus signing whatever you damn well please with it instead
of whatever the chip wants to sign? So in reality, as far as remote
attestation goes, it's only as secure as the software driver used to
talk to the TCPA chip, right?
That issue has been dealt with. They do this by initializing the chip at the
production plant and generating the certs there; thus, making your software
TCPA work actually involves faking out the production facility for some chips.
This prevents the re-init that I think I saw mentioned a few messages ago
(unless there's some re-signing process within the chip to allow
back-registering, which is entirely possible but unlikely). It gets even worse
from there, because the TCPA chip actually verifies the operating system on
load, and then the OS verifies the drivers: a solid chain of verification.
Honestly, Kaminsky has the correct idea about how to get into the chip and
break the security; one small unchecked buffer and all the security disappears
forever.
   Joe

Trust Laboratories
Changing Software Development
http://www.trustlaboratories.com 




Re: Mixmaster is dead, long live wardriving

2004-12-11 Thread Joseph Ashwood
- Original Message - 
From: Major Variola (ret) [EMAIL PROTECTED]
Subject: Mixmaster is dead, long live wardriving


At 07:47 PM 12/9/04 -0800, Joseph Ashwood wrote:
If the Klan doesn't have
a right to wear pillowcases what makes you think mixmaster will
survive?
Well, besides the misinterpretation of the ruling, which I will ignore, what
makes you think MixMaster isn't already dead?
OK, substitute wardriving email injection when wardriving is otherwise
legal for Mixmastering, albeit the former is less secure since the
injection lat/long is known.  And you need to use a disposable
Wifi card or at least one with a mutable MAC.
Wardriving is also basically dead. Sure, there are a handful of people who do
it, but the number is so small as to be irrelevant. Checking the logs for my
network (which does run WEP, so the number of attacks may be lower than for an
unprotected network): in the last 2 years someone other than those authorized
has attempted to connect about 1000 times; of those, only 4 made repeated
attempts, and 2 succeeded and hit the outside of the IPSec server (I run WEP as
a courtesy to the rest of the connection attempts). That means that in the last
2 years there have been at most 4 attempts at wardriving my network, and I live
in a population-dense part of San Jose. Wardriving can also be declared dead.
Glancing at the wireless networks visible from my computer, I currently see 6,
all using at least WEP (earlier there were 7, still all encrypted). I regularly
drive down through Los Angeles, and when I have stopped for gas or food and
checked, I rarely see an unprotected network. The WEP message has gotten out,
and the higher-security versions are getting the message out as well. Now all
it will take is a small court ruling that you are responsible for whatever
comes out of your network, and the available wardriving targets will quickly
drop to almost 0.

Wardriving is either dead or dying.
Or consider a Napster-level popular app which includes mixing or
onion routing.
Now we're back to the MixMaster argument. Mixmaster was meant to be a
Napster-level popular app for emailing, but people just don't care about
anonymity. Such an app would need to have a separate primary purpose. The
problem with this is that, as we've seen with Freenet, the extra security
layering can actually undermine the usability, leading to a functional
collapse. If a proper medium can be struck, then such an application can become
popular, but I don't expect this to happen any time soon.
   Joe 



Re: punkly current events

2004-12-11 Thread Joseph Ashwood
- Original Message - 
From: Major Variola (ret) [EMAIL PROTECTED]
Subject: punkly current events


If the Klan doesn't have
a right to wear pillowcases what makes you think mixmaster will
survive?
Well, besides the misinterpretation of the ruling, which I will ignore, what
makes you think MixMaster isn't already dead?

MixMaster is only being used by a small percentage of individuals. Those
individuals like to claim that everyone should send everything anonymously,
when in truth communication cannot happen with anonymity, and trust cannot be
built anonymously. This leaves MixMaster useful only to a small percentage of
normal people, and to those using it to avoid being identified as they
communicate with other known individuals.

The result of this is rather the opposite of what MixMaster is supposed to
create: a small group to investigate for any actions which are illegal, or
deemed worth investigating. In fact it is arguable that for a new face it is
probably easier to get away with the actions in question by sending the
information in the clear to their compatriots than by using MixMaster, simply
because being part of the group using MixMaster immediately flags them as
potential problems.

In short, except for those few people who have some use for MixMaster,
MixMaster was stillborn. I'm not arguing about whether this is the way things
should have happened, but that is the way things happened.
   Joe 






Re: A National ID: AAMVA's Unique ID

2004-06-18 Thread Joseph Ashwood
- Original Message - 
From: John Gilmore [EMAIL PROTECTED]
[EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Thursday, June 17, 2004 10:31 AM
Subject: Re: A National ID: AAMVA's Unique ID


  The solution then is obvious, don't have a big central database. Instead
  use a distributed database.

 Our favorite civil servants, the Departments of Motor Vehicles, are about
 to do exactly this to us.

 They call it Unique ID and their credo is: One person, one license,
 one record.  They swear that it isn't national ID, because national
 ID is disfavored by the public.  But it's the same thing in
 distributed-computing clothes.

I think you misunderstood my point. My point was that it is actually
_easier_, _cheaper_, and more _secure_ to eliminate all the silos. There is no
reason for the various silos, and there is even less reason to tie them
together. My entire point was to put my entire record on my card. This allows
faster look-up (O(1) time versus O(lg(n))), greater security (I control access
to my record), it's cheaper (the cards have to be bought anyway), it's easier
(I've already done most of the work on defining them), and administration is
easier (no one has to care about duplication).
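
As a toy illustration of the look-up point only (the record layout and sizes
here are hypothetical, not the actual card format):

import bisect

# Central-database model: locate one record among n by searching an index,
# O(lg n) probes against a remote system.
central_index = list(range(1_000_000))            # stand-in for n license numbers
pos = bisect.bisect_left(central_index, 424_242)

# Card-resident model: the whole record travels with the holder, O(1) access.
card_record = {"name": "J. Doe", "class": "C", "expires": "2008-06-30"}
print(central_index[pos], card_record["expires"])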

 This sure smells to me like national ID.

I think they are drawing the line a bit finer than either of us would like.
They don't call it a national ID because a national ID would be run by the
federal government; being run instead by state governments, it is a state ID,
linked nationally.

As I said in the prior one, I disagree with any efforts to create forced ID.

 This, like the MATRIX program, is the brainchild of the federal
 Department of inJustice.  But those wolves are in the sheepskins of
 state DMV administrators, who are doing the grassroots politics and
 the actual administration.  It is all coordinated in periodic meetings
 by AAMVA, the American Association of Motor Vehicle Administrators
 (http://aamva.org/).  Draft bills to join the Unique ID Compact, the
 legally binding agreement among the states to do this, are already
 being circulated in the state legislatures by the heads of state DMVs.
 The idea is to sneak them past the public, and past the state
 legislators, before there's any serious public debate on the topic.

 They have lots of documents about exactly what they're up to.  See
 http://aamva.org/IDSecurity/.  Unfortunately for us, the real
 documents are only available to AAMVA members; the affected public is
 not invited.

 Robyn Wagner and I have tried to join AAMVA numerous times, as
 freetotravel.org.  We think that we have something to say about the
 imposition of Unique ID on an unsuspecting public.  They have rejected
 our application every time -- does this remind you of the Hollywood
 copy-prevention standards committees?  Here is their recent
 rejection letter:

   Thank you for submitting an application for associate membership in AAMVA.
   Unfortunately, the application was denied again. The Board is not clear as
   to how FreeToTravel will further enhance AAMVA's mission and service to our
   membership. We will be crediting your American Express for the full amount
   charged.

   Please feel free to contact Linda Lewis at (703) 522-4200 if you would like
   to discuss this further.

   Dianne
   Dianne E. Graham
   Director, Member and Conference Services
   AAMVA
   4301 Wilson Boulevard, Suite 400
   Arlington, VA 22203
   T: (703) 522-4200 | F: (703) 908-5868
   www.aamva.org http://www.aamva.org/

 At the same time, they let in a bunch of vendors of high security ID
 cards as associate members.

Well then, create a high-security ID card company and build it on the
technology I've talked about. It's fairly simple: file the paperwork to create
an LLC with you and Robyn; the LLC acquires a website, which can be co-located
at your current office location; the website talks about my technology and how
it allows the unique and secure identification of every individual, blah, blah,
blah; and get a credit card issued in the correct name. They'll almost
certainly let you in; you'll look and smell like a valid alternative (without
lying, because you could certainly offer the technology). If you really want to
make it look good, I'm even willing to work with you on filing a patent,
something that they'd almost certainly appreciate.

 AAMVA, the 'guardians' of our right to travel and of our identity
 records, doesn't see how listening to citizens concerned with the
 erosion of exactly those rights and records would enhance their
 mission and service.

Of course it won't; their mission and service is to offer the strongest
identity link possible in the ID cards issued nation-wide, and as such the
citizens' course of action has to be to govern the states issuing these
identification papers. However, if you offer them technology that actually
makes their mission and service cheaper, more effective, and, as a side
benefit, better for their voters. Besides, if you can't beat them (you 


Re: [cdr] Re: Digital cash and campaign finance reform

2003-09-10 Thread Joseph Ashwood
- Original Message - 
From: Tim May [EMAIL PROTECTED]
Subject: [cdr] Re: Digital cash and campaign finance reform


 There are too many loopholes to close.

I think that's the smartest thing any one of us has said on this topic. 
Joe



Re: Re: An attack on paypal -- secure UI for browsers

2003-06-12 Thread Joseph Ashwood
- Original Message - 
From: Anonymous [EMAIL PROTECTED]
Subject: CDR: Re: An attack on paypal -- secure UI for browsers


 You clearly know virtually nothing about Palladium.

Actually, a properly designed Palladium would be little more than a smart card
welded to the motherboard. As currently designed, it is a complete second
system that is allowed to take over the main processor. It has a few aspects of
what it should be, but not many. It does include the various aspects of the
smart card, but it also makes room for those aspects to take over the main
system; properly designed, this would not be an option. Of course, properly
designed, it could also be a permanently attached $1 smart card that internally
hangs off the USB controller, instead of a mammoth undertaking.

I still stand by my statement: arbitrarily trusting anyone to write a secure
program simply doesn't work, regardless of how many times MS says "trust us".
Any substantially educated person should likewise be prepared either to trust a
preponderance of evidence or to perform their own examination; neither of these
options is available. The information available does not cover the technical
details; in fact, their "Technical FAQ" actually contains the following:
Q: Does this technology require an online connection to be used?

A: No.

That is just so enlightening, and is about as far from a useful answer as
possible.


 NCAs do not have
 complete access to private information.  Quite the opposite.  Rather,
 NCAs have the power to protect private information such that no other
 software on the machine can access it.  They do so by using the Palladium
 software and hardware to encrypt the private data.  The encryption is
 done in such a way that it is sealed to the particular NCA, and no other
 software is allowed to use the Palladium crypto hardware to decrypt it.

This applies only under the condition that the software in Palladium is
perfectly secure. Again I point to the issues with ActiveX, where a wide
variety of holes have been found; I point to the newest MS operating system,
which (has it even been out a month yet?) already has a security patch
available, in spite of their "secure by default" process. Again, I don't
believe this is because MS is inherently bad; it is because writing secure
programs is extremely difficult, and MS just has the most feature bloat, so
they have the most problems. If the Palladium software is actually secure
(unlikely), then there is the issue of how the (foolishly trusted) NCAs are
determined to be the same; this is an easy problem to solve if no one ever adds
features, but a hard one where the program evolves. Once MS shows the solution
for this, I will point to the same information and show you a security hole.

 In the proposed usage, an NCA associated with an ecommerce site would seal
 the data which is used by the user to authenticate to the remote site.

After running unattended on your computer, a <sarcasm>brilliant</sarcasm>
idea; hasn't anyone learned?

 The authentication data doesn't actually have to be a certificate with
 associated key, but that would be one possibility.  Only NCAs signed by
 that ecommerce site's key would be able to unseal and access the user's
 authentication credentials.  This prevents rogue software from stealing
 them and impersonating the user.

Not in the slightest: a single compromise of a single "ecommerce site"
(remember, they're trusted) will remove all this pretend security. Let's use a
particularly popular example on here right now, www.e-go1d.com: they could
easily apply to be an ecommerce site; they collect money, they offer a service,
so clearly they are an ecommerce site. Are you really gullible enough to
believe that they won't do everything in their power to exploit the data
transfer problem above, as well as any other holes in Palladium? I should hope
not.


 Seriously, have you read any
 of the documents linked from http://www.microsoft.com/resources/ngscb/?

Yes, I have; in fact, at this point I think it is safe to say that you have
not, or that you didn't understand the implications of the small amount of
information it actually contains.
Joe



Re: Re: An attack on paypal -- secure UI for browsers

2003-06-10 Thread Joseph Ashwood
- Original Message - 
From: Anonymous [EMAIL PROTECTED]
Subject: CDR: Re: An attack on paypal -- secure UI for browsers


 In short, if Palladium comes with the ability to download site-specific
 DLLs that can act as NCAs

OK, what flavor of crack are you smoking? Because I can tell from here that
it's some strong stuff. Downloading random DLLs that are given complete access
to private information is one of the worst concepts anyone has ever come up
with, even if they are signed by a trusted source. Just look at the
horrifically long list of issues with ActiveX; even with Windows XP (which
hasn't been around that long) you're already looking at more than half a dozen,
and IIRC Win95 had about 50. This has less to do with "Windows is bad" than
with "secure programming is hard". Arbitrarily trusting anyone to write a
secure program simply doesn't work, especially when it's something
sophisticated.

Now for the much more fundamental issue with your statement: Palladium will
never download site-specific anything. Palladium is a hardware technology, not
a web browser.

I will refrain from saying Palladium is a bad idea, simply because I see some
potentially very lucrative (for me) options for its use.
Joe



Re: Batter Up! (Was Re: Ex-Intel VP Fights for Detainee)

2003-04-04 Thread Joseph Ashwood
First let me say that I am anti-war. Maybe it is just because I've changed
from being purely a tech player to now owning Trust Laboratories, and so
primarily being a businessman, but I see things slightly differently from
the WSJ.

 http://online.wsj.com/article_print/0,,SB1049616100,00.html
excerpts
 Of course, the largest benefit -- a more stable Mideast -- is huge
 but unquantifiable. A second plus, lower oil prices, is somewhat more
 measurable. (Oil prices fell again yesterday on the prospect of
 victory.) The premium on 11.5 million barrels imported every day by
 the U.S. is a transfer from us to producing countries. Postwar, with
 Iraqi production back in the pipeline and calmer markets, oil prices
 will fall even further. If they drop to an average in the low $20s,
 the U.S. economy will get a boost of $55 billion to $60 billion a
 year.

I don't think a more stable Mideast is the largest benefit. The largest
benefit comes from having a US-friendly government in the Mideast. This has
several advantages, the most important of which are that it provides a stable
center of power for the US in the Mideast and that it provides the US with
priority oil.

The center of power is not currently important, but with the growing
disruption that the Israel-Palestine problem presents, I have a strong
suspicion that military force in the Middle East will become increasingly
necessary. The foundation for this is rather simple to find; it was bin Laden
himself who said something like "until the people of Palestine know safety,
the US will not." To counter this we need only have a friendly country in the
Middle East where we can temporarily position our armaments; this will vastly
reduce the cost of troop movement the next time our presence is felt.

The priority oil is not a current problem, but with the world oil supply
quickly becoming depleted (some estimates put us at only 30 years left), the
availability of a consistent oil supply can be economically justified rather
easily. Not that this will make much difference for your average person, but
the military uses of oil are many.

Militarily, these end benefits are enormous. The interim benefits to the
general populace are substantial as well, but I don't feel they are as
impactful. Already the general populace is beginning to see hybrid cars,
fuel-cell cars are only a few years away, and at least GM and BMW are
experimenting with internal-combustion hydrogen engines (a few years ago BMW
had running experimental 7 Series cars using internal-combustion hydrogen that
travelled parts of Europe). With these advances the general usage of oil is
likely to diminish over the next couple of decades, spurred on by the vastly
increasing cost of purchasing gasoline. There will of course be the necessary,
temporary dip in oil pricing as the Iraqi oil fields are pushed into higher
production. Over time, though, this dip will mysteriously disappear, blamed on
market forces if anyone actually notices.

 But perhaps the best way to look at the economics of the war has been
 suggested by John Cogan. The Hoover Institution economist says the
 war is an investment. The proper question then becomes what resources
 are we willing to invest to achieve peace and stability, and a
 diminished threat from terrorism and terrorist-supporting states. At
 1% of GDP, the war looks like a bargain.

I very much agree with John Cogan that this war is an investment. I disagree,
though, with the WSJ's conclusion that it is an investment in the stability of
the Middle East and in ending Iraqi containment. Instead, I believe it is an
investment in US stability and military capability. As such it will pay off
enormously, but I believe the costs to be far in excess of helping the Middle
East address the Israel problem diplomatically, which would cost less,
undermine much of the terrorist activity, make the US look like more of a
beneficial monopoly, and certainly put us in better favor throughout the
Middle East.

Before anyone feels free to jump on me about this, I would like to remind
everyone that I am anti-war. I believe that war should only be used in
situations where it is truly unavoidable.
Joe

Trust Laboratories
http://www.trustlaboratories.com



Digital Certificates

2003-02-19 Thread Joseph Ashwood
I was just wondering if anyone has a digital certificate issuing system I
could get a few certificates issued from. Trust is not an issue since these
are development-only certs, and won't be used for anything except testing
purposes.

The development is for an open source PKCS #11 test suite.
Joe

Trust Laboratories
http://www.trustlaboratories.com




Re: Re: Digital Certificates

2003-02-19 Thread Joseph Ashwood
- Original Message -
From: Eric Murray [EMAIL PROTECTED]
Subject: CDR: Re: Digital Certificates


 On Tue, Feb 18, 2003 at 01:22:21PM -0800, Joseph Ashwood wrote:
  I was just wondering if anyone has a digital certificate issuing system I
  could get a few certificates issued from. Trust is not an issue since these
  are development-only certs, and won't be used for anything except testing
  purposes.

 Whenever I need some test certs I use openssl to generate them.
 (Or an ingrian box, but not many people have one of those.)
 There are instructions in the openssl docs. For test purposes
 you don't need openca; it's only needed if you want to
 issue a lot of certs automagically.

Thank you for the input. I think I've got that working well enough to do it.
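
For anyone following along, here is a minimal sketch of generating a throwaway
self-signed test certificate. It uses Python's third-party "cryptography"
package rather than the openssl command line mentioned above; the names and
validity period are arbitrary, and the result is suitable for testing only.

import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "test.invalid")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)                      # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=30))
    .sign(key, hashes.SHA256())
)
with open("test_cert.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))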


  The development is for an open source PKCS #11 test suite.

 Let me know when its done, I could use it.

The next hurdle I have to overcome is getting a reference PKCS #11 module,
although this shouldn't take too long if I can ever get the Gnu PKCS #11 to
compile.

I'll make sure I tell you when it's done.
Joe




Re: Re: Shuttle Diplomacy

2003-02-01 Thread Joseph Ashwood
- Original Message -
From: Thomas Shaddack [EMAIL PROTECTED]
To: Harmon Seaver [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Saturday, February 01, 2003 4:42 PM
Subject: CDR: Re: Shuttle Diplomacy


[snip conspiracy theory]
 Especially in this case, I'd bet my shoes on Murphy; Columbia was an old
 lady that had her problems even before the launch itself. I'd bet on
 something stupid, like loosened tiles or computer malfunction (though more
 likely the tiles, as the computers are backed up). Remember Challenger,
 where the fault was a stupid O-ring.

One of the current theories floating around has to do with a piece of debris
that flew off the booster rocket during take-off and collided with the left
wing (where the problems began). The video of the take-off was reviewed in
great detail and the strike was determined to be innocent, but considering the
proximity of the problems to the debris impact, there appears to be at least
something worth investigating.
Joe

Trust Laboratories
http://www.trustlaboratories.com





Re: Re: Secure voice app: FEATURE REQUEST: RECORD IPs

2003-01-27 Thread Joseph Ashwood
- Original Message -
From: Harmon Seaver [EMAIL PROTECTED]
 On Mon, Jan 27, 2003 at 08:23:15AM -0800, Major Variola (ret) wrote:
  The versions of all the secure phones I've evaluated needed this
  feature:
  a minimal answering machine.  With just the ability to record IPs of
  hosts that
  tried to call.
 
  (A local table can map these to your friends or their faces.
  Of course, this table should be encrypted when not in use.)

 Pretty hard to do if people are using dialup. Or even DSL, unless they run a
 Linux box they don't ever reboot -- although I've found my DSL IP changing
 sometimes on its own, and with no rhyme or reason. Cable is a little more
 stable; when I had a cable modem it didn't change IP unless I shut off the
 modem for a while, and not even always then.

The obvious solution is then to take it one step further: rebuild the protocol
so that there is a cryptographic identifier (probably a public key, hopefully
ECC to save space). In a fully developed system that identifier could also be
used to make the call in the first place. Admittedly this is unlikely to happen
for quite some time, but if people start asking for it, they'll start
considering it.
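
A minimal sketch of that kind of identifier, using the third-party Python
"cryptography" package; the curve choice and the truncation to 16 hex digits
are arbitrary illustrations, not a proposal for the actual protocol:

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Each phone generates a long-term ECC keypair once.
private_key = ec.generate_private_key(ec.SECP256R1())
public_der = private_key.public_key().public_bytes(
    serialization.Encoding.DER,
    serialization.PublicFormat.SubjectPublicKeyInfo,
)

# The caller's stable identifier is a fingerprint of the public key,
# so it survives dial-up and DSL IP address changes.
caller_id = hashlib.sha256(public_der).hexdigest()[:16]
print("caller identifier:", caller_id)
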
Joe

Trust Laboratories
http://www.trustlaboratories.com




Re: Clarification of challenge to Joseph Ashwood:

2002-11-03 Thread Joseph Ashwood
Sorry, I didn't bother reading the first message, and I won't bother reading
any of the messages further along in this thread either. Kong lacks critical
functionality and is fatally insecure for a wide variety of uses; in short, it
is beyond worthless, ranging into being a substantial risk to the security of
any person or group that makes use of it.

- Original Message -
From: James A. Donald [EMAIL PROTECTED]
Subject:  Clarification of challenge to Joseph Ashwood:


 Joseph Ashwood:
   So it's going to be broken by design. These are critical
   errors that will eliminate any semblance of security in
   your program.

 James A. Donald:
   I challenge you to fool my canonicalization algorithm by modifying a
   message so as to change the apparent meaning while preserving the
   signature, or by producing a message that verifies as signed by me while in
   fact being a meaningfully different message from any that was genuinely
   signed by me.

That's easy; remember that you didn't limit the challenge to text files. It
should be a fairly simple matter to create a JPEG file containing a number of
0xA0 and 0x20 bytes. By simply swapping the values of those bytes one can create
a file that will pass your verification, but will obviously be corrupt. Your
canonicalization is clearly and fatally flawed.
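
A minimal sketch of the tampering described above; the canonicalization rule
that folds 0xA0 into 0x20 is assumed here for illustration, and Kong's actual
rule may differ.

def canonicalize(data: bytes) -> bytes:
    """Assumed canonicalization: fold non-breaking spaces (0xA0) into
    plain spaces (0x20) before hashing/signing."""
    return data.replace(b"\xa0", b"\x20")

def tamper(data: bytes) -> bytes:
    """Swap 0x20 and 0xA0 bytes.  For a binary format such as JPEG this
    corrupts the file, yet the canonical form -- and so the signature --
    is unchanged."""
    return bytes(0xA0 if b == 0x20 else (0x20 if b == 0xA0 else b) for b in data)

original = bytes([0xFF, 0xD8, 0x20, 0x41, 0xA0, 0x42, 0xFF, 0xD9])  # toy "JPEG"
forged = tamper(original)

assert forged != original                              # the bytes differ
assert canonicalize(forged) == canonicalize(original)  # the signature still verifies
print("tampered file verifies under the assumed canonicalization")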

 Three quarters of the user hostility of other programs comes
 from their attempt to support true names, and the rest comes
 from the cleartext signature problem.  Kong fixes both
 problems.

Actually Kong pretends the first problem doesn't exist, and corrects the
second one in such a way as to make it fatally broken.

  Joseph Ashwood must produce a message that is meaningfully
  different from any of the numerous messages that I have sent
  to cypherpunks, but which verifies as sent by the same person
  who sent past messages.

 Thus for Kong to be broken one must store a past message from
 that prolific poster supposedly called James Donald, in the Kong
 database, and bring up a new message hacked up by Joseph
 Ashwood, and have Kong display in the signature verification
 screen

To verify that I would of course have to download and install Kong,
something that I will never do; I don't install software that I already know
is broken and that fails to address even the most basic of problems.
Joe




Re: What email encryption is actually in use?

2002-09-30 Thread Joseph Ashwood

- Original Message -
From: James A. Donald [EMAIL PROTECTED]
 What email encryption is actually in use?

In my experience PGP is the most used.

 When I get a PGP encrypted message, I usually cannot read it --
 it is sent to my dud key or something somehow goes wrong.

Then you are obviously using PGP wrong. When you chose your 768-bit key in
1996 (I checked the key servers) you should have considered the actual
lifetime that the key was going to have. In 1996 a 768-bit key was
considered borderline secure, and it was just about time to retire them.
Instead of looking at this and setting an expiration date on your key, you
chose to make it live forever. Your other alternative would have
been to revoke that key before you retired it. You made critical mistakes,
and you blame it on PGP.

As to its dependability: I've seen two problems when someone could not
decrypt a PGP message; 1) they shouldn't have access to it (someone else's
key, forgotten passphrase, etc.), 2) they didn't have any clue how to use PGP --
these people generally have trouble turning on their computer. On rare
occasions there will be issues with versions, but in my experience these
are exceptionally rare.

 Kong encrypted messages usually
 work, because there is only one version of the program, and key
 management is damn near non existent by design, since my
 experience as key manager for various companies shows that in
 practice keys just do not get managed. After I release the next
 upgrade, doubtless fewer messages will work.

Maybe you should have considered designing the system so that it could be
upgraded. A properly designed system can detect when an incompatible version
was used for encryption, and can inform the user of the problem.
Additionally, I think there is one core reason why Kong decryptions always
work: no one uses it, and without key management it is basically worthless.
Fortunately, because there is no userbase you can change it dramatically for
the next release; maybe this time it'll be worth using.
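
A minimal sketch of the version-detection idea mentioned above; the magic
bytes and layout are invented for illustration and are not taken from Kong or
PGP.

import struct

MAGIC = b"KNG"           # hypothetical file magic
CURRENT_VERSION = 2      # hypothetical current format version

def wrap(ciphertext: bytes) -> bytes:
    """Prefix the ciphertext with a magic tag and a format version."""
    return MAGIC + struct.pack(">H", CURRENT_VERSION) + ciphertext

def unwrap(blob: bytes) -> bytes:
    """Refuse clearly, instead of failing mysteriously, on unknown input."""
    if not blob.startswith(MAGIC):
        raise ValueError("not a recognized encrypted message")
    (version,) = struct.unpack(">H", blob[3:5])
    if version > CURRENT_VERSION:
        raise ValueError(
            f"message was written by format version {version}; "
            f"this program only understands up to {CURRENT_VERSION} -- please upgrade"
        )
    return blob[5:]

print(unwrap(wrap(b"hello")))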

 The most widely deployed encryption is of course that which is
 in outlook -- which we now know to be broken, since
 impersonation is trivial, making it fortunate that seemingly no
 one uses it.

If you did some research, you'd find that it is called S/MIME; it is a
standard -- a broken standard, but a standard (admittedly Outlook implemented
it poorly, and that is a major source of the breakage). The only non-standard
encryption Outlook uses is in the file storage, which has nothing to do with
email.

 Repeating the question, so that it does not get lost in the
 rant.  To the extent that real people are using digitally
 signed and or encrypted messages for real purposes, what is the
 dominant technology, or is use so sporadic that no network
 effect is functioning, so nothing can be said to be dominant?

The two big players are PGP and S/MIME.


 The chief barrier to use of outlook's email encryption, aside
 from the fact that is broken, is the intolerable cost and
 inconvenience of certificate management.

Actually the chief barrier is psychological: people don't feel they should
side with the criminals by using encryption. Certificate management is
actually quite easy and cheap. It is the mistakes of people who lack any
understanding of how the system actually works that make it expensive and
inconvenient. The same applies to PGP.

 We have tools to
 construct any certificates we damn well please,

The same applies everywhere; in fact in your beloved Kong the situation is
worse, because the identities can't be managed.

 though the root
 signatures will not be recognized unless the user chooses to
 put them in.

That's right, blame your own inadequacies on everyone else; that seems to be
the standard American way now.

 Is it practical for a particular group, for
 example a corporation or a conspiracy, to whip up its own
 damned root certificate, without buggering around with
 verisign?

Of course it is; in fact there are about 140 root certificates that Internet
Explorer recognises, and the majority of these have absolutely nothing to do
with Verisign. Getting a new root into the deployed systems is a bit more problematic.

 (Of course fixing Microsoft's design errors is
 never useful, since they will rebreak their products in new
 ways that are more ingenious and harder to fix.)

And this has nothing whatsoever to do with root certificates.

 I intended to sign this using Network Associates command line
 pgp, only to discover that pgp -sa file produced unintelligible
 gibberish, that could only be made sense of by pgp, so that no
 one would be able to read it without first checking my
 signature.

Which would of course demonstrate once more that you have no clue how to use
PGP. It also demonstrates what is probably your primary source of "I can't
decrypt it": you are using a rather old version of PGP. While the rest of the
world has updated PGP to try to remain secure, you have managed to forgo all
semblance of security, in favor of 

Re: Re: Startups, Bubbles, and Unemployment

2002-08-25 Thread Joseph Ashwood

- Original Message -
From: Eric Cordian [EMAIL PROTECTED]

Although I appear to have been the final catalyst for the discussion of
unemployment, I agree with pretty much everything Eric Cordian said. In fact
my current lack of work has little to do with a lack of employment:
I am officially employed as a Substitute Teacher in the Gilroy Unified
School District (Gilroy, CA), but due to summer I am lacking in work. If you
simply need a job and have a bachelor's degree (or higher), sub teaching
pays -- barely enough to live on, but it does pay. Additionally I have several
other factors working in my favor: of course I do consulting when available,
I have some investments from when I was more enjoyably employed, but most
importantly I'm using the spare time I do have (sub teaching doesn't take
much actual work) to actually do #4:
 4.  Don't expect anyone to pay you to sit around and figure out what the
 Next Big Thing is going to be.

At which point I will attempt to get funding (if needed), work to make the
world a better place, and most importantly make a great deal of money for
the people funding the product/productline.

A quick word of warning, in California it takes on the order of 6 months to
get the piece of paper that says you can sub teach, so be prepared for a bit
of a wait, find a job at McDonald's or whatever for a while.
Joe




Re: Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Joseph Ashwood

- Original Message -
From: Ben Laurie [EMAIL PROTECTED]
  The important part for this, is that TCPA has no key until it has an owner,
  and the owner can wipe the TCPA at any time. From what I can tell this was
  designed for resale of components, but is perfectly suitable as a point of
  attack.

 If this is true, I'm really happy about it, and I agree it would allow
 virtualisation. I'm pretty sure it won't be for Palladium, but I don't
 know about TCPA - certainly it fits the bill for what TCPA is supposed
 to do.

I certainly don't expect many people to believe me simply because I say it
is so. Instead I'll supply a link to the authority on TCPA, the 1.1b
specification; it is available at
http://www.trustedcomputing.org/docs/main%20v1_1b.pdf . There are other
documents; unfortunately the main spec gives substantial leeway, and I
haven't had time to read the others (I haven't fully digested the main spec
yet either). I encourage everyone that wants to decide for themselves to read
the spec, all 332 pages of it. If you reach different
conclusions than I have, feel free to comment; I'm sure there are many
people on these lists that would be interested in the justification for either
position.

Personally, I believe I've processed enough of the spec to state that TCPA
is a tool, and like any tool it has both positive and negative aspects.
Provided the requirement to be able to turn it off remains (and, for my
preference, they should add a requirement that the motherboard continue
functioning even when the TCPA module(s) is/are physically removed from
the board), the current spec does seem to lean towards being what is
advertised: primarily a tool for the user. Whether this will remain true
in the version 2.0 that is in the works, I cannot say, as I have no access to
it, although if someone is listening with an NDA nearby, I'd be more than
happy to review it.
Joe




Re: Overcoming the potential downside of TCPA

2002-08-15 Thread Joseph Ashwood

- Original Message -
From: Ben Laurie [EMAIL PROTECTED]
 Joseph Ashwood wrote:
  There is nothing stopping a virtualized version being created.

 What prevents this from being useful is the lack of an appropriate
 certificate for the private key in the TPM.

Actually that does nothing to stop it. Because of the construction of TCPA,
the private keys are registered _after_ the owner receives the computer, and
this is the window of opportunity for that attack as well. The worst case for
the cost of this is to purchase an additional motherboard (IIRC Fry's has them
as low as $50), giving the ability to present a legitimate purchase. The
virtual private key is then created and registered using the credentials
borrowed from the second motherboard. Since TCPA doesn't allow for direct
remote queries against the hardware, the virtual system will actually have
first shot at the incoming data. That's the worst case. The expected case:
you pay a small registration fee claiming that you accidentally wiped your
TCPA. The best case: you claim you accidentally wiped your TCPA, and they
charge you nothing to remove the record of your old TCPA and replace it
with your new (virtualized) TCPA. So at worst this will cost $50. Once
you've got a virtual setup, that virtual setup (with all its associated
purchased rights) can be replicated across an unlimited number of computers.

The important part for this, is that TCPA has no key until it has an owner,
and the owner can wipe the TCPA at any time. From what I can tell this was
designed for resale of components, but is perfectly suitable as a point of
attack.
Joe




Overcoming the potential downside of TCPA

2002-08-14 Thread Joseph Ashwood

Lately on both of these lists there has been quite some discussion about
TCPA and Palladium, the good, the bad, the ugly, and the anonymous. :)
However there is something that is very much worth noting, at least about
TCPA.

There is nothing stopping a virtualized version being created.

There is nothing that stops say VMWare from synthesizing a system view that
includes a virtual TCPA component. This makes it possible to (if desired)
remove all cryptographic protection.

Of course such software would need to be sold as a development tool, but
we all know what would happen. Tools like VMWare have been developed by
others, and as I recall didn't take all that long to do. As such they can be
anonymously distributed, and can almost certainly be stored entirely on a
boot CD, using the floppy drive to store the keys (although floppy drives
are no longer a cool thing to have in a system). Boot from the CD and it runs
a small kernel that virtualizes and allows debugging of the TPM/TSS, which
allows the viewing, copying and replacement of private keys on demand.

Of course this is likely to quickly become illegal, or may already be, but that
doesn't stop the possibility of creating such a system. For details on how
to create this virtualized TCPA please refer to the TCPA spec.
Joe




Re: Is TCPA broken?

2002-08-13 Thread Joseph Ashwood

I need to correct myself.
- Original Message -
From: Joseph Ashwood [EMAIL PROTECTED]

 Suspiciously absent though is the requirement for symmetric encryption (page
 4 is easiest to see this). This presents a potential security issue, and
 certainly a barrier to its use for non-authentication/authorization
 purposes. This is by far the biggest potential weak point of the system. No
 server designed to handle the quantity of connections necessary to do this
 will have the ability to decrypt/sign/encrypt/verify enough data for the
 purely theoretical universal DRM application.

I need to correct this: DES and 3DES are requirements; AES is optional. This
functionality appears to be in the TSS. However I can find very few
references to its usage, and all of those seem to be thoroughly wrapped in
numerous layers of SHOULD and MAY. Since this is solely the realm of the TSS
(which had its command removed July 12, 2001, making this certainly
incomplete), it is only accessible through a few commands (I won't bother with
VerifySignature). However, looking at TSS_Bind, it says explicitly on page
157, "To bind data that is larger than the RSA public key modulus it is the
responsibility of the caller to perform the blocking", indicating that the
expected implementation is RSA only. The alternative is wrapping the key,
but that is clearly targeted at using RSA to encrypt a key. The Identity
commands appear to use a symmetric key, but deal strictly with
TPM_IDENTITY_CREDENTIAL. Regardless, the TSS is a software entity (although
it may be assisted by hardware), and this in and of itself presents some
interesting side-effects on security.
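
To make the caller-side blocking concrete, here is a rough sketch of what
splitting data for an RSA-sized bind might look like; the 2048-bit modulus,
the padding-overhead figure and the tpm_bind stand-in are assumptions for
illustration, not the actual TSS API.

MODULUS_BITS = 2048          # assumed key size for the illustration
PADDING_OVERHEAD = 42        # rough per-block padding overhead in bytes (assumption)
BLOCK_SIZE = MODULUS_BITS // 8 - PADDING_OVERHEAD

def tpm_bind(block: bytes) -> bytes:
    """Stand-in for the real bind operation; here it just marks the block."""
    assert len(block) <= BLOCK_SIZE
    return b"BOUND[" + block + b"]"

def bind_large(data: bytes) -> list[bytes]:
    """The caller performs the blocking: split the data into chunks that
    each fit under the RSA public key modulus, then bind each chunk."""
    return [tpm_bind(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

blocks = bind_large(b"x" * 1000)
print(len(blocks), "blocks")   # 1000 bytes -> 5 blocks of <= 214 bytes each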
Joe




Is TCPA broken?

2002-08-13 Thread Joseph Ashwood

- Original Message -
From: Mike Rosing [EMAIL PROTECTED]
 Are you now admitting TCPA is broken?

I freely admit that I haven't made it completely through the TCPA
specification. However it seems to be, at least in effect although not
exactly, a motherboard bound smartcard.

Because it is bound to the motherboard (instead of the user) it can be used
for various things, but at the heart it is a smartcard. Also because it
supports the storage and use of a number of private RSA keys (no other type
supported) it provides some interesting possibilities.

Because of this I believe that there is a core that is fundamentally not
broken. It is the extensions to this concept that pose potential breakage.
In fact looking at Page 151 of the TCPA 1.1b spec it clearly states (typos
are mine) the OS can be attacked by a second OS replacing both the
SEALED-block encryption key, and the user database itself. There are
measures taken to make such an attack cryptographically hard, but it
requires the OS to actually do something.

Suspiciously absent though is the requirement for symmetric encryption (page
4 is easiest to see this). This presents a potential security issue, and
certainly a barrier to its use for non-authentication/authorization
purposes. This is by far the biggest potential weak point of the system. No
server designed to handle the quantity of connections necessary to do this
will have the ability to decrypt/sign/encrypt/verify enough data for the
purely theoretical universal DRM application.

The second substantial concern is that the hardware is limited in the size
of the private keys, to 2048 bits; a related concern is that it is
additionally bound to SHA-1. Currently these are both
sufficient for security, but in the last year we have seen realistic claims
that 1500-bit RSA may be subject to viable attack (or alternately may not,
depending on who you believe). While attacks on RSA tend to be spread a fair
distance apart, this nevertheless puts 2048-bit RSA fairly close to the
limit of security; it would be much preferable to support 4096-bit RSA from
a security standpoint. SHA-1 is also currently near its limit. SHA-1 offers
2^80 security, a value that it can be argued may be too small for long-term
security.
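
A rough back-of-the-envelope sketch of those margins (my own illustration,
not from the TCPA spec), using the standard GNFS heuristic
L_n[1/3, (64/9)^(1/3)] with the o(1) term ignored, compared against SHA-1's
2^80 birthday bound:

import math

def gnfs_bits(modulus_bits: int) -> float:
    """Approximate log2 of the GNFS work factor for an RSA modulus of the
    given size, ignoring the o(1) term -- a crude estimate only."""
    n_ln = modulus_bits * math.log(2)          # ln(n)
    c = (64 / 9) ** (1 / 3)
    work_ln = c * n_ln ** (1 / 3) * math.log(n_ln) ** (2 / 3)
    return work_ln / math.log(2)               # convert ln -> log2

for bits in (1536, 2048, 4096):
    print(f"RSA-{bits}: roughly 2^{gnfs_bits(bits):.0f} work")
print("SHA-1 collision (birthday bound): roughly 2^80 work")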

For the time being TCPA seems to be unbroken: 2048-bit RSA is sufficient,
and SHA-1 is used as a MAC at the important points. For the future, though, I
believe these choices may prove to be a weak point in the system; for those
that would like to attack the system, these are the prime targets. The
secondary target would be forcing debugging to go unaddressed by the OS,
which, since there is no provision for smartcard-style execution (except in
extremely small quantities, just as in a smartcard), would reveal very nearly
everything (including the data desired).
Joe




Re: Seth on TCPA at Defcon/Usenix

2002-08-11 Thread Joseph Ashwood

- Original Message -
From: AARG! Anonymous [EMAIL PROTECTED]
[brief description of Document Revocation List]

Seth's scheme doesn't rely on TCPA/Palladium.

Actually it does, in order to make it valuable. Without a hardware assist,
the attack works like this:
Hack your software (which is in many ways almost trivial) to reveal its
private key.
Watch the protocol.
Decrypt protocol
Grab decryption key
use decryption key
problem solved

With hardware assist, trusted software, and a trusted execution environment
it (doesn't) work like this:
Hack your software.
DOH! the software won't run
revert back to the stored software.
Hack the hardware (extremely difficult).
Virtualize the hardware at a second layer, using the grabbed private key
Hack the software
Watch the protocol.
Decrypt protocol
Grab decryption key
use decryption key
Once the file is released the server revokes all trust in your client,
effectively removing all files from your computer that you have not
decrypted yet
problem solved? only for valuable files

Of course if you could find some way to disguise which source was hacked,
things change.

Now about the claim that MS Word would not have this feature. It almost
certainly would. The reason being that business customers are of particular
interest to MS, since they supply a large portion of the money for Word (and
everything else). Businesses would want to be able to configure their
network in such a way that critical business information couldn't be leaked
to the outside world. Of course this removes the advertising path of
conveniently leaking carefully constructed documents to the world, but for
many companies that is a trivial loss.
Joe




Re: Re: Challenge to TCPA/Palladium detractors

2002-08-11 Thread Joseph Ashwood

- Original Message -
From: Eugen Leitl [EMAIL PROTECTED]
 Can anyone shed some light on this?

Because of the sophistication of modern processors there are too many
variables to be optimized easily, and doing so can be extremely costly.
Because of this diversity, many compilers use semi-random exploration.
Because of this random exploration the compiler will typically compile the
same code into a different executable each time. With small programs it is
likely to find the same end-point, because of the simplicity. The larger the
program, the more points for optimization, so for something as large as, say,
PGP you are unlikely to find the same point twice; however, the performance
is likely to be eerily similar.

There are bound to be exceptions, and sometimes the randomness in the
exploration appears non-existent, but I've been told that some versions of the
DEC GEM compiler used semi-randomness a surprising amount, because it was a
very fast way to narrow down to an approximate best (hence the extremely fast
compilation and execution). It is likely that MS VC uses such techniques.
Oddly, extremely high-level languages don't have as many issues; each command
spans so many instructions that a pretuned set of command instructions will
often provide very close to optimal performance.

I've been told that gcc does not apparently use randomness to any
significant degree, but I admit I have not examined the source code to
confirm or deny this.
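
As an illustration of the kind of semi-random exploration described above
(the pass names and the cost model are invented; real compilers are vastly
more involved), two runs of a simple random search typically end on different
orderings with similar cost:

import random

PASSES = ["inline", "unroll", "vectorize", "schedule", "regalloc",
          "peephole", "licm", "gvn", "dce", "sink"]

def cost(order):
    """Toy cost model: adjacent passes interact, so ordering matters."""
    return sum(abs(hash((a, b))) % 100 for a, b in zip(order, order[1:]))

def random_search(seed, iterations=500):
    """Semi-random exploration: keep shuffling, remember the best ordering."""
    rng = random.Random(seed)
    best = PASSES[:]
    best_cost = cost(best)
    for _ in range(iterations):
        candidate = PASSES[:]
        rng.shuffle(candidate)
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

for seed in (1, 2):
    order, c = random_search(seed)
    print(f"seed {seed}: cost {c}, order {order}")
# Different seeds usually settle on different orderings ("different
# executables") whose costs are close -- the "eerily similar" performance.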
Joe





Re: Closed source more secure than open source

2002-07-06 Thread Joseph Ashwood

- Original Message -
From: Anonymous [EMAIL PROTECTED]

 Ross Anderson's paper at
 http://www.ftp.cl.cam.ac.uk/ftp/users/rja14/toulouse.pdf
 has been mostly discussed for what it says about the TCPA.  But the
 first part of the paper is equally interesting.

Ross Anderson's approximate statements:
Closed Source:
 the system's failure rate has just
 dropped by a factor of L, just as we would expect.

Open Source:
bugs remain equally easy to find.

Anonymous's Statements:
For most programs, source code will be of
 no benefit to external testers, because they don't know how to program.

 Therefore the rate at which (external) testers find bugs does not vary
 by a factor of L between the open and closed source methodologies,
 as assumed in the model.  In fact the rates will be approximately equal.

 The result is that once a product has gone into beta testing and then into
 field installations, the rate of finding bugs by authorized testers will
 be low, decreased by a factor of L, regardless of open or closed source.

I disagree, actually I agree and disagree with both, due in part to the
magnitudes involved. It is certainly true that once Beta testing (or some
semblance of it) begins there will be users that cannot make use of source
code, but what Anonymous fails to realize is that there will be beta testers
that can make use of the source code.

Additionally there are certain tendencies in the open and closed source
communities that Anonymous and Anderson have not addressed in their models.
The most important tendencies are that in closed source beta testing is
generally handed off to a separate division and the original author does
little if any testing, and in open source the authors have a much stronger
connection with the testing, with the authors' duty extending through the
entire testing cycle. These tendencies lead to two very different positions
than generally realized.

First, closed source testing, beginning in the late Alpha testing stage, is
generally done without any assistance from source code, by _anyone_; this
significantly hampers the testing. This has led to observed situations where
QA engineers sign off on products that don't even function, let alone have
close to 0 bugs, with the software engineers believing that because the code
was signed off, it must be bug-free. This is a rather substantial problem.
To address this problem one must actually correct the number of testers for
the ones that are effectively doing nothing. So while L is the extra
difficulty in finding bugs without source code, it is magnified by something
approximating (testers)/(testers not doing anything). It's worth noting that
(testers) >> (testers not doing anything), causing the result K =
L*(testers)/(testers not doing anything) to tend towards infinite values.

In open source we have very much the opposite situation. The authors are
involved in all stages of testing, giving another value. This value is used
to adjust L as before, but the quantities involved are substantially
different. It must be observed, as was done by Anonymous, that there are
testers that have no concept of what source code is, and certainly no idea how
to read it; call these harassers. In addition, though, there are also testers
who read source code, and even the authors themselves are doing testing;
call these coders. So in this case K = L*(harassers)/(harassers+coders).
Where it's worth noting that K will now tend towards 0.
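
To make the two multipliers concrete, here is a small sketch that just
evaluates the formulas above; the head-counts and the value of L are invented
purely for illustration.

L = 10  # assumed extra difficulty of finding bugs without source code

# Closed source: K = L * (testers) / (testers not doing anything)
testers, idle = 100, 5          # hypothetical head-counts
k_closed = L * testers / idle

# Open source: K = L * (harassers) / (harassers + coders)
harassers, coders = 60, 40      # hypothetical: testers who can't vs. can read source
k_open = L * harassers / (harassers + coders)

print(f"closed-source multiplier K = {k_closed:.1f}")  # blows up as idle shrinks
print(f"open-source multiplier  K = {k_open:.1f}")     # shrinks towards 0 as coders grow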

It is also very much the case that different projects have different
quantities of testers. In fact as the number of beta testers grows, the
MTBD(iscovery) of a bug must not increase, and will almost certainly
decrease. In this case each project must be treated separately, since
obviously WindowsXP will have more people testing it (thanks to bug
reporting features) than QFighter3
(http://sourceforge.net/projects/qfighter3/ -- the least active development on
SourceForge). This certainly leads to problems in comparison. It is also
worth noting that the actual difficulty in locating bugs is
probably related to the maximum of (K/testers) and the (testers)-th root of K,
meaning that WindowsXP is likely to have a higher ratio of bugs uncovered in
a given time period T than QFighter3. However, due to the complexity of the
comparisons, QFighter3 is likely to have fewer bugs than WindowsXP, simply
because WindowsXP is several orders of magnitude more complex.

So while the belief that source code makes bug hunting easier on everyone
is certainly not purely the case (Anonymous's observation), it is also not
the case that the tasks are equivalent (Anonymous's claim), with the
multiplier in closed source approaching infinity, and in open source tending
towards 0. Additionally, the quantity of testers appears to have more of an
impact on bug-finding than the question of open or closed source. However, as
always, complexity plays an enormous role in the number of bugs available to
find, anybody with a few days programming experience 

Re: Re: maximize best case, worst case, or average case? (TCPA

2002-07-01 Thread Joseph Ashwood

- Original Message -
From: Ryan Lackey [EMAIL PROTECTED]

 I consider DRM systems (even the not-secure, not-mandated versions)
 evil due to the high likelihood they will be used as technical
 building blocks upon which to deploy mandated, draconian DRM systems.

The same argument can be applied to just about any tool.

A knife has a high likelihood of being used in such a manner that it causes
physical damage to an individual (e.g. you cut yourself while slicing your
dinner) at some point in its useful lifetime. Do we declare knives evil?

A hammer has a high likelihood of at some point in its useful life causing
physical damage to both an individual and property. Do we declare hammers
evil?

DRM is a tool. Tools can be used for good, and tools can be used for evil,
but that does not make a tool inherently good or evil. DRM has a place where
it is a suitable tool, but one should not declare a tool evil simply because
an individual or group uses the tool for purposes that have been declared
evil.
Joe




Re: Piracy is wrong

2002-06-29 Thread Joseph Ashwood

Subject: CDR: Piracy is wrong
 This shouldn't have to be said, but apparently it is necessary.

Which is a correct statement, but an incorrect line of thinking. Piracy is
an illegitimate use of a designed-in hole in the security: the ability to
copy. This right to copy for personal use is well founded, and there are
even Supreme Court cases to support it. DRM removes this right, without due
representation, and it is thinking like yours that leads down this poorly
chosen path. The other, much harsher, reality is that DRM cannot
work; all it can do is inconvenience legitimate consumers. There is massive
evidence of this, and you are free to examine it in any way you choose.

 Piracy - unauthorized copying of copyrighted material - is wrong.
 It inherently involves lying, cheating and taking unfair advantage
 of others.  Systems like DRM are therefore beneficial when they help to
 reduce piracy.  We should all support them, to the extent that this is
 their purpose.

 When an artist releases a song or some other creative product to the
 world, they typically put some conditions on it.

These include the expectation that the artist will be paid according to
whatever deal they have signed with their label. Inherent in this deal is
the consumer's right to copy for personal use, and to resell their purchased
copy, as long as all copies that the consumer has made are destroyed. DRM
attempts to revoke this right to personal copying, and resale.

 If you want to listen
 to and enjoy the song, you are obligated to agree to those conditions.
 If you can't accept the conditions, you shouldn't take the creative work.

And if the artist cannot accept the fundamental rights specifically granted,
they should not produce art.

 The artist is under no obligation to release their work.  It is like a
 gift to the world.  They are free to put whatever conditions they like
 on that gift, and you are free to accept them or not.

Last time I checked, the giver is supposed to remove the price tag from the
gift before giving it. By a similar argument, everyone should be happy that
the WTC attack occurred; after all, they were kind enough not to kill anyone
that's still alive. The logic simply doesn't hold.

 If you take the gift, you are agreeing to the conditions.  If you then
 violate the stated conditions, such as by sharing the song with others,
 you are breaking your agreement.  You become a liar and a cheat.

In fact one of the specifically granted rights is the right to share the
music with friends and family, so this has nothing to do with being a liar
and a cheat; it has to do with exercising not just rights, but rights that
have been specifically granted.

 If you take the song without paying for it, you are again receiving this
 gift without following the conditions that were placed on it as part
 of the gift being offered.  You are taking advantage of the artist's
 creativity without them receiving the compensation they required.

Because of that specifically granted right, that copies can be made for
friends and family, it is also a specifically granted right to accept those
copies. So it is merely exercising a specifically granted right. You
clearly have not read or understood the implications and complexities of
your statements, with regard to either logic or the law.

 This isn't complicated.

Apparently it is too complicated for you.

 It's just basic ethics.

It's just basic rights and the exercising of those rights.

 It's a matter of honesty
 and trust.

If the record companies were prepared to trust, why do they employ a
substantial army of lawyers? Why do they pursue every p2p network? Why are
they pushing for DRM? Trust is not a one-way street. The recording labels
have demonstrated that they cannot be trusted in any form; what delusion
makes you think they can be trusted now?

 When someone makes you an offer and you don't find the terms
 acceptable, you simply refuse.

Exactly, I refuse to accept a DRM-limited environment which does not allow
me full ownership of something I purchased.

 You don't take advantage by taking what
 they provide and refusing to do your part.  That's cheating.

No, that's a fundamental misunderstanding of everything involved; from law
to basic logic, you have misunderstood it all.
Joe




Re: RE: Harry Potter released unprotected

2002-06-18 Thread Joseph Ashwood

- Original Message -
From: Lucky Green [EMAIL PROTECTED]
 Joseph Ashwood wrote:
  This looks like just a
  pilot program. Watch the normal piracy channels though, if
  Harry Potter shows up stronger than other releases
  Macrovision will be around a while. But if Harry Potter isn't
  substantially hit by piracy, then you might want to start
  shorting Macrovision, they'll start losing customers.

 I am confused. AFAICT, the majority of movie piracy today takes place
 via DivX from DVD's. How does Macrovision even play a role in this?

In its realistic form, Macrovision has nothing to do with any of it.
However, since it is current industry practice to use Macrovision
copy-protection, Macrovision is of interest. In truth, this isn't even a
question of copy-protection; there's plenty of evidence that none of that
works. Instead this is about a technology and a company: the technology is
the Macrovision copy-protection technology, and the company explicitly
involved is Macrovision. Macrovision makes the bulk of their profits from
this copy-protection technology, and since it is a copy-protection
technology it is of general interest to many cypherpunks, even if not in any
real way (see the other reply regarding picture corrections). Because of
Macrovision's heavy reliance on the copy-protection technology for profits,
an undermining of that critical asset will greatly diminish the value of the
company, and so diminish the stock price. For any other purpose, there's
basically no reason for this thread at all. Hope this helped a bit.
Joe




Re: Harry Potter released unprotected

2002-06-15 Thread Joseph Ashwood

- Original Message -
From: Steve Schear [EMAIL PROTECTED]

 Harry Potter released unprotected

 So, is this just a test or has at least one industry giant decided, as the
 software industry learned long ago, that the cost of copy protection often
 exceeds its value.

I believe it's a test. The studio has determined that Harry Potter has
already made a (sizable) profit, so using it for an experiment is
acceptable. By testing on a big budget target they can now determine if
copy-protection costs exceed value.

 Time to short Macrovision (MVSN, NASDAQ NM)?  In the past year the stock
 has dropped from about $72 to about $14.  I wonder if their $1.00 drop in
 price on today's opening reflects this news?

I don't think so, not yet at least. This looks like just a pilot program.
Watch the normal piracy channels though, if Harry Potter shows up stronger
than other releases Macrovision will be around a while. But if Harry Potter
isn't substantially hit by piracy, then you might want to start shorting
Macrovision, they'll start losing customers.
Joe




Re: CDR: RE: Degrees of Freedom vs. Hollywood Control Freaks

2002-06-05 Thread Joseph Ashwood

- Original Message -
From: [EMAIL PROTECTED]
Subject: Re: CDR: RE: Degrees of Freedom vs. Hollywood Control Freaks


 Ok, somebody correct me if I'm wrong here, but didn't they officially
cease
 production of vinyl pressings several years ago?  As in *all* vinyl
 pressings???

They stopped selling them to the general public, but you only have to stop
by a DJ record shop (as opposed to the consumer shops) to see a wide
selection of vinyl albums. DJs prefer vinyl primarily because it allows beat
matching by hand, scratching, etc. The only disadvantage I know of for vinyl
is that it degrades as it is played; for a DJ this isn't much of a problem,
since tracks have a lifespan that's measured in days or weeks, and the vinyl
becomes useless after a few weeks, which is about how long it lasts at good
quality.
Joe




Re: FC: Hollywood wants to plug analog hole, regulate A-D

2002-06-03 Thread Joseph Ashwood


- Original Message -
From: Neil Johnson [EMAIL PROTECTED]
To: Joseph Ashwood [EMAIL PROTECTED]; [EMAIL PROTECTED]
Sent: Friday, May 31, 2002 6:59 PM
Subject: Re: FC: Hollywood wants to plug analog hole, regulate A-D


 On Sunday 02 June 2002 08:24 pm, Joseph Ashwood wrote:

  The MPAA has not asked that all ADCs be forced to comply, only that those
  in a position to be used for video/audio be controlled by a cop-chip. While
  the initial concept for this is certainly to bloat the ADC to include the
  watermark detection on chip, there are alternatives, and at least one that
  is much simpler to create, as well as more beneficial for most involved
  (although not for the MPAA). Since I'm writing this in text I cannot supply
  a wonderful diagram, but I will attempt anyway. The idea looks somewhat
  like this:

  analog source --->ADC--->CopGate--->digital

  Where the ADC is the same ADC that many of us have seen in undergrad
  electrical engineering, or any suitable replacement. The CopGate is the new
  part, and will not normally be as much of a commodity as the ADC. The
  purpose of the CopGate is to search for watermarks, and if found, disable
  the bus that the information is flowing across; this bus disabling is again
  something that is commonly seen in undergrad EE courses, the complexity is
  in the watermark detection itself.

  The simplest design for the copgate looks somewhat like this (again bad
  diagram):

  in --->|---buffergates---> out
  CopChip --->|

  Where the buffer gates are simply standard buffer gates.

  This overall design is beneficial for the manufacturer because the ADC does
  not require redesign, and may already include the buffergates. In the event
  that the buffer needs to be offchip the gate design is well understood and
  commodity parts are already available that are suitable. For the consumer
  there are two advantages to this design; 1) the device will be cheaper, 2)
  the CopChip can be disabled easily. In fact disabling the CopChip can be
  done by simply removing the chip itself, and tying the output bit to either
  PWR or GND. As an added bonus for manufacturing this leaves only a very
  small deviation in the production lines for inside and outside the US. This
  seems to be a reasonable way to design to fit the requirements, without
  allowing for software disablement (since it is purely hardware).
  Joe


 Bz! Wrong Answer !

 How do you prevent some  hacker/pirate (digital rights freedom fighter)
from
 disabling the CopGate (by either removing the CopChip, finding a way to
 bypass it, or figure out how to make it think it's in, Government Snoop
 mode ) ?

To quote myself, "the CopChip can be disabled easily" (last paragraph, the
sentence that begins with "For the consumer . . ."). As has been pointed out
by numerous people, there is no solution to this. With a minimal amount of
electrical engineering knowledge it is possible for individuals to easily
construct a new ADC anyway.


 Then the watermark can be removed.

Which can and should be done after conversion.

 Remember it only requires ONE high-quality non-watermarked analog to
digital
 copy to make it on the net and it's all over.

You seem to be of the mistaken opinion that I believe this to be a good
thing, when the design I presented was designed to minimize the cost of
design, manufacture, and removal. I am of the fundamental opinion that this
is not a legal problem; it is a problem of the MPAA, and anyone else that
requires a law like this to remain profitable is approaching advertising
incorrectly. The Hollywood studios have already found the basic solution:
sell advertising space _within_ the program. In fact some movies are almost
completely subsidized by the ad space within the movie. By moving to that
model for primary revenue it is easy to accept that a massive number of
copies will be made, since that improves the value of the ad space in your
next movie/episode. Of course I'm not involved with any studio, so they
don't ask my opinion.
Joe




Re: RE: FC: Hollywood wants to plug analog hole, regulate A-D

2002-06-02 Thread Joseph Ashwood

Everything I'm about to say should be taken purely as an analytical
discussion of possible solutions in light of the possibilities for the
future. For various reasons I discourage performing the analyzed alterations
to any electronic device, it will damage certain parts of the functionality
of the device, and may cause varying amounts of physical, psychological,
monetary and legal damages to a wide variety of things.

There seems to be a rather significant point that is being missed by a large
portion of this conversation.

The MPAA has not asked that all ADCs be forced to comply, only that those in
a position to be used for video/audio be controlled by a cop-chip. While the
initial concept for this is certainly to bloat the ADC to include the
watermark detection on chip, there are alternatives, and at least one that
is much simpler to create, as well as more beneficial for most involved
(although not for the MPAA). Since I'm writing this in text I cannot supply
a wonderful diagram, but I will attempt anyway. The idea looks somewhat like
this:

analog source --> ADC --> CopGate --> digital

Where the ADC is the same ADC that many of us have seen in undergrad
electrical engineering, or any suitable replacement. The CopGate is the new
part, and will not be normally as much of a commodity as the ADC. The
purpose of the CopGate is to search for watermarks, and if found, disable
the bus that the information is flowing across, this bus disabling is again
something that is commonly seen in undergrad EE courses, the complexity is
in the watermark detection itself.

The simplest design for the copgate looks somewhat like this (again bad
diagram):

in --------|---buffer gates---> out
CopChip----|

Where the buffer gates are simply standard buffer gates.

This overall design is beneficial for the manufacturer because the ADC does
not require redesign, and may already include the buffergates. In the event
that the buffer needs to be offchip the gate design is well understood and
commodity parts are already available that are suitable. For the consumer
there are two advantages to this design; 1) the device will be cheaper, 2)
the CopChip can be disabled easily. In fact disabling the CopChip can be
done by simply removing the chip itself, and tying the output bit to either
PWR or GND. As an added bonus for manufacturing this leaves only a very
small deviation in the production lines for inside and outside the US. This
seems to be a reasonable way to design to fit the requirements, without
allowing for software disablement (since it is purely hardware).
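
For anyone who wants the gating behavior spelled out, here is a rough
behavioral model in Python. It is purely illustrative (the names and the
None-for-disabled-bus convention are mine, not from any proposal); the real
thing would of course be gates, not software:

# Behavioral model of the CopGate: pass samples through until the
# watermark detector fires, then disable the output bus permanently.
def copgate(samples, watermark_detected):
    # samples: iterable of digitized values coming out of the ADC.
    # watermark_detected: callable taking the samples seen so far and
    # returning True once it recognizes a watermark.
    seen = []
    enabled = True
    for s in samples:
        seen.append(s)
        if enabled and watermark_detected(seen):
            enabled = False              # tri-state the bus, in hardware terms
        yield s if enabled else None     # None stands in for a disabled bus

# "Removing the CopChip and tying the output bit to PWR or GND" amounts to
# replacing the detector with a constant:
never_detect = lambda seen: False
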
Joe




Re: How can i check the authenticity of a private key

2002-05-31 Thread Joseph Ashwood


- Original Message -
From: surinder pal singh makkar [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Friday, May 31, 2002 5:30 AM
Subject: CDR: How can i check the authenticity of a private key


 Hi List,

 I am a newbie in cryptography. What I have learnt till
 now is that in assymeric cryptography scenario we have
 a private key and we generate the public key
 corresponding to it and then we send it to the central
 agency.
 Suppose after sometime I have a private key and the
 public key. Is there some software tool which can tell
 me whether the public key is the same corresponding to
 the private key I am having. Also is there some tool
 which can tell me whether the keys have been curropted
 or not

Sure, and it's fairly easy too. Choose some random data, encrypt it with the
public key, and decrypt with the private key; if the data comes back intact,
the keys match. Of course this isn't a perfect test, but for any given
candidate key pair the odds are heavily in your favor. If you want to be
really sure, repeat it a few times.
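
If you want something concrete, here is a minimal sketch of that check in
Python for raw RSA values (n, e, d assumed already extracted from the keys;
purely illustrative, no padding, and the names are mine):

import secrets

def keys_match(n, e, d, trials=4):
    # Encrypt random data with the public key (n, e), decrypt with the
    # private exponent d, and see whether it comes back intact.
    for _ in range(trials):
        m = secrets.randbelow(n - 2) + 2      # random "plaintext" in [2, n-1]
        if pow(pow(m, e, n), d, n) != m:
            return False
    return True
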
Joe




Re: Re: disk encryption modes

2002-05-01 Thread Joseph Ashwood

- Original Message -
From: Morlock Elloi [EMAIL PROTECTED]

 Collision means same plaintext to the same ciphertext.

Actually all it means in this case is the same ciphertext; since the key is
the same it of course carries back to the plaintext, but that is irrelevant
at this point. The critical fact is that the ciphertexts are the same.

 The collision happens on
 the cypher block basis, not on disk block basis.

The only one that matters is the beginning of the disk block, since that is
what was being detected.

 This has nothing to do with practical security.

It has everything to do with practical security. This collision of headers
leaks information, and that leak is what I highlighted.

 You imply more than *hundred thousand* of identical-header word *docs* on
the
 same disk and then that identifying several of these as potential word
docs is
 a serious leak.

What I said was that given a significant number of documents with identical
headers (I selected Word documents because businessmen generally have a lot
of them), it will be possible to detect a reasonable percentage of them
fairly easily. I never implied, much less stated, that there would be 100,000
of these; I stated that there is somewhere on the order of 100,000
possibilities for collision (80,000 is close enough, and even 50,000 can
sometimes be considered to be on the same order).

The ability to identify that document X and document Y are Word documents
may in fact be a serious leak under some circumstances, including where the
data path has been tracked. To steal an example from the current news, if HP
and Compaq had trusted the cryptography, and their messages (but not the
contents) had been traced and linked, there would have been substantial
prior knowledge that something big was happening; this would have meant an
opportunity for someone to perform insider trading without any evidence of
it. This encryption mode poses a significant, real security threat in
realistic situations.
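
Taking the 16-bit random fill proposal from earlier in the thread as the
concrete case (my reading of the numbers above, i.e. roughly 65,536
possibilities per plaintext block), a rough Python model of how quickly the
collisions become visible:

# Under ECB with a 16-bit random fill, two blocks carrying the same 112
# plaintext bits encrypt identically whenever their fills coincide, and
# the attacker sees that equality. Probability that a given such block
# collides with at least one of the other n-1 blocks of the same header:
def detectable_fraction(n_blocks, pad_bits=16):
    space = 2 ** pad_bits
    return 1.0 - (1.0 - 1.0 / space) ** (n_blocks - 1)

for n in (1000, 10000, 80000):
    print(n, round(detectable_fraction(n), 3))
    # roughly 0.015, 0.141, and 0.705 respectively
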
Joe




Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: Adam Back [EMAIL PROTECTED]

 On Fri, Apr 26, 2002 at 11:48:11AM -0700, Joseph Ashwood wrote:
  From: Bill Stewart [EMAIL PROTECTED]
   I've been thinking about a somewhat different but related problem
lately,
   which is encrypted disk drives.  You could encrypt each block of the
disk
   with a block cypher using the same key (presumably in CBC or some
similar
   mode), but that just feels weak.
 
  Why does it feel weak? CBC is provably as secure as the block cipher
(when
  used properly), and a disk drive is really no different from many
others. Of
  course you have to perform various gyrations to synchronise everything
  correctly, but it's doable.

 The weakness is not catastrophic, but depending on your threat model
 the attacker may see the ciphertexts from multiple versions of the
 plaintext in the edit, save cycle.

That could be a problem; you pointed out more information in your other
message, but obviously this would have to be dealt with somehow. I was going
to suggest that maybe it would be better to encrypt at the file level, but
this can very often leak more information and, depending on how you do it,
will leak directory structure. There has to be a better solution.

  Well it's not all the complicated. That same key, and encrypt the disk
  block number, or address or anything else.

 Performance is often at a premium in disk driver software --
 everything moving to-and-from the disk goes through these drivers.

 Encrypt could be slow, encrypt for IV is probably overkill.  IV
 doesn't have to be unique, just different, or relatively random
 depending on the mode.

 The performance hit for computing IV depends on the driver type.

 Where the driver is encrypting a disk block at a time, then say 512 bytes
 (the standard smallest disk block size) divided into AES block sized
 chunks of 16 bytes each is 32 encrypts per IV generation.  So if IV
 generation is done with a block encrypt itself that'll slow the system
 down by 3.125% right there.

 If the driver is higher level using file-system APIs etc it may have
 to encrypt 1 cipher block size at a time each with a different IV, use
 encrypt to derive IVs in this scenario, and it'll be a 100% slowdown
 (encryption will take twice as long).

That is a good point. Of course we could just use the old standby solution:
throw hardware at it. The hardware encrypts at disk (or even disk cache)
speed on the drive, eliminating all issues of this type. It is not a
particularly cost-effective solution in many cases, but a reasonable option
for others.

  This becomes completely redoable (or if you're willing to sacrifice
  a small portion of each block you can even explicitly store the IV).

 That's typically not practical, not possible, or anyway very
 undesirable for performance (two disk hits instead of one),
 reliability (write one without the other and you lose data).

Actually I was referring to changing the data portion of the block from
{data}
to
{IV, data}

placing all the IVs at the head of every read. This of course will sacrifice
k bits of the data space for little reason.

   I've been thinking that Counter Mode AES sounds good, since it's easy
   to find the key for a specific block.   Would it be good enough just
to
  use
Hash( (Hash(Key, block# ))
   or some similar function instead of a more conventional crypto
function?
 
  Not really you'd have to change the key every time you write to
  disk, not exactly a good idea, it makes key distribution a
  nightmare, stick with CBC for disk encryption.

 CBC isn't ideal as described above.  Output feedback modes like OFB
 and CTR are even worse as you can't reuse the IV or the attacker who
 is able to see previous disk image gets XOR of two plaintext versions.

 You could encrypt twice (CBC in each direction or something), but that
 will again slow you down by a factor of 2.

 Note in the file system level scenario an additional problem is file
 system journaling, and on-the-fly disk defragmentation -- this can
 result in the file system intentionally leaving copies of previous or
 the same plaintexts encrypted with the same key and logical position
 within a file.

Yeah, the defragmentation would have to be smart; it can't simply copy the
disk block (with the disk-block-based IV) to a new location. This problem
disappears in the {IV, data} block type, but that has other problems that
are at least as substantial.

 So it's easy if performance is not an issue.

Or if you decide to throw hardware at it.

 Another approach was Paul Crowley's Mercy cipher which has a 4Kbit
 block size (= 512 bytes = sector sized).  But it's a new cipher and I
 think already had some problems, though performance is much better
 than eg AES with double CBC, and it means you can use ECB mode per
 block and key derived with a key-derivation function salted by the
 block-number (the cipher includes such a concept directly in it's
 key-schedule), or CBC mode with an IV derived from the block number

Re: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: Adam Back [EMAIL PROTECTED]

 Joseph Ashwood wrote:
  Actually I was referring to changing the data portion of the block
  from {data} to {IV, data}

 Yes I gathered, but this what I was referring to when I said not
 possible.  The OSes have 512 bytes ingrained into them.  I think you'd
 have a hard time changing it.  If you _could_ change that magic
 number, that'd be a big win and make the security easy: just pick a
 new CPRNG generated IV everytime you encrypt a block.  (CPRNG based on
 SHA1 or RC4 is pretty fast, or less cryptographic could be
 sufficient depending on threat model).

From what I've seen of a few OSs there really isn't that much binding to 512
bytes in the OS per se, but the file system depends on it completely.
Regardless, the logical place IMO to change this is at the disk level, if the
drive manufacturers can be convinced to produce drives that offer 512+16 byte
sectors. Once that initial break happens, all the OSs will play catch-up to
support the drive; that will break the hardwiring and give us our extra
space. Of course, convincing the hardware vendors to do this without a
substantial hardware reason will be extremely difficult. On our side, though,
is the fact that hard disks already store more than just the data: they also
store a checksum and some sector reassignment information (SCSI drives are
especially good at this, IDE does it under the hood if at all), and I'm sure
there's other information. If this area could be expanded by 16 bytes, that
would supply the necessary room. Again, convincing the vendors to supply this
would be a difficult task, and it would require adding functionality to the
hard drive to either decrypt on the fly or hand the key over to the driver.
  Yeah the defragmentation would have to be smart, it can't simply copy
the
  di[s]k block (with the disk block based IV) to a new location.

 Well with the sector level encryption, the encryption is below the
 defragmentation so file chunks get decrypted and re-encrypted as
 they're defragmented.

 With the file system level stuff the offset is likley logical (file
 offset etc) rather than absolute so you don't mind if the physical
 address changes.  (eg. loopback in a file, or file system APIs on
 windows).

That's true. I was thinking more of something that will for now run in
software and in the future gets pushed down to the hardware, where we can use
a smartcard/USB key/whatever comes out next to feed it the key. A
meta-filesystem would be useful as a short-term measure, but it still keeps
all the keys in system memory where programs can access them; if we can
maintain the option of moving it to hardware later on, I think that would be
a better solution (although also a harder one).

I feel like I'm missing something that'll be obvious once I've found it.
Hmm, maybe there is a halfway decent solution (although not at all along the
same lines). For some reason I was just remembering SANs; it's a fairly
well-known problem to design and build secure file system protocols
(although they don't get used much). So it might actually be a simpler
concept to build a storage area network using whatever extra-hardened OSs we
need, with only the BIOS being available without a smartcard. Put the
smartcard in, the smartcard itself decrypts/encrypts the sector keys (or
maybe some larger grouping), and the SAN host decrypts the rest. Pull out
the smartcard, the host can detect that, flush all caches, and shut itself
off. This has some of the same problems, but at least we're not going to
have to design a hard drive, and since it's a remote file system I believe
most OSs assume very little about sector sizes. Of course, as far as I'm
concerned this should still be just a stopgap measure until we can move that
entire SAN host inside the client computer.

Now for the biggest question, how do we get Joe Public to actually use this
correctly (take the smart card with them, or even not choose weak
passwords)?
Joe




Re: RE: Re: disk encryption modes (Re: RE: Two ideas for random number generation)

2002-04-27 Thread Joseph Ashwood
- Original Message -
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, April 27, 2002 12:11 PM
Subject: CDR: RE: Re: disk encryption modes (Re: RE: Two ideas for random
number generation)


 Instead of adding 16 bytes to the size of each sector for sector IV's, how
 about having a separate file (which could be stored on a compact flash
 card, CDRW or other portable media) that contains the IV's for each disk
 sector?

Not a very good solution.

 You could effectively wipe the encrypted disk merely by wiping the IV
 file, which would be much faster than securely erasing the entire disk.

Actually that wouldn't work, at least not in CBC mode (which is certainly my
preference, and seems to be the generally favored mode for disk encryption).
In CBC mode, not having the IV (setting the IV to 0) only destroys the first
block; after that everything decrypts normally, so the only wiped portion of
the sector is the first block.

 If the IV file was not available, decryption would be impossible even if
 the main encryption key was rubberhosed or otherwise leaked. This could be
 a very desirable feature for the tinfoil-hat-LINUX crowd--as long as you
 have possession of the compact flash card with the IV file, an attacker
 with your laptop isn't going to get far cracking your encryption,
 especially if you have the driver constructed to use a dummy IV file on the
 laptop somewhere after X number of failed passphrase entries to provide
 plausible deniability for the existence of the compact flash card.

And then the attacker would just get all of your file except the first block
(assuming the decryption key is found).

 To keep the IV file size reasonable, you might want to encrypt logical
 blocks (1K-8K, depending on disk size, OS, and file system used, vs 512
 bytes) instead of individual sectors, especially if the file system thinks
 in terms of blocks instead of sectors. I don't see the value of encrypting
 below the granularity of what the OS is ever going to write to disk.

That is a possibility, and actually I'm sure it's occurred to the hard drive
manufacturers that the next time they do a full overhaul of the wire protocol
they should enable larger blocks (if they haven't already; like I said
before, I'm not a hard drive person). This would serve them very well, as
they would have to store less information, increasing the disk size
producible per cost (even if not by much, every penny counts when you sell a
billion devices). Regardless, this could be useful for the disk encryption,
but assuming the worst case won't lose us anything in the long run, and
should enable the best case to be done more easily, so for the sake of
simplicity, and satisfying the worst case, I'll keep on calling them sectors
until there's a reason not to.

Joe


Re: Re: disk encryption modes

2002-04-27 Thread Joseph Ashwood

- Original Message -
From: Morlock Elloi [EMAIL PROTECTED]
  There's no need to go to great lengths to find a place to store the IV.

 Wouldn't it be much simpler (having in mind the low cost of storage), to
simply
 append several random bits to the plaintext before ECB encrypton and
discard
 them upon decryption ?

 For, say, 128-bit block cipher and 16-bit padding (112-bit plaintext and
16-bit
 random fill) the storage requirement is increased 14% but each block is
 completely independent, no IV is used at all, and as far as I can see all
 pitfails of ECB are done away with.

The bigger problem is that you're cutting drive performance by 14%;
considering that people notice a difference on the order of 10%, people are
going to complain, and economically this will be a flop. A drive setup like
this would be worse than useless: it would give the impression that
encryption must come at the cost of speed. Designing this into a current
system would set back the goal of encryption everywhere.

 Probability of the same plaintext encrypting to the same cyphertext is 1
in
 65536.

Which is nowhere near useful. 1 in 65536 is trivial in cryptographic terms,
especially when compared to 1 in approximately 3.4*10^38 (i.e., 2^128).
Additionally you'll be sacrificing _more_ of the sector to what amounts to
an IV, and in exchange you'll be decreasing security. If instead in that
512KB block you take up 128 bits, you'll only lose about 0.02% performance,
and we were already trying to avoid that (although for other reasons).
Joe




Re: RE: Two ideas for random number generation

2002-04-26 Thread Joseph Ashwood

- Original Message -
From: Bill Stewart [EMAIL PROTECTED]

 I've been thinking about a somewhat different but related problem lately,
 which is encrypted disk drives.  You could encrypt each block of the disk
 with a block cypher using the same key (presumably in CBC or some similar
 mode),
 but that just feels weak.

Why does it feel weak? CBC is provably as secure as the block cipher (when
used properly), and a disk drive is really no different from many others. Of
course you have to perform various gyrations to synchronise everything
correctly, but it's doable.

 So you need some kind of generator of
 pretty-random-looking keys so that each block of the disk gets a different
key,
 or at the very least a different IV for each block of the disk,
 so in some sense that's a PRNG.  (You definitely need a different key for
each
 block if you're using RC4, but that's only usable for Write-Once media,
 i.e. boring.)
 Obviously you need repeatability, so you can't use a real random number
 generator.

Well, it's not all that complicated. Take that same key and encrypt the disk
block number, or address, or anything else, to get the IV. This becomes
completely recomputable (or, if you're willing to sacrifice a small portion
of each block, you can even explicitly store the IV).
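
A minimal sketch of that, in Python with the pyca/cryptography package. The
sector size, the names, and the choice of deriving the IV by encrypting the
sector number under the same key are my assumptions about one reasonable way
to do it, not a specification:

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR = 512   # bytes, assumed

def sector_iv(key, sector_number):
    # Derive the IV by encrypting the sector number under the same key.
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(sector_number.to_bytes(16, "big")) + enc.finalize()

def encrypt_sector(key, sector_number, plaintext):
    assert len(plaintext) == SECTOR
    iv = sector_iv(key, sector_number)
    enc = Cipher(algorithms.AES(key), modes.CBC(iv)).encryptor()
    return enc.update(plaintext) + enc.finalize()

def decrypt_sector(key, sector_number, ciphertext):
    iv = sector_iv(key, sector_number)
    dec = Cipher(algorithms.AES(key), modes.CBC(iv)).decryptor()
    return dec.update(ciphertext) + dec.finalize()

The IV is recomputable from the sector number alone, so nothing extra has to
be stored; the caveat raised elsewhere in this thread still applies, namely
that rewriting a sector under the same IV shows an attacker with old
snapshots roughly where the data changed.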

 I've been thinking that Counter Mode AES sounds good, since it's easy
 to find the key for a specific block.   Would it be good enough just to
use
  Hash( (Hash(Key, block# ))
 or some similar function instead of a more conventional crypto function?

Not really; you'd have to change the key every time you write to disk, which
is not exactly a good idea since it makes key distribution a nightmare.
Stick with CBC for disk encryption.
Joe




Re: RE: Lucky's 1024-bit post [was: RE: objectivity and factoring analysis]

2002-04-24 Thread Joseph Ashwood

- Original Message -
From: Morlock Elloi [EMAIL PROTECTED]

 Most hardware solutions that I'm aware of support 1024-bit modular
arithmetic.
 I don't know how easy or hard it is to do 2048-bit ops with 1024-bit
 primitives, or is there any 2048-bit HW around.

For encryption, you're out of luck: just the overhead of sending the data
over the relatively slow link to the device takes longer than it takes a 486
to do the 2048-bit encryption (or signature verification). For
decryption/signing the matter is entirely different. Assuming that p and q
are known at decryption time, it's a fairly simple matter to use the Chinese
Remainder Theorem along with the 1024-bit mod-exponentiators to get the
correct answer. The problem is that some of those same decryption/signing
engines already use this trick internally and so really only support 512-bit
operands, in which case you're in the same boat as with encryption.
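
For concreteness, the CRT trick looks like this in Python (pure integer
math, Python 3.8+ for the modular inverse; p, q and d assumed known, and no
padding or blinding shown):

def crt_private_op(c, p, q, d):
    # Split one full-size private exponentiation mod p*q into two
    # half-size exponentiations and recombine (Garner's method).
    dp, dq = d % (p - 1), d % (q - 1)
    qinv = pow(q, -1, p)                 # q^-1 mod p
    mp = pow(c % p, dp, p)               # half-size exponentiation mod p
    mq = pow(c % q, dq, q)               # half-size exponentiation mod q
    h = (qinv * (mp - mq)) % p
    return mq + h * q                    # result mod p*q

For a 2048-bit modulus the two exponentiations are each 1024-bit operations,
which is exactly what the existing hardware units provide.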

The good part of all this is that many companies are now expanding their
line to offer 2048-bit capable machines, so it shouldn't be long before
everyone can finally retire their 1024-bit keys, and maintain speed.
Joe




Re: (P)RNG's and k-distribution

2002-04-24 Thread Joseph Ashwood

- Original Message -
From: Jim Choate [EMAIL PROTECTED]

 For a RNG to -be- a RNG it -must- be infinity-distributed. This means that
 there are -no- string repititions -ever-.

Ummm, wrong. That would imply that in a binary stream, once 0 has been used
it can never be used again. This of course means that the next bit must be 1
(which has no entropy, but that is beside the point). Following this, there
can be no stream. The requirement for a perfect RNG is that, given the data
points [0, n-1] and [n+1, infinity], it is impossible to determine the point
n with any skew in the probability (in binary this simplifies to: with
probability higher than 1/2).

Note that this does not mean that the data point n cannot be the same as
some other point m, simply that m happened (or will happen) and the exact
time (place) of its happening doesn't help determine the value at n.

For an RNG, the only requirement is that it generates numbers that resemble
random in some way; it is the super-class of true RNGs, pseudo RNGs, perfect
RNGs, and pretty much any other RNG you can think of.

 If this can't be guaranteed then
 the algorithm can be a PRNG (there are other conditionals).

Wrong again. The requirement for a pseudo RNG is that it has an algorithm
(very often with a key as well) that generates the sequence. There are
exceptions: /dev/random is a pseudo RNG, even though it breaks this rule.

 A PRNG -by
 definition- can -not- rule out repititions of some
 very_large-distribution. Hence, -all- PRNG's must assume - even in
 principle- some very_large-distribution sequence.

Actually I think that's true.

 So, the statement My PRNG has no modulus is incorrect even in principle.

That depends. As I pointed out earlier, /dev/random is a pseudo RNG, yet on
a system in use its internal state is ever changing (assuming the use is at
least slightly entropic): /dev/random has perturbations in its state that
make it non-repeating. Yes, it does have a certain quantity of state, but
that state continually has additional entropy mixed into it.

 It's worth pointing out that the test of 'randomness' are -all'
 statistical. They all have a margin of error. There is the a priori
 recognition of 'window' effect.

Only the tests on the stream are statistical; tests on the device itself can
be stateless, eliminating the window effect. It has been proven that one
cannot test the randomness of the output stream, leaving only the
possibility of testing the randomness that the device itself is creating (or
harvesting).
Joe




Re: Re: Two ideas for random number generation

2002-04-22 Thread Joseph Ashwood

- Original Message -
From: Eugen Leitl [EMAIL PROTECTED]

 On Mon, 22 Apr 2002, Tim May wrote:

  What real-life examples can you name where Gbit rates of random digits
  are actually needed?

 Multimedia streams, routers. If I want to secure a near-future 10 GBit
 Ethernet stream with a symmetric cypher for the duration of a few years
 (periodic rekeying from a RNG might help?) I need both lots of internal
 state (the PRNG can't help leaking information about its state in the
 cypher stream, though the rate of leakage is the function of smarts of the
 attacker) and a high data rate.

Actually that's not necessarily the case. Let's use your example of a
multimedia stream server that is filling a 10 Gbit/s connection. Right now
the practical minimum per stream seems to be 56 kbit/s. So even if every
available connection were set up in the same second, the server would only
need a rate of 2.2 million bits/sec from its RNG to build a 128-bit key for
each. A good design for this, though, has the client doing most of the
random number choosing, where the only purpose of the server's random number
is to prevent the client from biasing the result, so 128 bits is more than
sufficient. So 2.2 Mbit/sec seems to be the peak for that. Finding
situations where a decent design will yield a need for an RNG to run at
about 1 Gbit/sec is extremely difficult. With poor designs it's actually
rather easy: take an RNG that is poor enough (or a situation where that is a
basic assumption) that it has to be distilled to one billionth of its size;
to support that multimedia stream server would then require 2.2 million
Gigabits per second (approximately).

  In any case, if someone wants Gbits per second of random numbers,
  it'll cost 'em, as it should. Not something I think we need to worry
  much about.

 Maybe, but it's neat trying to see how the constraints of 2d and 3d layout
 of cells, signal TOF and fanout issues influence PRNG design if lots of
 state bits and a high data rate are involved. It is not very useful right
 now, agreed.

I think it would be a good process to go through to develop a design for
one, or at least a basic outline for how it could be done, but the basic
idea that comes to mind looks a lot like /dev/random, but run in parallel
collecting from several sources including a custom hardware pool similar to
the Intel RNG.
Joe




Re: Re: Two ideas for random number generation: Q for Eugene

2002-04-22 Thread Joseph Ashwood


- Original Message -
From: gfgs pedo [EMAIL PROTECTED]

   Oh surely you can do better than that - making it
  hard to guess the seed
   is also clearly a desirable property (and one that
  the square root rng
   does not have).
 U can choose any arbitrary seed(greater than 100 bits
 as he (i forgot who) mentioned earlier.Then subject it
 to the Rabin-Miller test.
 Since the seed value is a very large number,it would
 be impossible to determine the actual value.The
 chances the intruder  find the correct seed or the
 prime number hence generated is practically verly low.

You act like the only possible way to figure it out is to guess the initial
seed. The truth is that the number used leaves a substantial amount of
residue in its square root, and there are various rules that can be applied
to square roots as well. Since with high likelihood you will have a lot of
small factors but few large ones, it's a reasonable beginning to simply
store the roots of the first many primes; this gives you a strong network to
work from when looking for those leftover signatures. With decent likelihood
the first 2^32 primes would be sufficient for this when you choose 100-bit
numbers, and this attack will be much faster than brute force. So while you
have defeated brute force (no surprise there, brute force is easy to
defeat), you haven't developed a strong enough generation sequence to really
get much of anywhere.
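
For reference, the kind of generator being discussed looks roughly like this
(my reconstruction of the idea in Python, not the original poster's code;
math.isqrt needs Python 3.8+):

from math import isqrt

def sqrt_digit_stream(seed, ndigits):
    # Decimal digits of sqrt(seed) after the decimal point, computed as an
    # integer square root of the seed scaled by 10^(2*ndigits). Shown only
    # to make the scheme concrete; the residue structure described above is
    # exactly what makes it attackable.
    scaled = isqrt(seed * 10 ** (2 * ndigits))
    return str(scaled)[len(str(isqrt(seed))):]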

  Of course, finding the square root of a 100 digit
  number to a
  precision of hundreds of decimal places is a lot of
  computational
  effort for no good reason.
 Yes the effort is going to be large but why no good
 reason?

Because it's a broken pRNG that is extremely expensive to run. If you want a
fast pRNG you look to block ciphers in CTR mode or stream ciphers; if you
want one that's provably good you go to BBS (Blum Blum Shub, which is
probably faster than your algorithm anyway). So there's no good reason to
implement such an algorithm.
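
For comparison, Blum Blum Shub is about this much code (a toy Python sketch;
real use needs large random primes p and q, both congruent to 3 mod 4, which
I am not showing how to generate here):

import secrets

def bbs_bits(p, q, nbits):
    # Blum Blum Shub: square repeatedly modulo n = p*q and output the
    # least significant bit of each successive state.
    n = p * q
    x = secrets.randbelow(n - 3) + 2
    while x % p == 0 or x % q == 0:      # the seed must be coprime to n
        x = secrets.randbelow(n - 3) + 2
    bits = []
    for _ in range(nbits):
        x = (x * x) % n
        bits.append(x & 1)
    return bits

With toy parameters, bbs_bits(499, 547, 16) already shows the idea; the
security argument only applies when p and q are large enough that factoring
n is infeasible.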

  BTW, the original poster seemed to be under the
  delusion that
  a number had to be prime in order for its square to
  be irrational,
  but every integer that is not a perfect square has
  an irrational
  square root (if A and B are mutually prime, A^2/B^2
  can't be
  simplified).

 Nope ,I'm under no such delusion :)

Just the delusion that your algorithm was good.
Joe




Re: Re: Two ideas for random number generation

2002-04-22 Thread Joseph Ashwood


- Original Message -
From: [EMAIL PROTECTED]
To: Tim May [EMAIL PROTECTED]; Eugen Leitl [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Sunday, April 21, 2002 1:33 PM
Subject: CDR: Re: Two ideas for random number generation


 Why would one want to implement a PRNG in silicon, when one can
 easily implement a real RNG in silicon?

Because with a pRNG we can sometimes prove very important things, while with
an RNG we can prove very little (we can't even prove that entropy actually
exists, let alone that we can collect it).

 And if one is implementing a PRNG in software, it is trivial to
 have lots of internal state (asymptotically approaching one-time
 pad properties).

The problem is not having that much internal state, but what do you do with
it? Currently the best options on that front involve using block ciphers in
various modes, which have a rather small state, but again we can quite often
prove things about the construct.
Joe




Re: Re: 1024-bit RSA keys in danger of compromise

2002-03-31 Thread Joseph Ashwood

I have done a significant amount of consideration on the very questions
raised in this. This consideration has spanned approximately a month. These
are my basic conclusions:

Bernstein's proposal does have an impact, but I do not believe that 3x the
key size is necessary.
I believe Bernstein's proposal results in the necessity of a key size of
approximately 1.5 times what was required before.
I believe that there are further similar advances available to the
algorithms involved that can push this to approximately 2x.
I have reached these conclusions through a very long thought process that
involved digging through old textbooks on electrical engineering, and a
fundamental assumption that people will only construct these machines when
there is a stimulus to do so. So, for example, it would not be reasonable for
me to construct one to break 768-bit keys, because I have little interest in
the actual data, merely whether or not the data is secure. Similarly, IBM
would not likely construct one, simply because it would be economically more
feasible to dedicate that money towards research. The NSA and similar
organizations are extremely likely to strongly consider building such a
machine, because they have the money and the mandate to do whatever it takes
to gain access to the data encrypted by militaries around the world. Are
these assumptions necessarily correct? In their fundamental form they are
not; Linux is proof of this (people giving their free time to something that
they get effectively nothing out of). However, since we are talking about a
very significant investment of money to make one of usable size, these
assumptions are likely to be approximately correct.

This means that, according to my considerations, it seems reasonable to
decommission all 512-bit keys immediately (these should have been
decommissioned years ago, but there are still a few floating around); 768-bit
keys should be decommissioned at the earliest realizable opportunity (I
don't believe they are in immediate danger of compromise, but they are
compromisable); 1024-bit keys should now be considered moderately secure in
the immediate future and decommissioned over the next couple of years;
1536-bit keys are for reasonable purposes secure; 2048-bit keys are secure
for all but the most demanding situations; and 4096-bit keys are still
effectively invulnerable.

This of course makes some very blanket assumptions about the desirability of
breaking a specific key. If no one wants to read what's inside, you don't
even really need to encrypt it (note the difference between need and want).
It will still cost a minimum of 10^9 US dollars to break 1024-bit keys.
Considering that most businesses and many governments won't have this value
of information transferred in the next 100 years, the desire to break
1024-bit keys simply isn't there.

Also examine _who_ wants to read your data. If it's just messages back and
forth from your girlfriend/wife/mistress it's unlikely that 512-bits will be
broken. If you are protecting state secrets, obviously you need to consider
things more carefully, and 4096-bit keys may not even offer enough security.

As usual there is no one-stop solution for every situation, only more
considerations that need to be made. I welcome any comments on my
conclusions.
Joe




Re: Re: Jail Cell Cipher (modified RC4)

2002-02-24 Thread Joseph Ashwood


- Original Message -
From: Jeremy Lennert [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Saturday, February 23, 2002 8:15 AM
Subject: CDR: Re: Jail Cell Cipher (modified RC4)


  Unfortunately it has a rather damning effect on the cipher.
  First in the key
  scheduling there is a distinct possibility of keys that are
  impossible. It
  assumes that all K[i] are generators mod 37, so using a key
  where the offset
  is 0 will result in an infinite loop in the key scheduling, this is
  obviously a bad design decision. Second the distinguisher
  from random for
  such a small RC4 state would require a relatively small known
  plaintext. In
  fact at that size I think there are better attacks against it than the
  distinguishers known for full sized RC4. I believe it would
  be achievable to
  actually determine that complete state, although it would take more
  significant amounts of work than would be applied to most
  inmate mail (an
  encrypted message would probably be simply discarded and
  never delivered).

 The specification for the key requires all key values to be nonzero.  From
 the web site:

 an array of key values K, where each value is a nonzero alphabetical
 character or its numerical equivalent

 However, there was an error in the source code that allowed zeroes in the
 key.  This has been corrected.  Any zeroes in the key definition now cause
 the program to abort with an invalid character error message.


 Regarding the distinguisher, I don't think I understand how distinguishing
 the keystream from random amounts to an attack that will recover the
 internal state.  Could you offer further clarification on that?

In this case they are two different attacks. The first is the distinguisher,
which will let the attacker read the plaintext but not necessarily find the
internal state. The second is an attack on the internal state, where the
known small variations in the state between outputs could be used to compute
a state that is at least a full collision on the outputs.

 Incidentally, for paper-and-pencil applications, I'm assuming that the
 message length will not exceed about 100 characters.

I think that will be small enough to save the security of the system, but
I'm not sure.

 The problem with using full RC4 is not in the actual keystream generation,
 but in running the key-scheduling algorithm.  Even if we only ran the KSA
 for one round through the permutation table, estimated time is about 50
 minutes (not necessarily impractical, but making many rounds to improve
 security or repeated trials to improve accuracy very difficult) and the
 chances of performing that entire round without error for my current best
 estimations of accuracy are about 1 in 150,000.

Why not just memorize the permutation table? It's only 37 characters. Also I
don't see where a difference of an hour or two will matter; the point of
incarceration is that you can't go out and do anything you want, you have to
sit in your cell for 23 hours a day. So anything that you can encrypt in 23
hours is good enough. By your estimates that gives time for 27 KSAs (which
wouldn't increase security in the slightest, since a permutation is a
permutation), which I think should be more than enough KSAs for any
reasonable demands.

 For the modified RC4, accuracy still isn't great, but it is good enough
that
 careful error-checking may leave the algorithm feasible in terms of both
 time and accuracy.

It's the security of the scheme, not the usability, that I am questioning. I
think the artifacts of RC4 will be amplified to the point where the security
is, for all practical purposes, nonexistent. The only question remaining in
my mind is how long before those artifacts can be detected and/or made use
of?
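
To make the size of the object under discussion concrete, here is standard
RC4 scaled down to a 37-element state, in Python. This is not the Jail Cell
Cipher specification, just the generic KSA/PRGA with N = 37, so the
reduced-state artifacts can be experimented with directly:

N = 37

def ksa(key):
    # key: a list of integers in the range 1..36 (zero excluded, as the
    # specification quoted above requires).
    S = list(range(N))
    j = 0
    for i in range(N):
        j = (j + S[i] + key[i % len(key)]) % N
        S[i], S[j] = S[j], S[i]
    return S

def prga(S, length):
    # Generate 'length' keystream values in 0..36 from the state S.
    i = j = 0
    out = []
    for _ in range(length):
        i = (i + 1) % N
        j = (j + S[i]) % N
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % N])
    return out
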
Joe





Re: re: Remailer Phases

2001-08-08 Thread Joseph Ashwood

- Original Message -
From: A. Melon [EMAIL PROTECTED]
Subject: CDR: re: Remailer Phases


2. Operator probably trustworthy

 Impossible, and unnecessary. Don't assume any remops are trustworthy.

Actually it is absolutely necessary. If all operators are willing to
collude, then your precious anonymity is completely lost. A simple tracing
methodology can establish this: the first remailer operator tracks the exact
outgoing message to the next colluding operator, the second tracks it to the
third, etc., until the message escapes; then the colluding operators track
back through the list of remailers, linking based on the intermediate values
being sent, until it reaches operator 1, who knows the sending address. This
assumes a best case of the sender determining the path taken through
encryption. Worst case, the first operator can reveal the information to
everyone.
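
A toy model of the tracing in Python, just to show how little the colluding
operators need (the data structures are mine and purely illustrative):

def trace(colluding_logs, sender, first_msg, first_hop):
    # colluding_logs: {remailer: {incoming_msg: (outgoing_msg, next_hop)}},
    # i.e. each colluding operator's record of what came in and where the
    # re-encrypted copy went out.
    msg, hop = first_msg, first_hop
    path = [hop]
    while hop in colluding_logs and msg in colluding_logs[hop]:
        msg, hop = colluding_logs[hop][msg]
        path.append(hop)
    # If every operator on the path colluded, 'hop' ends up being the final
    # destination, and 'sender' was known to operator 1 all along.
    return sender, hop, path
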
Joe




Re: CDR: Re: re: Remailer Phases

2001-08-08 Thread Joseph Ashwood


- Original Message -
From: Meyer Wolfsheim [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, August 08, 2001 5:40 AM
Subject: Re: CDR: Re: re: Remailer Phases


 On Tue, 7 Aug 2001, Joseph Ashwood wrote:

  2. Operator probably trustworthy
  
   Impossible, and unnecessary. Don't assume any remops are trustworthy.
 
  Actually it is absolutely necessary. If all operators are willing to
  collude, then your precious anonymity is completely lost. A simple
  tracing methodology can establish this. The first remailer operator
  tracks the
  exact outgoing message to the next collusion, the second tracks to the
  third, etc until the message escapes, then the colluding operators track
  back through
  the list of remailers, linking based on the intermediate value being
  sent,
  until it reaches operator 1 who knows the sending address. This assumes
  a best case of the sender determining the path taken through encryption.
  Worst case the first operator can reveal the information to everyone.
  Joe

 Run your own remailer. Chain through it at some point. As long as you
 trust yourself, there is no threat.

 Who of the current remops do you trust? Why?

I don't trust any of them. I don't personally use remailers; I don't tend to
do things that are illegal, but if I did there are other methods that I'd
use.
Joe




Re: Re: Mixmaster Message Drops

2001-08-08 Thread Joseph Ashwood

- Original Message -
From: Jim Choate [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, August 08, 2001 7:05 PM
Subject: CDR: Re: Mixmaster Message Drops


 The next major question is to determine where the drops are happening.
 Inbound, outbound, inter-remailer, intra-remailer?

That matters from a correction view but not from a usage view, which I
assume we're taking. Basically we don't care what technology the remailer
uses as long as it is correct technology and trustable. From there we care
only which remailers are dysfunctional and which are useful.


 One aspect of this, assuming the remailers are under attack and that is
 the hypothesis we are going to assume, is that we need to be able to
 inject traffic into the remailer stream anonymously. Otherwise Mallet
 get's wise to what is going on and starts playing us.

Well, assuming that the remailers are under attack, we start using digital
signatures with initiation information stored in them. Mallet can introduce
duplicates, but the likelihood of a duplicate being detected rises very
quickly (i.e. at a rate of 1-(1/20)^M for M duplicate messages, assuming a
drop rate of 1 in 20). This gives us the ability to discount the vast
majority of what Mallet does and get very close to accurate values. The
bigger risk is that Mallet identifies our queries and forces the node to
function properly for the queries alone. Correcting this is much more
difficult, but would only take the use of digital signatures and encryption
on all the messages traversing the network. Since the remailer user is
inherently a more sophisticated user than Joe (l)User, this is much more
reasonable. But it still approaches impossible, because the set of remailer
users is finite, so Mallet could store all the remailer user keys and treat
them differently from the query keys. This becomes extremely difficult only
if long-term keys are defeated as well as ephemeral keys. Instead the
remailer users, or at least a large unknown portion of them, will have to
maintain statistics. If users upload to, say, Freenet once a month the
number of anonymous messages they have sent and received (without mention
of timeframe except implicitly the month), we could get an overall drop
rate, and the users wouldn't have to reveal who they are.
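
A quick check of that figure, just evaluating the 1-(1/20)^M expression
above for a few values of M:

# Probability that at least one of M injected duplicates survives, i.e. the
# chance the duplication becomes detectable, assuming the 1-in-20 drop rate.
drop_rate = 1 / 20
for m in (1, 2, 5, 10):
    print(f"M = {m:2d}: detection probability = {1 - drop_rate ** m:.10f}")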

 If at all possible all measurements should be made anonymously and as
 stealthily as possible.

Agreed. I was beginning to address this above; it still has some major
problems.

 Q: How to inject traffic into the remailer network anonymously?

Through a set of trusted remailers. If those remailers are trusted and are
used for test initiation, then the exact drop rate from that entry point
will be known. This will build a reputation for those remailers, making it
desirable for trustworthy remailer operators to be in that set by increasing
the number of messages, leading to better security when initiating from the
trusted list.

 Q: How do we measure the input/output flow without collusion of the
operator?

You count the messages in and the messages out; you don't care what they
say, where they're from, etc., and the operator doesn't even need to know
you're doing it. Of course this is a rather difficult task. The better
option would be to test the network as a whole, with users cooperating to
collect statistics on their own messages going through; this would defeat
much of what Mallet could do, because the test messages would be real
messages that are being propagated through.
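
A sketch of what that user-side collection could look like, assuming each
cooperating user anonymously reports only (sent, received) counts for their
own traffic; the numbers here are made up:

# Estimate the network-wide drop rate purely from user-reported counts,
# with no operator cooperation at all.
reports = [(40, 38), (12, 12), (75, 70), (9, 8)]   # (sent, received) per user

sent = sum(s for s, _ in reports)
received = sum(r for _, r in reports)
print(f"delivered {received}/{sent}, drop rate ~ {1 - received / sent:.1%}")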

 Q: Where are the computing resources to munge resulting flood of data over
at least a few weeks time period. How do we hide this 'extra' flow of
data? It represents an opportunity for incidental monitoring due to
load usage.

It wouldn't be that bad. Treating the network as a function of its entry
point seems easiest. Then it's just a simple fraction, which can be
published raw, or you can waste 4 seconds on a 1 GHz machine and compute
the values. Either way it's not compute intensive; most of the work needs
to be done by legitimate users with legitimate messages (to prevent Mallet
from playing with the messages).

 Q: How do we munge the data? What are we trying to 'fit'?

We are trying to determine the best entry point for anonymous remailer use,
as measured by the percentage of messages that reach their destination,
filtered to the trusted set.

 Q: Once we have the data and can (dis)prove the hypothesis, then what?

Then we only trust the servers on the trusted list, and we use the remailer
from that list with the best delivery rate. This will encourage individuals
who run worthless remailers to improve their systems, eventually leading to
only a handful of messages being dropped per year.
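
A trivial illustration of that selection step; the entry points and the
delivery rates here are made up:

# Pick the trusted entry point with the best observed delivery rate.
delivery = {"remailerA": 0.97, "remailerB": 0.91, "remailerC": 0.99}
trusted = {"remailerA", "remailerC"}
print(max(trusted, key=lambda r: delivery[r]))     # -> remailerC
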
Joe




Re: Re: Remailer Phases

2001-08-08 Thread Joseph Ashwood

- Original Message -
From: Anonymous [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, August 08, 2001 4:48 PM
Subject: CDR: Re: Remailer Phases


 An Unknown Party wrote:
  On Wed, 8 Aug 2001, Anonymous wrote:
   We need a good mixmaster net.
  
   working remailer:
 1. Average latency less than 5 min
 
  Bad. See the papers done on threats of traffic analysis/spam attacks
  against remailers.

 Average latency exists.  What do you think it should be?

 a) 5 minutes
 b) 5 hours
 c) 5 days
 d) 5 months
 e) longer

 I like a).


As has been pointed out, it's not latency alone but latency relative to
message volume that matters. If there are 2 messages a day going through
the system then 5 minutes is simply not enough; it will be completely
traceable. OTOH if there are 5000/sec going through the system then 5
minutes is arguably overkill. I think that with the current relatively low
level of usage, 24 hours is the minimum average latency that should be
used. Of course this is across the entire Mixmaster net, where messages are
dispersed and may enter at any location and leave at any location. Based on
this I believe that each node should maintain a list l of known other
nodes. It should of course select a delay time at random, say up to time t.
Assuming that the server chooses the next hop at perfect random from itself
(in which case the message exits immediately on timer expiration) and l,
this gives an equation for t in hours: f(t) = necessaryDelay, where
f(t) = t + ((|l|-1)/|l|)f(t); solving for t gives the necessary average
per-hop delay. Given a list l of magnitude 100 the value of t still works
out to be significantly greater than 5 minutes.
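
Working the recurrence through: f(t) = t + ((|l|-1)/|l|)f(t) rearranges to
f(t)/|l| = t, i.e. t = necessaryDelay/|l|. A quick sketch with the 24-hour
figure suggested above:

# t = necessaryDelay / |l|; assumes necessaryDelay = 24 hours and |l| = 100.
l_size = 100
necessary_delay_hours = 24.0
t_hours = necessary_delay_hours / l_size
print(f"average per-hop delay t = {t_hours:.2f} h = {t_hours * 60:.1f} min")
# -> 0.24 h = 14.4 min, noticeably more than the proposed 5 minutes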

So the remaining question is what value to use for necessaryDelay. This is
also dependent on the number of known nodes. All nodes must be equally
likely to be chosen for the transfer, for obvious reasons. Based on this I
believe that necessaryDelay needs to be greater than the time needed to
receive |l| messages. The reason is fairly simple: at the extreme we have
only one message going through the system at a time, which is obviously
bad; an observer simply watches the output of the system, and what comes
out is what they are looking for. With at least |l| messages going through
a node and a delay long enough to collect them (note that as the magnitude
of l increases the entire system slows, which could be bad; I'm sure I'm
missing something that will dominate on scaling), each message can be
mistaken for other messages. Since it is reasonable to expect that remailer
usage will increase at least as fast as the size of l, the latency will
likely decrease over time.

If there is sufficient demand it is entirely reasonable to reduce this
threshold from |l| to a value of at least 2, but I don't believe that is
reasonable at 100 or even 1000 remailers. If remailer usage increases to
the point where > 20% of email traffic goes through remailers it may become
feasible to lower this limit, but that is probably unnecessary, because
such scaling would result in lowered delays simply as a matter of
recomputation.

What is surprising is that this can be automatically calculated in a rather
interesting way. If each node still maintains l, it is entirely possible
for a remailer to create a message pool of size |l| and, when a new message
arrives while the pool is full, randomly select 1 entry to be flushed
toward its destination _prior_ to the insertion of the new message, with an
autoflush happening every sqrt(|l|) hours (perhaps by insertion of a null
message). This would cause a ripple effect each time a message was sent,
which could be seen as a problem by the uninitiated, because there would be
a decided pattern of travel, with each message entering the system causing
activity along a random walk. To an amateur this would appear to be a flaw
in the system, except that the message being sent by the ith node is not
the message sent by the (i-1)th node, so the risk is non-existent. And
since the average path length is going to be k = 2((|l|-1)/|l|), and the
random walk is going to choose from |l|^k paths, which we can approximate
by |l|^2, this offers a sufficient growth rate to block tracing. If this
growth rate is unacceptable we can also add a minimumHops value to the
protocol, increasing the number of paths to |l|^minimumHops + |l|^2.
minimumHops should be chosen to be a suitable number; based on current
assumptions I would recommend minimumHops = log base |l| of 2^128, making
the |l|^2 only a footnote, as the total would be greater than 2^128, giving
an enormous difficulty in even selecting a duplicate path.
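
A rough sketch of that pool behaviour, plus the minimumHops figure for
|l| = 100; the sqrt(|l|)-hour autoflush timer is left out for brevity and
everything here is illustrative:

# Per-node pool: hold up to |l| messages; when a new one arrives while the
# pool is full, flush one chosen at random before inserting the newcomer.
import math, random

class PoolingRemailer:
    def __init__(self, known_nodes):
        self.pool_size = len(known_nodes)   # |l|
        self.pool = []

    def receive(self, message):
        flushed = None
        if len(self.pool) >= self.pool_size:
            flushed = self.pool.pop(random.randrange(len(self.pool)))
        self.pool.append(message)
        return flushed                      # forwarded toward its destination, or None

node = PoolingRemailer(known_nodes=range(100))
forwarded = [node.receive(f"msg{m}") for m in range(150)]
print(sum(f is not None for f in forwarded))   # -> 50 flushes once the pool is full

# minimumHops such that |l|^minimumHops exceeds 2^128, for |l| = 100 known nodes
print(math.ceil(128 * math.log(2) / math.log(100)))   # -> 20, before the mitigating
# factors below bring the practical value down to around 10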

Mitigating factors are present, however: because each message can only
exist in one of |l| locations at a time, the maximum difficulty of guessing
is still bounded in that fashion, leaving the reasonable values for
minimumHops at around 10 for a 100-node network.
Joe




Re: Re: HushMail 2.0 released, supports OpenPGP standard

2001-07-19 Thread Joseph Ashwood

What probably happened is that you didn't see the other window come up,
where it was gathering entropy and needed your mouse input. If you don't
see that window, I can see why you wouldn't be able to upgrade.
Joe
- Original Message -
From: Steve Schear [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, July 19, 2001 8:37 PM
Subject: CDR: Re: HushMail 2.0 released, supports OpenPGP standard


 Are any of those on the list with HushMail accounts having trouble?  I've
 gone through the upgrade procedure which leaves you on a page with no exit
 and no login prompt.  If you go back to the home page to login you're sent
 right back to the migration page and round you go.

 steve