Cryptography-Digest Digest #575, Volume #11 Thu, 20 Apr 00 05:13:00 EDT
Contents:
Very Large S-Boxes VLSB's ([EMAIL PROTECTED])
Re: updated paper on easy entropy (Francois Grieu)
The Relaxation Method
Re: AES-encryption (Tom St Denis)
Re: Paper on easy entropy (Tom St Denis)
Re: updated paper on easy entropy (Tom St Denis)
Re: Very Large S-Boxes VLSB's (Tom St Denis)
BS on AES3 (from the latest Cryptogram) (David Crick)
Re: Sony's Playstation2 export-controlled (Diet NSA)
password generator (Tom St Denis)
Re: ? Backdoor in Microsoft web server ? [correction] (Diet NSA)
Re: updated paper on easy entropy (Francois Grieu)
Re: updated paper on easy entropy (Tom St Denis)
Re: Advice in my situation (Newbie) (Francois Grieu)
Re: Q: Entropy (CLSV)
Re: BS on AES3 (from the latest Cryptogram) (David Crick)
Re: BS on AES3 (from the latest Cryptogram) (Paul Rubin)
Re: BS on AES3 (from the latest Cryptogram) (Tom St Denis)
Re: BS on AES3 (from the latest Cryptogram) (Mark Wooding)
Re: Should there be an AES for stream ciphers? (Mark Wooding)
Re: ? Backdoor in Microsoft web server ? [correction] (Diet NSA)
Re: password generator (Anton Stiglic)
----------------------------------------------------------------------------
From: [EMAIL PROTECTED]
Subject: Very Large S-Boxes VLSB's
Date: Tue, 18 Apr 2000 17:42:32 GMT
I would like to know why we are still using small S-boxes designed
around 70's memory limitations.
Has anyone designed a block cipher with a VLSB (Very Large
S-Box)... something like 128x128 up to 1024x1024?
Memory is pretty cheap these days... non-linear VLSB's would be very
strong against differential and linear attacks.
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: updated paper on easy entropy
Date: Tue, 18 Apr 2000 19:50:09 +0200
Tom St Denis <[EMAIL PROTECTED]> wrote:
> I am not trying to make passwords. And high speed timers
> are not available on all platforms.
Tom is correct that the lack of an accurate timer is an issue when
timing keystrokes portably: for example the original PC timer
won't let one collect over 18 bit/s of entropy (this still
compares well with the entropy gathered from the input string
IMHO). But a single-tasking machine that has a way to run while
testing for a keystroke can do without a timer, by using a loop
counter [we don't need absolute timing].
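For illustration only, here is a sketch of the loop counter idea (not code
from the paper; kbhit() and getch() are the usual <conio.h> calls on
single-tasking PC compilers):
#include <conio.h>
/* Spin until a key is available and keep the low bits of the
   iteration count as raw, unwhitened entropy; the jitter in the
   count is what we are after, not absolute time. */
static unsigned char keystroke_sample(void)
{
    unsigned long count = 0;

    while (!kbhit())        /* busy-wait for the next keystroke */
        ++count;
    (void)getch();          /* consume the key itself */
    return (unsigned char)(count & 0xff);
}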
Still, I admit a spreadsheet might be used for input, so let
us assume only an input string from now on.
> My goal was a simple source of bits that could be
> implemented on any computer [with double data types].
The goal, and most of Tom's article, has extended to estimating
if the entropy achieved from an input string is "high enough",
and from this to estimating the entropy achieved from an input
string, right ?
This is not a mathematically well-defined problem, but it is
indeed of practical interest.
I found the code given for the order 1 entropy estimator gives
grossly overestimated entropy, because the denominator in
P[x][y] = p[x][y] / chars
is not what it should be; for an equidistributed input with s
symbols, this will be 1/(s^2) on average instead of 1/s,
so the estimated entropy is about twice what it should be.
Also, the code has some bias with the letter a, or the last
letter in the training text.
Even when this is corrected, I fear the technique will often
grossly over-estimate the entropy gathered, and therefore the
security against key searches based on simulated input as in
known password-guessing attacks.
For a start, an order 1 model trained on a lot of English text
will assign tremendously high entropy to the string aaaaaa
because the digram aa seldom occurs in English.
Even if both the training and the input are English text,
existing data (*) predicts the entropy is overestimated by a
factor well over 2, compared to state-of-the-art automated
entropy estimators.
I played a bit with the idea of the order k model trained
on the input string itself, which appeared a conservative
approach to guard against repetitive keystrokes.
I used the following simple code, which matches the output
of the order 0 model in the text, and I believe is correct
to any order.
/* estimated entropy for a len byte string, using a self-
   trained model of order ord */
#include <math.h>   /* for log() */
double entropy_var(char *str, int len, int ord) {
    int i, j, k, u, v;
    double h;

    h = 0.0;
    for (i = ord; i < len; ++i) {
        u = v = 0;
        for (j = ord; j < len; ++j) {
            /* compare the ord preceding symbols and the current symbol */
            for (k = ord; k >= 0; --k) {
                if (str[i-k] != str[j-k]) break;
            }
            if (k <= 0) {
                ++u;                /* the ord preceding symbols matched */
                if (k != 0) ++v;    /* ...and the current symbol matched too */
            }
        }
        h += log((double)u/(double)v);   /* adds -ln(v/u); converted to bits below */
    }
    return h*(1.0/log(2.0));
}
[Much faster code is possible with a little more memory]
It turns out it is way too conservative when the order grows
to 4 or more for some input like English text. Main problem
is, each time a new substring of ord+1 symbols occurs, it
generates no entropy at all, which is way too severe. And
of course strings up to the order have no entropy.
I propose to offset things a little so that the first occurrence
of a symbol will add some preset entropy of 1 bit (which is
about the entropy of a symbol in English text).
double entropy_var2(char *str, int len, int ord) {
    int i, j, k, u, v;
    double h;

    h = 0.0;
    for (i = ord; i < len; ++i) {
        u = 1; v = 0;   /* u starts one higher, so a first occurrence still scores */
        for (j = ord; j < len; ++j) {
            for (k = ord; k >= 0; --k) {
                if (str[i-k] != str[j-k]) break;
            }
            if (k <= 0) {
                ++u;
                if (k != 0) ++v;
            }
        }
        h += log((double)u/(double)v);
    }
    /* plus 1 bit for each of the first min(ord, len) symbols, which the loop skips */
    return h*(1.0/log(2.0)) + (double)((ord > len) ? len : ord);
}
If I had to quickly code a reasonably safe
"entropy estimate of a user-supplied string", I would probably
use the MINIMUM of
- 1 bit per input symbol; this roughly matches English text
- the output of entropy_var2 for every non-negative ord up
to the square root of the input length (see the sketch below).
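Concretely, that combination could look like this (a sketch only, reusing
entropy_var2 from above; nothing beyond the two rules just stated):
#include <math.h>
/* Minimum of 1 bit per input symbol and entropy_var2() for every
   order from 0 up to the square root of the input length. */
double entropy_estimate(char *str, int len)
{
    double best = (double)len;               /* the 1 bit per symbol cap */
    int    maxord = (int)sqrt((double)len);
    int    ord;

    for (ord = 0; ord <= maxord; ++ord) {
        double h = entropy_var2(str, len, ord);
        if (h < best)
            best = h;
    }
    return best;
}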
Francois Grieu
(*) Matt Mahoney: Refining the Estimated Entropy of English
by Shannon Game Simulation
<http://www.cs.fit.edu/~mmahoney/dissertation/entropy1.html>
------------------------------
From: [EMAIL PROTECTED] ()
Subject: The Relaxation Method
Date: 18 Apr 2000 11:53:38 -0700
For DES with independent subkeys, one can, just by altering any two
consecutive subkeys, produce any output from any input.
The post by Gideon Samid caused me to think of a "method" of cracking DES.
It might work for four-round DES, by solving the equations, but I think it
was probably considered at a very early stage in the existence of DES and
found not to work.
The idea is: start with a hypothetical key for DES. Given your target
input and output blocks, change subkeys 2n+1 and 2n+2 so that the desired
input block produces the desired output block (encrypt by preceding
rounds, decrypt by the following rounds) by changing, in each of the two
subkeys, the 32 bits that select entries within the S-box rows (each row
being a permutation of the 16 output values).
Then, change the key bits so that the changed subkeys are derived from it.
For another value of n, repeat the process.
I don't think this method will do anything but cycle randomly forever,
although it may have been shown to work if DES had fewer rounds. It sounds
like the sort of thing Mr. Samid is referring to.
John Savard
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: AES-encryption
Date: Tue, 18 Apr 2000 18:00:27 GMT
[EMAIL PROTECTED] wrote:
>
> Tom
>
> You are a Real PRAT.
> If you have an ounce of intelligence, you would have guessed that the
> guy made a typo...
>
> And if you read what is IN HIS SITE...ITS pretty Original stuff..totally
> outclass your school boy buggy crypto library or anything you will do in
> the future...
Funny thing is my crypto library is not based on my own ciphers or rng's
etc... I just put together something.
Well, I won't start a flamewar; either you like my library or you don't.
If you are so concerned with it, email me with what you think is wrong
with it. Of course you won't, since you are just being derogatory and mean.
Tom
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Paper on easy entropy
Date: Tue, 18 Apr 2000 18:01:18 GMT
"Trevor L. Jackson, III" wrote:
> You may want to read up a bit on keyboard usage. I believe the USSR used
> keyboard-generated keys, and this contributed to the crack of the system. I
> think you'll find the references under the Venona Project.
Where can I find this info?
Tom
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: updated paper on easy entropy
Date: Tue, 18 Apr 2000 18:10:19 GMT
Francois Grieu wrote:
>
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> > I am not trying to make passwords. And high speed timers
> > are not available on all platforms.
>
> Tom is correct that the lack of an accurate timer is an issue when
> timing keystrokes portably: for example the original PC timer
> won't let one collect over 18 bit/s of entropy (this still
> compares well with the entropy gathered from the input string
> IMHO). But a single-tasking machine that has a way to run while
> testing for a keystroke can do without a timer, by using a loop
> counter [we don't need absolute timing].
> Still, I admit a spreadsheet might be used for input, so let
> us assume only an input string from now on.
Bingo.
>
> > My goal was a simple source of bits that could be
> > implemented on any computer [with double data types].
>
> The goal, and most of Tom's article, has extended to estimating
> if the entropy achieved from an input string is "high enough",
> and from this to estimating the entropy achieved from an input
> string, right ?
> This is not a mathematically well-defined problem, but it is
> indeed of practical interest.
Bingo :-)
> I found the code given for the order 1 entropy estimator gives
> grossly overestimated entropy, because the denominator in
> P[x][y] = p[x][y] / chars
> is not what it should be; for an equidistributed input with s
> symbols, this will be 1/(s^2) on average instead of 1/s,
> so the estimated entropy is about twice what it should be.
> Also, the code has some bias with the letter a, or the last
> letter in the training text.
Well that's why I use the previous char to map my way thru the tables.
If my order-1 model is wrong then so is my order-0, since all I am doing
is making a more complex model based on the previous char. For example
'az' and 'bz', the 'z' is completely independent with the order-1
model. And this is essentially right.
After training the model for a while it estimates much better than the
order-0. And yes, there is a slight bias towards 'a', but only for the
very first char of the entire input...
> Even when this is corrected, I fear the technique will often
> grossly over-estimate the entropy gathered, and therefore the
> security against key searches based on simulated input as in
> known password-guessing attacks.
> For a start, an order 1 model trained on a lot of English text
> will assign tremendously high entropy to the string aaaaaa
> because the digram aa seldom occurs in English.
> Even if both the training and the input are English text,
> existing data (*) predicts the entropy is overestimated by a
> factor well over 2, compared to state-of-the-art automated
> entropy estimators.
Why would you train the model on English? That makes no sense. As for
the 'a's, well, let's assume the user is at least trying to be random.
> I played a bit with the idea of the order k model trained
> on the input string itself, which appeared a conservative
> approach to guard against repetitive keystrokes.
> I used the following simple code, which matches the output
> of the order 0 model in the text, and I believe is correct
> to any order.
The problem is that when you get into higher-order models you need more
input to train the model. Otherwise it will overestimate the entropy each
time.
Some guidelines to avoid bad entropy:
- Use either an order-0 or a trained order-1 model
- Require a minimum input of 250 characters. Even at 0.75 bits per char
this is 187.5 bits of entropy.
- Use the minimum of an order-0 model and an order-1 model (at the same
time :)
Tom
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Very Large S-Boxes VLSB's
Date: Tue, 18 Apr 2000 18:30:59 GMT
[EMAIL PROTECTED] wrote:
>
> I would like to know why we are still using small S-boxes designed
> around 70's memory limitations.
>
> Has anyone designed a block cipher with a VLSB (Very Large
> S-Box)... something like 128x128 up to 1024x1024?
>
> Memory is pretty cheap these days... non-linear VLSB's would be very
> strong against differential and linear attacks.
That's pretty silly actually. An 8x8 sbox takes 256 bytes of ram; a
128x128 sbox would need 2^128 entries of 128 bits each, which is on the
order of 2^132 bytes of ram. Apparently your thinking is quite flawed.
You could build a 128x128 sbox out of some intermediate algebraic steps,
whoa, that's a block cipher... sorry.
Tom
------------------------------
From: David Crick <[EMAIL PROTECTED]>
Subject: BS on AES3 (from the latest Cryptogram)
Date: Tue, 18 Apr 2000 19:18:05 +0100
The Advanced Encryption Standard (AES) is the forthcoming encryption
standard that will replace the aging DES. In 1996, the National Institute
of Standards and Technology (NIST) initiated this program. In 1997, they
sent out a call for algorithms. Fifteen candidates were accepted in 1998,
whittled down to five in 1999. This past week was the Third AES Candidate
Conference in New York. Attendees presented 23 papers (in addition to the
7 AES-related papers presented at Fast Software Encryption earlier in the
week) and 12 informal talks (more papers are on the AES website), as NIST
prepares to make a final decision later this year.
Several of the algorithms took a beating cryptographically. RC6 was
wounded most seriously: two groups were able to break 15 out of 20 rounds
faster than brute force. Rijndael fared somewhat better: 7 rounds broken
out of 10/12/14 rounds. Several attacks were presented against MARS, the
most interesting breaking 11 of 16 rounds of the cryptographic
core. Serpent and Twofish did best: the most severe Serpent attack broke 9
of 32 rounds, and no new Twofish attacks were presented. (Lars Knudsen
presented an attack at the FSE rump session, which he retracted as
unworkable two days later. Our team also showed that an attack on
reduced-round Twofish we presented earlier did not actually work.)
It's important to look at these results in context. None of these attacks
against reduced-round variants of the algorithms are realistic, in the
sense that none of them could be used to recover plaintext in any
reasonable amount of time. They are all "academic" attacks, though they
do show design weaknesses of the ciphers. If you were using these
algorithms to keep secrets, none of these attacks would cause you to lose
sleep at night. But if you're trying to select one of five algorithms as
a standard, all of these attacks are very interesting.
As the NSA saying goes: "Attacks always get better; they never get
worse." When choosing between different algorithms, it's smarter to pick
the one that has the fewest and least severe attacks. (This assumes, of
course, that all other considerations are equal.) The worry isn't that
someone else discovers another unrealistic attack against one of the
ciphers, but that someone turns one of those unrealistic attacks into a
realistic one. It's smart to give yourself as large a security margin as
possible.
Many papers discussed performance of the various algorithms. If there's
anything I learned, it's that you can define "performance" in all sorts of
ways to prove all sorts of things. This is what the trends were:
In software, Rijndael and Twofish are fastest. RC6 and MARS are also
fast on the few platforms that have fast multiplies and data-dependent
rotates: they're fast on the Pentium Pro, Pentium II, and Pentium III,
but slow on smart cards, ARM chips, and the new Intel chips (Itanium and
beyond). Serpent is very slow everywhere.
In hardware, Rijndael and Serpent are fastest. Twofish is good. RC6
is poor, and MARS is terrible.
The only two algorithms with implementation problems severe enough that I
would categorically eliminate them are MARS and RC6. MARS is so bad in hardware
that it would be a disaster for Internet applications, and RC6 is
close. And both algorithms just don't fit on small smart cards. (The RC6
team made a comment about being suitable for cheap--$5--smart cards. I am
talking about $0.25 smart cards.)
I would increase the number of rounds in Rijndael to give it a safety
margin similar to the others'. Any of Serpent, Twofish, or 18-round
Rijndael would make a good standard, but I think that Twofish gives the
best security-to-performance trade-off of the three, and has the most
implementation flexibility. So I support Twofish for AES.
------------------------------
Subject: Re: Sony's Playstation2 export-controlled
From: Diet NSA <[EMAIL PROTECTED]>
Date: Tue, 18 Apr 2000 11:40:27 -0700
In article <[EMAIL PROTECTED]>,
"Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
>Diet NSA wrote:
>> The PlayStation2 is not under export
>> control for crypto reasons but because it
>> does high speed image processing similar
>> to the type done in some missile guidance
>> systems.
>
>That might very well be the official "thought",
>but it's absurd. By the same token, BiC mechanical
>pencils should be export-controlled because they're
>used by nuclear weapons designers.
>
It was the official explanation given in
the news, and I agree that it might be
somewhat absurd, especially if the export
controls are too strict due to excessive
caution. Anyway, Saddam Hussein, for
example, already has the gyroscopes, etc.,
needed to target any major city in Europe.
"I feel like there's a constant Cuban Missile Crisis in my pants."
- President Clinton commenting on the Elian Gonzalez situation
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: password generator
Date: Tue, 18 Apr 2000 18:57:51 GMT
Got bored this afternoon so I wrote a password generator using the
windows timer rng idea.
You can get the source at
http://24.42.86.123/files/passwd.c
Or the exe (it's 3kb)
http://24.42.86.123/files/passwd.exe
It's nice for people needing to make up a good password, or for those
admins assigning passwords, etc...
The chars it outputs are a..z, A..Z, 0..9 and '{}', so there are six
bits per character. For a decent password there should be at the very
least ten characters, and more like 15 or so.
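The character mapping itself is trivial; as a rough sketch (this is not
the code at the URL above, and get_random_bit() is only a placeholder for
whatever bit source you trust):
static const char alphabet[] =
    "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789{}";

extern int get_random_bit(void);   /* placeholder: returns one unbiased bit */

/* Map 6-bit groups from the bit source onto the 64-character
   alphabet, i.e. six bits per output character. */
void make_password(char *out, int nchars)
{
    int i, j, sym;

    for (i = 0; i < nchars; ++i) {
        sym = 0;
        for (j = 0; j < 6; ++j)              /* gather six bits */
            sym = (sym << 1) | (get_random_bit() & 1);
        out[i] = alphabet[sym];              /* sym is in 0..63 */
    }
    out[nchars] = '\0';
}
At six bits per character, ten characters give 60 bits and fifteen give 90.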
Tom
------------------------------
Subject: Re: ? Backdoor in Microsoft web server ? [correction]
From: Diet NSA <[EMAIL PROTECTED]>
Date: Tue, 18 Apr 2000 11:53:57 -0700
In article <fgrieu-[EMAIL PROTECTED]le.fr>,
Francois Grieu <[EMAIL PROTECTED]> wrote:
>Jim Gillogly <[EMAIL PROTECTED]> wrote:
>> More than that: it fits the classical definition of a back door.
>> The insiders who placed this back door can access more information
>> than they're entitled to
>
>Yes. Despite Microsoft denials (*), the word "backdoor" does
>apply IMHO.
>
>
>> by using the password they left in there.
>
>It is not really a "password" I believe. It is the key of an
>encryption scheme, which makes some difference. The intent was
>apparently to rush a feature to the market, rather than leave
>an open access to a selected few.
>
>
Y'all might want to take a look at this
recent & brief news article entitled
"Gates and Gerstner Helped NSA Snoop"
which discusses the _NSAKEY, etc. The
article is near the bottom of this page:
http://jya.com/crypto.htm
"I feel like there's a constant Cuban Missile Crisis in my pants."
- President Clinton commenting on the Elian Gonzalez situation
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: updated paper on easy entropy
Date: Tue, 18 Apr 2000 21:49:26 +0200
Tom St Denis <[EMAIL PROTECTED]> wrote:
> Francois Grieu wrote:
> > I found the code given for the order 1 entropy estimator gives
> > grossly overestimated entropy, because the denominator in
> > P[x][y] = p[x][y] / chars
> > is not what it should be; for an equidistributed input with s
> > symbols, this will be 1/(s^2) on average instead of 1/s,
> > so the estimated entropy is about twice what it should be.
>
> Well that's why I use the previous char to map my way thru the
> tables.
p[x][y] is the number of times 'a'+y has been followed by 'a'+x.
You got that part right.
> If my order-1 model is wrong then so is my order-0, since all
> I am doing is making a more complex model based on the previous
> char. For example 'az' and 'bz', the 'z' is completely
> independent with the order-1 model. And this is essentially right.
I stand behind my previous claim. For example with the text
bbccb
we have p[1][1] = p[1][2] = p[2][1] = p[2][2] = 1
Your formula assigns the last 'b' entropy -log2(1/5) = 2.3,
when it should be assigned the entropy -log2(1/2) = 1 !!!
Fact is, the last 'b' follows a 'c', and what do we have that
follows a 'c'? One occurrence of 'c', and one of 'b', which
we should assign each an estimated probability of 1/2.
In other words, you should not divide by count, but by the number
of occurrences of the previous letter.
Francois Grieu
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: updated paper on easy entropy
Date: Tue, 18 Apr 2000 19:58:05 GMT
Francois Grieu wrote:
>
> Tom St Denis <[EMAIL PROTECTED]> wrote:
> > Francois Grieu wrote:
> > > I found the code given for the order 1 entropy estimator gives
> > > grossly overestimated entropy, because the denominator in
> > > P[x][y] = p[x][y] / chars
> > > is not what it should be; for an equidistributed input with s
> > > symbols, this will be 1/(s^2) on average instead of 1/s,
> > > so the estimated entropy is about twice what it should be.
> >
> > Well that's why I use the previous char to map my way thru the
> > tables.
>
> p[x][y] is the number of times 'a'+y has been followed by 'a'+x.
> You got that part right.
>
>
> > If my order-1 model is wrong then so is my order-0, since all
> > I am doing is making a more complex model based on the previous
> > char. For example 'az' and 'bz', the 'z' is completely
> > independent with the order-1 model. And this is essentially right.
>
> I stand behind my previous claim. For example with the text
> bbccb
> we have p[1][1] = p[1][2] = p[2][1] = p[2][2] = 1
> Your formula assigns the last 'b' entropy -log2(1/5) = 2.3,
> when it should be assigned the entropy -log2(1/2) = 1 !!!
> Fact is, the last 'b' follows a 'c', and what do we have that
> follows a 'c'? One occurrence of 'c', and one of 'b', which
> we should assign each an estimated probability of 1/2.
> In other words, you should not divide by count, but by the number
> of occurrences of the previous letter.
You are right... so just make up an array chars[26], and do chars[pc]++
instead of chars++?
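As an illustration of that fix (a self-contained sketch assuming lowercase
a..z input, not the code from the paper), the denominator becomes the count
of the previous letter:
#include <math.h>
/* Order-1 estimate with the corrected denominator: P(x|y) is
   p[x][y] / ctx[y], where ctx[y] counts how often y appeared as
   the previous letter, instead of dividing by the total count. */
double order1_entropy(char *str, int len)
{
    int    p[26][26] = {{0}};   /* p[x][y]: count of y followed by x */
    int    ctx[26]   = {0};     /* ctx[y]:  count of y as a previous letter */
    double h = 0.0;
    int    i;

    for (i = 1; i < len; ++i) {
        p[str[i] - 'a'][str[i-1] - 'a']++;
        ctx[str[i-1] - 'a']++;
    }
    for (i = 1; i < len; ++i) {
        int x = str[i] - 'a', y = str[i-1] - 'a';
        h += -log((double)p[x][y] / (double)ctx[y]) / log(2.0);
    }
    return h;
}
On the bbccb example this assigns the last 'b' one bit, as expected.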
Tom
------------------------------
From: Francois Grieu <[EMAIL PROTECTED]>
Subject: Re: Advice in my situation (Newbie)
Date: Tue, 18 Apr 2000 22:03:56 +0200
Pat Caudill <[EMAIL PROTECTED]> wrote:
> Would it be better to apply the MD5 to the concatenation of the score and
> the whole program? that way it would be harder to modify the program.
Right. It adds some difficulty, and is certainly useful.
The adversary has to find a way to trick the code into applying
MD5 to the original application code, rather than to what is
currently running. It is not impossible though, and as a proof
of concept the verifier will do just that. And if the MD5 code
gets at the program through the file system, a simple file system
trick (maybe as simple as giving the modified program another
name) will cut it.
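For illustration, a minimal sketch of Pat's suggestion (using OpenSSL's
MD5() from <openssl/md5.h>; the file handling is kept rough), bearing in
mind the caveat above that an attacker can simply point it at the original
file:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <openssl/md5.h>

/* Digest of "score || program image", with the program read back
   from disk.  A sketch of the idea only. */
int score_digest(long score, const char *exe_path,
                 unsigned char out[MD5_DIGEST_LENGTH])
{
    char           scorebuf[32];
    unsigned char *buf;
    long           n, slen;
    FILE          *f = fopen(exe_path, "rb");

    if (f == NULL) return -1;
    fseek(f, 0, SEEK_END);
    n = ftell(f);
    fseek(f, 0, SEEK_SET);
    slen = sprintf(scorebuf, "%ld", score);
    buf = malloc(slen + n);
    if (buf == NULL) { fclose(f); return -1; }
    memcpy(buf, scorebuf, slen);
    fread(buf + slen, 1, n, f);
    fclose(f);
    MD5(buf, slen + n, out);    /* 16-byte digest of score || program */
    free(buf);
    return 0;
}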
Francois Grieu
------------------------------
From: CLSV <[EMAIL PROTECTED]>
Subject: Re: Q: Entropy
Date: Tue, 18 Apr 2000 22:54:32 +0200
Diet NSA wrote:
> If I understand correctly, the algorithmic
> complexity of *finite* strings depends on
> which UTM is used. If we switched to a
> new UTM then the complexities would
> change by a *bounded* function. If this is
> true, then wouldn't the complexities of
> *infinite* strings be definable
> independently of which UTM is used?
Not if the complexity of some infinite strings
is finite. Then the (size of the) specific UTM
still matters. For example you can define a UTM
that gives the infinite string 1111111... a lower
complexity than 10101010.... Another UTM may give
10101010.... the lower complexity (within a
constant independent of the two strings).
Regards,
CLSV
------------------------------
From: David Crick <[EMAIL PROTECTED]>
Subject: Re: BS on AES3 (from the latest Cryptogram)
Date: Tue, 18 Apr 2000 21:56:07 +0100
David Crick wrote:
(quoting BS)
>
> I would increase the number of rounds in Rijndael to give it a safety
> margin similar to the others'. Any of Serpent, Twofish, or 18-round
> Rijndael would make a good standard, but I think that Twofish gives the
> best security-to-performance trade-off of the three, and has the most
> implementation flexibility. So I support Twofish for AES.
In all fairness to Bruce I should point out that I clipped the
footer of the article, which stated:
> For the record, I am one of the creators of Twofish:
> <http://www.counterpane.com/twofish.html>
The full cryptogram can be viewed at
http://www.counterpane.com/crypto-gram-0004.html
David.
------------------------------
From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: BS on AES3 (from the latest Cryptogram)
Date: 18 Apr 2000 21:23:10 GMT
Why do you say this is BS?
------------------------------
From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: BS on AES3 (from the latest Cryptogram)
Date: Tue, 18 Apr 2000 21:37:34 GMT
Paul Rubin wrote:
>
> Why do you say this is BS?
Hehehe, no, it's 'by BS'. BS = Bruce S.
Tom
------------------------------
From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: BS on AES3 (from the latest Cryptogram)
Date: 18 Apr 2000 21:36:48 GMT
Paul Rubin <[EMAIL PROTECTED]> wrote:
> Why do you say this is BS?
Because it was written by Bruce Schneier and those are his initials.
Just a guess.
-- [mdw]
------------------------------
From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: Should there be an AES for stream ciphers?
Date: 18 Apr 2000 22:14:20 GMT
Anton Stiglic <[EMAIL PROTECTED]> wrote:
> For hash functions, everyone seems happy with SHA1, and the only
> competition is probably RIPEMD, so I don't think that there is a
> specific need.
Biham and Anderson's Tiger offers a wider hash. I've been concerned
for a while that hash functions in current use are too narrow. The
recent news (to me at least) of the NESSIE project and SHA2 is
extremely welcome.
-- [mdw]
------------------------------
Subject: Re: ? Backdoor in Microsoft web server ? [correction]
From: Diet NSA <[EMAIL PROTECTED]>
Date: Tue, 18 Apr 2000 15:35:32 -0700
In article <004baf42.360c0b25@usw-ex0102-014.remarq.com>,
Diet NSA <[EMAIL PROTECTED]> wrote:
>The article is near the bottom of this page:
Sorry, I meant to say that the article is
near the bottom of the initial hyperlinked
entries, i.e., in the "offsite" section.
"I feel like there's a constant Cuban Missile Crisis in my pants."
- President Clinton commenting on the Elian Gonzalez situation
------------------------------
From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: password generator
Date: Tue, 18 Apr 2000 18:48:23 -0400
Here are some comments on the code:
static int trng_bit(void)
{
    long a, b;

    b = 0;
    a = GetTickCount();
    while (a == GetTickCount())
        b ^= 1;
    return b&1;
}
Something seems wrong with this function. I don't know what exactly
GetTickCount() returns, but if it's something greater than or equal to 1,
you will always be returning 0. Here is why: in your while loop,
you XOR b with 1; b starts at 0, so the first time in you get
b = 0 XOR 1 = 1. Every other iteration, you simply do
b = 1 XOR 1, which will always give you b = 1
(you might want to do something like a logical AND instead).
Now, when you go out of the while loop, you return b&1,
which I believe you do so as to get the last bit (inverted), so
you will always return 0 if you ever went in the while loop.
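For reference, here is the same sampler rewritten with an explicit
iteration counter (a hypothetical variant, not the code from passwd.c);
the bit it returns is the parity of the number of loop iterations before
the tick count changes:
#include <windows.h>

static int trng_bit_count(void)
{
    unsigned long count = 0;
    DWORD start = GetTickCount();

    while (GetTickCount() == start)   /* spin until the tick advances */
        ++count;
    return (int)(count & 1);          /* parity of the iteration count */
}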
Anton
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list (and sci.crypt) via:
Internet: [EMAIL PROTECTED]
End of Cryptography-Digest Digest
******************************