Cryptography-Digest Digest #593, Volume #14      Tue, 12 Jun 01 06:13:01 EDT

Contents:
  Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, and   (Mok-Kong Shen)
  Re: Brute-forcing RC4 (S Degen)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY (Mok-Kong Shen)
  Re: Shannon's definition of perfect secrecy (Mok-Kong Shen)
  Re: One last bijection question (Mok-Kong Shen)
  Re: Best, Strongest Algorithm (gone from any reasonable topic) -  (Mok-Kong Shen)
  Re: IV (Mark Currie)
  Re: Unknown encryption (Daniel)
  Re: One last bijection question (Mark Wooding)
  Re: Unknown encryption (Terrence Koeman)
  Re: One last bijection question (Mark Wooding)

----------------------------------------------------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, and  
Date: Tue, 12 Jun 2001 08:51:36 +0200



"Douglas A. Gwyn" wrote:
> 
> Mok-Kong Shen wrote:
> > My point was that the stuff of Whitehead and Russell
> > is not wrong (mathematically).
> 
> What was wrong was the idea that such an approach would
> provide a firm logical foundation for all of mathematics.

I said 'the stuff is not wrong', meaning that there is
no fault (no wrong proof) in it, i.e. the results in the
book are correct, no more and no less. In the part that
you snipped, I already said that these authors had the
ambitious goal of, so to speak, reducing all mathematics
to logic, but that goal failed.

One could say that Whitehead and Russell pursued the
'wrong' goal and failed. But then which goal was/is
instead 'the' 'right' one? Could anyone say? I am
not sure that one can. In many research projects one
is even uncertain at the beginning whether any useful
results will come out at all. Do you think there are
always grounds for blame when a scientist fails to
achieve, or fully achieve, his goal? (Afterwards one
is always much, much 'cleverer', for sure.)

> 
> A major motivator was the hope that thereby all antinomies
> could be eliminated.  But the mechanism introduced for
> that purpose was awkward and unnatural.  Today we take
> different approaches for such matters.  For example:
> Tightest form of antinomy:  "This statement is false."
> Is that statement true or false, or neither, or what?
> A simple, *stable* solution is to treat the truth value
> on a continuum, or in other words, to apply fuzzy logic;
> then the statement has a truth value of (true+false)/2,
> which is 0.5 using true=1 and false=0, or 0 using
> true=1 and false=-1.  (There is a theorem to the effect
> that this approach solves all logical antinomies.)
> There is actually a connection with cryptanalysis lurking
> in this approach, in that one can treat Boolean variables
> as fuzzy; then a *mutually contradictory* or "incorrect"
> (according to "hard" logic) assignment of values to
> variables no longer stymies further solution.  (Think
> eigenvalue convergence, etc.)

There are in fact many kinds of logic. Fuzzy logic
is only one among those many, though thanks to practical
applications many engineers are nowadays also quite
knowledgeable about it. There is nothing that could be
regarded as 'the' logic. The diversity is very high,
and only comparatively few real specialists in logic
can survey the entire field. (I was told so by a
person who studied math and apparently has delved
quite a bit into logic.) My own knowledge of logic is
very meager. (Except for some familiarity with automated
deduction, my current knowledge of logic is practically
nil.) Hence I have a possibly very dumb question:
Does the theorem you mentioned have essential
implications for the other subfields of logic, or is it
limited in significance to fuzzy logic? If the
former, in what sense/manner would it have an impact?
And could you also give a pointer (book, page number)
to the theorem?
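
The fuzzy resolution of the liar sentence quoted above can be
illustrated numerically (my own sketch, not from any text cited in the
thread): the sentence asserts that its truth value t equals 1 - t, and
a damped update converges to the stable value 0.5.

```python
# Illustration (my own, hypothetical): the liar sentence "this statement
# is false" asserts that its fuzzy truth value t equals 1 - t.  A naive
# update t <- 1 - t oscillates between 0 and 1; a damped update (move
# halfway toward the negation) settles at the stable point t = 0.5.
t = 0.0
for _ in range(20):
    t = 0.5 * t + 0.5 * (1.0 - t)   # average current value with its negation
print(t)  # 0.5
```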

> 
> > There is an analogous case, namely that of the legendary
> > Nicolas Bourbaki (passed away a few years ago officially).
> 
> ? Bourbaki was the pseudonym of a group of mathematicians.

I had hoped that my words 'legendary' and 'officially'
(in connection with the decease) above made clear
that the name isn't that of a real person. Certain details
about the group are sometimes told for interest, but
that's certainly not relevant here.

> 
> > He attempted to axiomatize the whole of mathematics
> > but failed. But that does not mean that anything he
> > wrote is wrong.
> 
> I don't think that is the thrust of Bourbaki.  As I see it,
> it was to provide rigorous developments of the "standard"
> content of mathematics.

As far as I know, Bourbaki wanted to somehow encompass ALL
fields of math from a certain common (unified) structural
point of view. But some new (some still developing) subfields
apparently couldn't readily be made to fit such
a common scheme, so he finally had to give up. I am not a
mathematician and haven't closely followed the issue, so
I can't guarantee that this view is entirely correct. But I
strongly believe that his failure to achieve his goal came
from being too ambitious (having set too high a goal).

I don't think that I understand your notion of 'standard 
content' of math. In which sense is an 'arbitrary' 
subfield (or materials of a subfield) of math 'standard'? 

Any contemporary branch of math, classical or modern,
theoretical or applied, is in my view rigorously founded
and developed. (There is no snake oil as in crypto.)
So rigor has certainly always been there, whether there
was a Bourbaki or not. (He strictly adhered to his own
scheme of treatment, though.) I think it was basically
the inherent difficulty of (so to say) putting 'everything'
over a 'common denominator' that caused Bourbaki to
fall short of his goal. Perhaps some mathematicians
in our group would like to say something on this topic.

M. K. Shen
=========================
http://home.t-online.de/home/mok-kong.shen

------------------------------

From: S Degen <[EMAIL PROTECTED]>
Subject: Re: Brute-forcing RC4
Date: Tue, 12 Jun 2001 09:06:25 +0200



Trei Family wrote:
> 
> It sounds like this guy wants to brute-force the old 'export'-strength SSL.

Actually, had you read my post (probably 3 posts down below), you would
have known what I am referring to.

> 
> This was done several times quite a few years ago, back in the mid-90s.
> 
> The fastest crack was under 30 hours, using about 300 PCs, mostly in the
> ~90MHz range.
> 
> This was the first of the publicity attacks demonstrating the ludicrous nature
> of the US export regulations.
> 
> Peter Trei
> -------------------
> 
> Joseph Ashwood wrote:
> 
> > There was a mention a year and a half ago on this group of using a small
> > Beowulf cluster of overclocked Celerons (at the time this would bring it to
> > approximately the 500MHz mark), and being able to brute force RC4-40 in a
> > few days for $1200; with proper optimization I think it could be lowered.
> >
> > Let's see
> > Load takes L1 clocks
> > Store takes S1 clocks
> >
> > In each iteration of the key schedule there are 3 loads and 2 stores. 2
> > of the 3 loads can be done simultaneously and highly parallelized, so the
> > throughput there can be reduced to say 1.5 cache clocks per value. The
> > stores are fully async on modern CPUs, so those will take only 1 clock each.
> >
> > You've got about 6.5 clocks for each iteration of the RC4 key schedule.
> > There are 256 iterations for the entire key schedule so that's 1664 clocks
> > for initialization. RC4 can generate a byte in 4 clocks so we'll just assume
> > that, the likelihood of having to go beyond 2 bytes is negligible and
> > certainly doesn't compare to the key setup, so we'll pretend we only have to
> > generate 2 bytes, that's 8 clocks. The value for comparison can be stored in
> > a register so we don't have to load it, RC4 output will be to a register, so
> > we simply compare them, that's an additional 1 clock. So we've got a total
> > of 1664+8+1 clocks for each key. That's 1673 clocks per key, or 299,000
> > keys/sec, i.e. 3.6 million seconds for the full search (worst case),
> > that's 42 days. Normally it would finish in 21 days; 3 systems could be
> > parallelized on this to finish in a week on average. This is assuming a
> > very carefully optimized RC4
> > implementation by someone who is exceptionally good at writing assembly,
> > with a reasonable assembly coder, you're looking at twice as long, minimum,
> > and a compiler using pure C would probably offer better results.
> >                                 Joe
> >
> > "S Degen" <[EMAIL PROTECTED]> wrote in message
> > news:[EMAIL PROTECTED]...
> > >
> > > How much time would it take to brute-force a 40-bit RC4 key? (Of course
> > > depending on the processor speed, but let's say a PIII 500)
> > >
> > > This is the case:
> > > You have a 128-bit (ASCII) text, and the enciphered version of it. This
> > > version is enciphered with a 64-bit secret key, but of those 64 bits, 24
> > > bits are known. (Leaving 40 unknown bits)
> > >
> > > I would like to know how long it would approximately take to calculate
> > > the 40 bit secret key.
> > >
> > > (Worst case scenario)
> > >
> > >
> > > Thank you if you can help me.

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
Date: Tue, 12 Jun 2001 09:13:00 +0200



John Savard wrote:
> 

> I would tend to think that the basic concepts of information theory
> have now become integral to our culture, and as natural as the air we
> breathe to people with any degree of interest in communications,
> electronics, or data processing and some degree of mathematical
> background. I can hardly, therefore, believe they are being kept
> secret.
> 
> It is not surprising, however, that today cryptography is concerned
> mainly with an area about which Shannon said little, other than to
> give it a name: the work factor. Particularly as the extreme utility
> (and practicality, and convenience) of the 'public-key' methods has
> made them central to most modern employment of cryptographic
> techniques, despite the fact that their security, in the
> information-theoretic sense, is precisely nil.

But if something that is practically secure has
precisely nil security in the information-theoretic
sense, isn't that a big contradiction of intuition/
common sense? How could the theory nonetheless
gain acceptance? Could you give references to materials
on that topic? Thanks.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Shannon's definition of perfect secrecy
Date: Tue, 12 Jun 2001 09:22:25 +0200



wtshaw wrote:
> 
> Mok-Kong Shen<[EMAIL PROTECTED]> wrote:
> 
> > One may add additional layers to increase the complexity
> > for the opponent, but that's not necessary, if the
> > encryption is otherwise secure. Consider the normal case
> > of business letters. They have each sufficient header
> > informations (including page numbers), such that, if the
> > pages of several letters get mixed up, they can be
> > separated. What I said is that one could always have
> > the headers included in the encryption processing and
> > that one has a number messages concatenated with the
> > result sent as a number of 'records' of some fixed
> > constant length. (The last message, if not urgent, could
> > be sent only partly, with the remaining sent on the next
> > day, say.)

> Yes, all this works if care is given to allowing at least two layers.
> Such a continuous system should not be encumbered by cross block
> chaining.   The choices of ciphers then narrows to those with better
> security, need I suggest one that can do meet the requirements as I reject
> those that can't.

I am afraid that there is some misunderstanding between
us. What I meant is: instead of sending bit strings
of length n1, n2, ... nm, one sends one big string that
is their concatenation. Any chaining or other techniques
that could be applied to the smaller strings could be
applied just as well to the big string, couldn't they?

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: One last bijection question
Date: Tue, 12 Jun 2001 09:30:26 +0200



Tim Tyler wrote:
> 
> Mark Wooding <[EMAIL PROTECTED]> wrote:
> : Nicol So <[EMAIL PROTECTED]> wrote:
> 
> :> A comment on the terminology: the range of a function f is the image of
> :> the domain under f. The codomain of a function is a (not necessarily
> :> proper) superset of its range.
> 
> : This isn't the terminology I'm familiar with.  I've always used the
> : terms `range' and `image' to mean what you're calling the `codomain' and
> : `range' respectively.  I think these names were standard in the UK when
> : I learned this stuff.
> 
> I'm in much the same boat - possibly for the same reason.  Can anyone
> describe the difference between the range of a function and its co-domain?

These terms are explained in most textbooks on algebra, I
suppose. BTW, for terminology questions I find it mostly
very practical to consult a good dictionary/encyclopedia of
math.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - 
Date: Tue, 12 Jun 2001 09:34:03 +0200



Tim Tyler wrote:
> 

> It appears that Shannon's work didn't address the case of finite
> strings being encrypted with an OTP.
> 
> His reference to an OTP was in the context of infinite messages.

Please note the recent follow-up by Dennis Ritchie. Perhaps
you would like to discuss it with him in some detail.

M. K. Shen

------------------------------

Subject: Re: IV
From: [EMAIL PROTECTED] (Mark Currie)
Date: 12 Jun 2001 08:14:57 GMT

Hi,

Sorry, I may have missed a discussion on this, but how does CTR compare with CBC
from a security perspective?

Mark

In article <gx9V6.87876$[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] says...
>
>
>"Cristiano" <[EMAIL PROTECTED]> wrote in message
>news:9g37mj$ot7$[EMAIL PROTECTED]...
>> "Tom St Denis" <[EMAIL PROTECTED]> ha scritto nel messaggio
>> news:ii9V6.87860$[EMAIL PROTECTED]...
>> >
>> > "Cristiano" <[EMAIL PROTECTED]> wrote in message
>> > news:9g36s0$ol4$[EMAIL PROTECTED]...
>> > > I want to encrypt a file of L bytes with a block cipher in CBC mode
>> (like
>> > > RC6 or Rijndael).
>> > > For speed reasons I read N bytes at time (N>1024) and then I encrypt
>> this
>> > > block.
>> > > Every N bytes I use the IV to XOR the first 16 bytes of plaintext.
>> > > Is there some weakness in this way?
>> >
>> > Hmm?  This doesn't sound like CBC mode.
>> >
>> > How about using CTR mode instead?
>>
>> What is CTR?
>
>CTR is where instead of encrypting the message you encrypt a binary counter,
>then XOR the output against your message.
>
>Look up Wagner's website; he has a paper on the subject.
>
>Tom
>
>
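
Tom's one-line description of CTR can be sketched as follows. The
"block cipher" here is a toy stand-in (a hash truncated to 16 bytes,
used only to get keystream of the right shape), not RC6 or Rijndael;
the structure, not the cipher, is the point:

```python
import hashlib

def toy_block_encrypt(key: bytes, block: bytes) -> bytes:
    # Stand-in for a real 16-byte block cipher (RC6/Rijndael in the thread).
    # A hash of key||block is NOT a cipher; it just has the right shape here.
    return hashlib.sha256(key + block).digest()[:16]

def ctr_encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    # CTR mode: encrypt an incrementing counter block, XOR the result
    # (the keystream) against the message.  Decryption is the same call.
    out = bytearray()
    for i in range(0, len(msg), 16):
        counter = nonce + (i // 16).to_bytes(8, "big")   # 8B nonce + 8B counter
        keystream = toy_block_encrypt(key, counter)
        out.extend(c ^ k for c, k in zip(msg[i:i + 16], keystream))
    return bytes(out)

key, nonce = b"secret key", b"unique!!"   # a nonce must never repeat per key
ct = ctr_encrypt(key, nonce, b"attack at dawn, attack at dusk")
assert ctr_encrypt(key, nonce, ct) == b"attack at dawn, attack at dusk"
```

Since only the counter is fed through the cipher, encryption and
decryption are the same operation, and blocks can be processed in any
order or in parallel.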


------------------------------

From: Daniel <[EMAIL PROTECTED]>
Subject: Re: Unknown encryption
Date: 12 Jun 2001 10:17:24 CET

Look at the first 8-bit columns:

01001100 = ASCII(0x4C) = 'L'
01101111 = ASCII(0x6F) = 'o'
01101111 = ASCII(0x6F) = 'o'
01101011 = ASCII(0x6B) = 'k'
00100000 = ASCII(0x20) = ' '

This can't be a coincidence...
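
The extraction is easy to script; a minimal sketch over the first five
lines of the posted text:

```python
# Decode the high byte of each 16-bit line as ASCII (first 5 lines shown).
lines = ["0100110000010101", "0110111110101111", "0110111101100011",
         "0110101100101011", "0010000000011111"]
text = "".join(chr(int(line[:8], 2)) for line in lines)
print(text)  # -> "Look "
```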

/Daniel

Terrence Koeman wrote:
> 
> Hello, I've been trying to crack an encrypted textfile as part of a
> challenge, but I'm STUCK!
> 
> I think the lines starting with "0110100" have something to do with
> the key or something,  but that's about as far as I get :(
> 
> Maybe someone here can help me crack this text?
> 
> Regards,
> 
> Terrence Koeman
> 
> This is the text and there's a hint: "holes":
> 
> 0100110000010101
> 0110111110101111
> 0110111101100011
> 0110101100101011
> 0010000000011111
> 0111010010110101
> 0110100010100001
> 0111001010110010
> 0110111101101111
> 0111010100000010
> 0110011110101001
> 0110100000010111
> 0010000011001111
> 0111010010000101
> 0110100010101011
> 0110010110000001
> 0010000000101111
> 0110100010101000
> 0110111101100000
> 0110110001100101
> 0110010110001010
> 0111001110010100
> 0010000011001111
> 0111010010110000
> 0110111110101111
> 0010000000100011
> 0111001010000001
> 0110010110101010
> 0111011001110010
> 0110010110101010
> 0110000101101110
> 0110110000100001
> 0010000011001111
> 0111010010010101
> 0110100010101000
> 0110010101101010
> 0010000001000111
> 0111001110110100
> 0110010110100010
> 0110001101101101
> 0111001001110010
> 0110010101101001
> 0111010000010001
> 0010000011011111
> 0110100010101000
> 0110100110100110
> 0110010001101000
> 0110010000001011
> 0110010110000010
> 0110111001100110
> 0010000011001111
> 0111011110000111
> 0110100100000110
> 0111010010110101
> 0110100010011111
> 0110100111111111
> 0110111011111111
> 0010000011111111
> 0110110111111111
> 0110010111111111

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: One last bijection question
Date: 12 Jun 2001 08:55:28 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
> 
> 
> Mark Wooding wrote:
> > 
> > Nicol So <[EMAIL PROTECTED]> wrote:
> > 
> > > A comment on the terminology: the range of a function f is the image of
> > > the domain under f. The codomain of a function is a (not necessarily
> > > proper) superset of its range.
> > 
> > This isn't the terminology I'm familiar with.  I've always used the
> > terms `range' and `image' to mean what you're calling the `codomain' and
> > `range' respectively.  I think these names were standard in the UK when
> > I learned this stuff.
> 
> I don't know teaching in UK, but I suppose one doesn't
> use 'range' in lieu of 'domain' in the way you indicated.

You can't read.  I said that I was taught to use the word `range' in the
sense in which Nicol used the word `*co*domain'.

-- [mdw]

------------------------------

From: Terrence Koeman <[EMAIL PROTECTED]>
Subject: Re: Unknown encryption
Date: Tue, 12 Jun 2001 09:36:39 GMT


You're right!

I was looking too hard for holes ;)

Now I have:

01001100 4C "L"    00010101
01101111 6F "o"    10101111
01101111 6F "o"    01100011
01101011 6B "k"    00101011
00100000 20 " "    00011111
01110100 74 "t"    10110101
01101000 68 "h"    10100001
01110010 72 "r"    10110010
01101111 6F "o"    01101111
01110101 75 "u"    00000010
01100111 67 "g"    10101001
01101000 68 "h"    00010111
00100000 20 " "    11001111
01110100 74 "t"    10000101
01101000 68 "h"    10101011
01100101 65 "e"    10000001
00100000 20 " "    00101111
01101000 68 "h"    10101000
01101111 6F "o"    01100000
01101100 6C "l"    01100101
01100101 65 "e"    10001010
01110011 73 "s"    10010100
00100000 20 " "    11001111
01110100 74 "t"    10110000
01101111 6F "o"    10101111
00100000 20 " "    00100011
01110010 72 "r"    10000001
01100101 65 "e"    10101010
01110110 76 "v"    01110010
01100101 65 "e"    10101010
01100001 61 "a"    01101110
01101100 6C "l"    00100001
00100000 20 " "    11001111
01110100 74 "t"    10010101
01101000 68 "h"    10101000
01100101 65 "e"    01101010
00100000 20 " "    01000111
01110011 73 "s"    10110100
01100101 65 "e"    10100010
01100011 63 "c"    01101101
01110010 72 "r"    01110010
01100101 65 "e"    01101001
01110100 74 "t"    00010001
00100000 20 " "    11011111
01101000 68 "h"    10101000
01101001 69 "i"    10100110
01100100 64 "d"    01101000
01100100 64 "d"    00001011
01100101 65 "e"    10000010
01101110 6E "n"    01100110
00100000 20 " "    11001111
01110111 77 "w"    10000111
01101001 69 "i"    00000110
01110100 74 "t"    10110101
01101000 68 "h"    10011111
01101001 69 "i"    11111111
01101110 6E "n"    11111111
00100000 20 " "    11111111
01101101 6D "m"    11111111
01100101 65 "e"    11111111

Again the 'holes'.... I tried inverting the second column, as the last
5 bytes seem to be padding that should be '00000000'. I also tried XOR-ing
and OR-ing the first column with the second, but to no avail...

Any ideas on the second column?

Regards,

Terrence Koeman

On 12 Jun 2001 10:17:24 CET, Daniel <[EMAIL PROTECTED]> hoped
somebody would understand this:

>Look at the first 8-bit columns:
>
>01001100 = ASCII(0x4C) = 'L'
>01101111 = ASCII(0x6F) = 'o'
>01101111 = ASCII(0x6F) = 'o'
>01101011 = ASCII(0x6B) = 'k'
>00100000 = ASCII(0x20) = ' '
>
>This can't be a coincidence...
>
>/Daniel
>
>Terrence Koeman wrote:
>> 
>> Hello, I've been trying to crack an encrypted textfile as part of a
>> challenge, but I'm STUCK!
>> 
>> I think the lines starting with "0110100" have something to do with
>> the key or something,  but that's about as far as I get :(
>> 
>> Maybe someone here can help me crack this text?
>> 
>> Regards,
>> 
>> Terrence Koeman
>> 
>> This is the text and there's a hint: "holes":
[snip]

------------------------------

From: [EMAIL PROTECTED] (Mark Wooding)
Subject: Re: One last bijection question
Date: 12 Jun 2001 09:42:35 GMT

Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

> These terms are explained in most textbooks on algebra, I
> suppose. BTW, for terminology questions I find it mostly very
> practical to consult a good dictionary/encyclopedia of math.

The reason we're in this mess is that different books give different
definitions.  I suggest that, to avoid confusion in present discussions,
we avoid the ambiguous term `range' and stick to `codomain' and
`image'.
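
A small illustration of that convention (my own example, not mdw's):
for f(n) = n*n, the image is the set of values actually attained, and
may be a proper subset of the declared codomain.

```python
# Codomain vs. image for f(n) = n*n on a small domain.
domain = range(-3, 4)                  # the domain of f
codomain = set(range(-9, 10))          # declared target set; our choice
image = {n * n for n in domain}        # values actually attained: {0, 1, 4, 9}
assert image <= codomain and image != codomain   # a proper subset here
```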

-- [mdw]

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
