Cryptography-Digest Digest #194, Volume #12      Mon, 10 Jul 00 16:13:01 EDT

Contents:
  Re: Proposal of some processor instructions for cryptographical applications 
("Thomas Womack")
  Re: Steganographic encryption system (jungle)
  Re: Steganographic encryption system (Andrew McDonald)
  Re: key dependent s-boxes (Helger Lipmaa)
  Re: Steganographic encryption system ("Joseph Ashwood")
  Re: Proposal of some processor instructions for cryptographical applications 
("Kasper Pedersen")
  Re: Steganographic encryption system ("Bob Billing (AKA Uncle Bob)")
  Re: cray and time needed to attack (Bob Silverman)
  Re: Steganographic encryption system (John Winters)
  Re: Random Numbers (Herman Rubin)
  Re: cray and time needed to attack (Jerry Coffin)
  Re: Steganographic encryption system (Kaz Kylheku)
  Re: cray and time needed to attack (Paul Rubin)
  Re: Suggestions for crypto for constrained-memory/CPU computers? (Paul Rubin)

----------------------------------------------------------------------------

From: "Thomas Womack" <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical applications
Date: Mon, 10 Jul 2000 19:19:01 +0100


"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote

> Helger Lipmaa wrote:
>
> > Finite field GF(2^8) and GF(2^16) multiplication (modulo some
> > fixed irreducible polynomial) would be nice. May be also some
> > other operations (x->x^-1, e.g.).

Rob Harley knows a lot about speeding this sort of thing up with MMX, I
believe.
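
For reference, the GF(2^8) multiplication Helger mentions looks like this in
plain C. This is only a sketch, using the Rijndael polynomial
x^8 + x^4 + x^3 + x + 1 (0x11b) as the example modulus; any fixed
irreducible polynomial would do:

```c
#include <stdint.h>

/* Multiply two elements of GF(2^8) modulo x^8 + x^4 + x^3 + x + 1.
 * This shift-and-xor loop is the operation a dedicated instruction
 * would collapse into a single cycle. */
static uint8_t gf256_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    while (b) {
        if (b & 1)
            p ^= a;               /* add (xor) the current multiple of a */
        b >>= 1;
        /* multiply a by x, reducing mod the polynomial ("xtime") */
        a = (uint8_t)((a << 1) ^ ((a & 0x80) ? 0x1b : 0));
    }
    return p;
}
```

As a sanity check, {53} x {CA} = {01} in this field (the inverse-pair
example from the AES specification).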

> Maybe some support for multi-precision integer arithmetics would also be
> advantageous, not only for crypto but also for other scientific
applications.

What additional support do you need? x86 has (rather awkward) 32x32 -> 64;
Alpha has commands for top and bottom of a 64x64 product, ARM has top and
bottom of 32x32 (though this is really for doing 16.16 FP work), P4 can do
two 32x32->64 multiplies with one instruction.
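
In portable C the widening multiply is already expressible, and compilers
generally lower it to exactly the single instructions listed above (a
sketch):

```c
#include <stdint.h>

/* Portable 32x32 -> 64 widening multiply. On x86 this compiles to one
 * MUL; on machines with separate high/low instructions the two halves
 * below map to the corresponding instruction pair. */
static uint64_t mul32x32(uint32_t a, uint32_t b)
{
    return (uint64_t)a * (uint64_t)b;
}

/* Top and bottom words of the product, as exposed on Alpha/ARM. */
static uint32_t mul_hi(uint32_t a, uint32_t b)
{
    return (uint32_t)(mul32x32(a, b) >> 32);
}

static uint32_t mul_lo(uint32_t a, uint32_t b)
{
    return (uint32_t)mul32x32(a, b);
}
```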

Or are you more thinking of the idea of having a microcoded engine which
will go off and do hundreds of memory loads, operations and stores with a
single instruction - a vector machine, basically.

Tom



------------------------------

From: jungle <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,uk.comp.os.linux
Subject: Re: Steganographic encryption system
Date: Mon, 10 Jul 2000 14:34:33 -0400

your concept doesn't work ...
you can't create a system that works on the principle that ...

24 decrypts into 20 and/or 21 and/or 22 and/or 23 and/or ...
[ each number represents a file size in KB or MB or ... ]

phil hunt wrote:
> 
> I am thinking of developing a steganographic encryption system that
> will enable a particular cyphertext to be decrypted into two or more
> different plaintexts, depending on which key is used. (Provisionally
> named 'stes'). To be more precise:



------------------------------

From: [EMAIL PROTECTED] (Andrew McDonald)
Crossposted-To: comp.os.linux.development.apps,uk.comp.os.linux
Subject: Re: Steganographic encryption system
Date: Mon, 10 Jul 2000 19:39:36 +0100

On Mon, 10 Jul 2000 18:07:37 +0100,
phil hunt <[EMAIL PROTECTED]> wrote:
> I am thinking of developing a steganographic encryption system that
> will enable a particular cyphertext to be decrypted into two or more 
> different plaintexts, depending on which key is used.
[snip description]
>
> I have some questions:
>
>
> (1) does anything like this exist already?

The thought that comes to my mind first is basing it on Rivest's
Chaffing and Winnowing:
<http://theory.lcs.mit.edu/~rivest/chaffing.txt>
I'm fairly certain that people have done things with this (I think I
might have seen something on Freshmeat <http://freshmeat.net/>).
There have also been some developments from this suggested by others,
e.g. John McHugh's 'Chaffing at the Bit' from last year's Information
Hiding Workshop.
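
For readers unfamiliar with chaffing and winnowing, here is a toy sketch.
The keyed mixer below is only a stand-in for a real MAC such as HMAC, and
all names and constants are illustrative, not from Rivest's note:

```c
#include <stdint.h>

/* Toy keyed hash -- a placeholder for a real MAC (e.g. HMAC-SHA1). */
static uint32_t toy_mac(uint32_t key, uint32_t serial, uint8_t bit)
{
    uint32_t h = key ^ 0x9e3779b9u;
    h = (h ^ serial) * 0x85ebca6bu;
    h = (h ^ bit) * 0xc2b2ae35u;
    return h ^ (h >> 16);
}

struct packet { uint32_t serial; uint8_t bit; uint32_t mac; };

/* Sender: real data bits get valid MACs ("wheat"); chaff packets carry
 * the opposite bit with a bogus MAC, sent alongside the wheat. */
static struct packet make_wheat(uint32_t key, uint32_t serial, uint8_t bit)
{
    struct packet p = { serial, bit, toy_mac(key, serial, bit) };
    return p;
}

/* Receiver: winnow by keeping only packets whose MAC verifies.
 * Without the key, wheat and chaff are indistinguishable. */
static int winnow(const struct packet *p, uint32_t key)
{
    return p->mac == toy_mac(key, p->serial, p->bit);
}
```

Note that nothing is "encrypted" here in the legal sense: the plaintext
bits travel in the clear, mixed with chaff; only authentication is used.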

> (I am aware of stegFS, which is a steganographic file system for Linux.

Oh, I think I might have heard of that one. ;-)

> My system is different in that it encrypts files not file systems, so
> can be used under any OS, and also in StegFS, keys have "levels of secretness":
> if you decrypt with one key, you automatically have access to all the data
> encrypted with "less secret" keys -- in stes, all keys are as secret as
> each other).

Later versions are more flexible in this area (e.g. have a look at
stegfsctrl). I haven't done any work on it for a while. I probably
will when I have time.

> (2) What's a good private-key cypher algorithm to use?

I'm sure that everyone on sci.crypt will have their own opinion.
Blowfish is reasonably well respected. I'd also have a look at
implementing a framework that allows AES candidate algorithms to be
plugged in as needed. Brian Gladman's implementations are probably
worth looking at:
<http://www.btinternet.com/~brian.gladman/cryptography_technology/aes/index.htm>


Andrew
-- 
Andrew McDonald
[EMAIL PROTECTED]
http://www.mcdonald.org.uk/andrew/
OpenPGP DSA/ElG 1024/2048  3EDE 0FBC 6138 DCA0 FC8E C508 FCBB A9C8 F2DE ED36

------------------------------

From: Helger Lipmaa <[EMAIL PROTECTED]>
Subject: Re: key dependent s-boxes
Date: Mon, 10 Jul 2000 20:51:06 +0300

Mack wrote:

> Vladimir Castro Alves [EMAIL PROTECTED]
>
> >
> >Hi,
> >
> >I am looking for references on key dependent s-boxes
> >
> >and their robustness compared to "conventional" s-boxes.
> >Thanks for any hint on the subject.
> >Vladimir Castro.
> >[EMAIL PROTECTED]
> >
> >

There is some literature on the subject. One publication that I remember
is Sandy Harris, Carlisle M. Adams: Key-Dependent S-Box Manipulations.
Selected Areas in Cryptography 1998: pp. 15-26.

Helger


------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: Steganographic encryption system
Date: Mon, 10 Jul 2000 11:51:45 -0700
Crossposted-To: comp.os.linux.development.apps

[uk.comp.os.linux removed due to newsserver limitations, please feel free to
re-add them in replies]

As was said before, this sounds very similar to Rivest's chaffing and
winnowing, for which there are some rather interesting ways of speeding it
up. I would like to suggest one change: don't structure your file so that,
given a key, you can immediately determine whether there is useful
information, because doing that slightly compromises the security. Instead,
I would suggest applying an All-Or-Nothing Transform to the files (with
padding) before performing your stego.

There are actually several things similar, but I'm not immediately aware of
anything identical.
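
A toy sketch of the kind of AONT Joe suggests, in the style of Rivest's
package transform. The 32-bit mixing function here stands in for a real
block cipher, and the constants are illustrative only:

```c
#include <stdint.h>
#include <stddef.h>

#define AONT_K0 0x87654321u  /* fixed, publicly known transform key */

/* Toy 32-bit "block cipher" -- a placeholder for e.g. AES. */
static uint32_t toyE(uint32_t k, uint32_t x)
{
    x ^= k; x *= 0x9e3779b1u; x ^= x >> 16; x *= 0x85ebca6bu; x ^= x >> 13;
    return x;
}

/* Package transform: mask each block with a fresh package key kp, then
 * append kp xored with a hash of all masked blocks. No block of the
 * output reveals anything until ALL blocks are present. */
static void aont_encode(const uint32_t *m, size_t n, uint32_t kp, uint32_t *out)
{
    uint32_t acc = kp;
    for (size_t i = 0; i < n; i++) {
        out[i] = m[i] ^ toyE(kp, (uint32_t)i + 1);
        acc ^= toyE(AONT_K0, out[i] ^ ((uint32_t)i + 1));
    }
    out[n] = acc;   /* out holds n+1 words */
}

static void aont_decode(const uint32_t *c, size_t n, uint32_t *m)
{
    uint32_t kp = c[n];
    for (size_t i = 0; i < n; i++)
        kp ^= toyE(AONT_K0, c[i] ^ ((uint32_t)i + 1));  /* recover kp */
    for (size_t i = 0; i < n; i++)
        m[i] = c[i] ^ toyE(kp, (uint32_t)i + 1);        /* unmask */
}
```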
                Joe

"phil hunt" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I am thinking of developing a steganographic encryption system that
> will enable a particular cyphertext to be decrypted into two or more
> different plaintexts, depending on which key is used. (Provisionally
> named 'stes'). To be more precise:
>
>    $ stes --create ctf hello myletter.txt goodbye apicture.jpg
>
> This creates cypher text file <ctf> which contains an encrypted form of
> the data in the plaintext files <myletter.txt> and <apicture.jpg>. The
> data in <myletter.txt> is accessible by the key "hello", the data in
> <apicture.jpg> by the key "goodbye".
>
> Note that if you have the file <ctf>, there is no way of knowing how
> many different files are encrypted inside it; Cyphertext files will
> contain 'padding' so you can't infer contents by length, either.
>
>    $ stes --decode ctf goodbye xxx.jpg
>
> This looks in the file <ctf> and finds out whether there is any data
> in it using the key "goodbye". if there is data, it is put into the
> file <xxx.jpg>. (in this example, there is data, i.e. the contents of file
> <apicture.jpg>. (Note that stes doesn't know what a jpg file is, or anything
> else about file types))
>
>    $ stes --alter ctf hello anotherfile.doc
>
> This looks in the file <ctf> and finds if there is any data for the
> key "hello". In this example, there is, so the old data stored under
> the key "hello" is lost, and replaced with the new data from
> <anotherfile.doc>. Note that if <ctf> didn't already have the key "hello",
> it could not be added; keys can only be put into a cyphertext file when
> it is created.
>
> I have some questions:
>
>
> (1) does anything like this exist already?
>
> (I am aware of stegFS, which is a steganographic file system for Linux.
> My system is different in that it encrypts files not file systems, so
> can be used under any OS, and also in StegFS, keys have "levels of secretness":
> if you decrypt with one key, you automatically have access to all the data
> encrypted with "less secret" keys -- in stes, all keys are as secret as
> each other).
>
>
> (2) What's a good private-key cypher algorithm to use?
>
> A good algorithm will have these features:
>
> * strong encryption -- no-one can crack it (what key size do I need for this?)
> * not encumbered by patent or similar restrictions
> * an open source implementation, preferably one that is well documented
> * popular. The more eyeballs looking at it, the less security holes will
>   be in it.
> * efficient on processor time (this is not as important as the other
>   requirements, it's an "all other things being equal" thing)
>
> Would Blowfish be a good algorithm to use? If so, which implementation?
> (Bearing in mind that my program will be written in C++ and aims to be as
> portable as possible across OSes, so any code that relies on, for example,
> endianness would be non-optimal).
>
> --
> ***** Phil Hunt ***** send email to [EMAIL PROTECTED] *****
> Moore's Law: hardware speed doubles every 18 months
> Gates' Law: software speed halves every 18 months



------------------------------

From: "Kasper Pedersen" <[EMAIL PROTECTED]>
Crossposted-To: comp.arch
Subject: Re: Proposal of some processor instructions for cryptographical applications
Date: Mon, 10 Jul 2000 18:57:54 GMT


"Mok-Kong Shen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> Well, if I don't err, for a given particular permutation one has to
> have a look-up table of 2^32 words as an array. Isn't that fairly huge?
> And one has to compute the contents of these, even if done only once
> in a key dependent way.

Oops. Always remember to use the correct notation in a given context.
I meant your 32-bit to 32-bit bit-router function (the one with 32!
possible functions).
Usually I implement such a function as 4 32-bit-wide arrays of 256
elements each, as with DES, where the expansion stage can be pushed up
into widened SE-boxes.

.........
> The context in the above case was computing the parity. Does
> Data[index] of your code refer to a single bit? Thanks.

Data[index] in this case is usually a full word wide.
Exactly what parity do you want? (expression please)
Computing parity is already highly efficient in asm on modern micros, but
not from C.
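
For reference, the usual portable-C fallback for word parity is
xor-folding. This is only a sketch; a hardware parity flag or popcount
instruction beats it, which is Kasper's point:

```c
#include <stdint.h>

/* Parity of a 32-bit word by xor-folding: fold the high half onto the
 * low half repeatedly until one bit remains. O(log n) operations,
 * no lookup table. */
static unsigned parity32(uint32_t x)
{
    x ^= x >> 16;
    x ^= x >> 8;
    x ^= x >> 4;
    x ^= x >> 2;
    x ^= x >> 1;
    return x & 1u;
}
```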

/Kasper



------------------------------

From: "Bob Billing (AKA Uncle Bob)" <[EMAIL PROTECTED]>
Crossposted-To: comp.os.linux.development.apps,uk.comp.os.linux
Subject: Re: Steganographic encryption system
Date: Mon, 10 Jul 2000 19:58:18 +0100

jungle wrote:
> 
> your concept doesn't work ...

 This is entirely false. Any number of messages may be embedded if the
cyphertext is long enough.

-- 
I am Robert Billing, Christian, inventor, traveller, cook and animal
lover, I live near 0:46W 51:22N.  http://www.tnglwood.demon.co.uk/
"It burned me from within. It quickened; I was with book as a woman
is with child." CS Lewis - Till we have faces, Ch 21.

------------------------------

From: Bob Silverman <[EMAIL PROTECTED]>
Subject: Re: cray and time needed to attack
Date: Mon, 10 Jul 2000 18:46:10 GMT

In article <[EMAIL PROTECTED]>,
  Jerry Coffin <[EMAIL PROTECTED]> wrote:
> In article <8kb67e$l77$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> The cycle time of the main memory IS very significant to the speed in
> this case.  I'm not sure why you're talking about vector vs. scalar
> processors -- nobody (to my knowledge) has suggested that this should
> be one in a scalar machine anyway.

You did. i.e. the Alpha.



> I'd already said that the overall improvement available was roughly 4
> orders of magnitude,

You have *claimed* this, but nothing you have said has substantiated it.
And I doubt if anyone here believes that a 10,000 fold speed improvement
is possible with existing technology.


> I never said such a thing was likely to be built.  I have VERY
> carefully said that I was talking about what was theoretically
> possible with current technology,
> NOT what was likely to be built or
> was economically feasible.  How much more simply will I have to make
> this statement before you understand it?

You have made many "theoretical" claims without substantiating
any of them.  You are selling vaporware. I understand the snake oil you
are selling, but I am not buying any.

>
> > (1) When working mod 2 one can add 64 columns at once via a single
> > xor.  Working mod p means doing 64 separate additions mod p. And
> > mod p additions are more than twice as slow as an xor.
>
> Doing 64 separate additions does NOT mean taking 64 times longer
> unless there's a dependency between the additions.  In case you'd
> forgotten the fact, the whole basic notion of a vector machine is to
> allow doing things like this in parallel.

You really do not understand the algorithm do you?  You pack
columns of the matrix (mod 2) into individual bits of each word.
Thus, the first *word*  in *each* row of the matrix holds 64 columns
(for a 64 bit machine).  Thus, you can add 64 columns at once mod 2 with
a single xor.  You then vectorize this over the rows. A Cray with
a length 64 vector pipe does  64 x 64 additions in one vectorized xor.

If one works mod p,  you can no longer do this. To add mod p
one needs to do 64 separate additions instead of a single xor.
And the addition is no longer an xor of 2 bits, but an addition of
two integers, then a reduction mod p.  With a length 64 vector pipe
one adds 64  64-bit integers,  then a second vector instruction
reduces the sums mod p.  One is reduced from 64 x 64 in a single
vector instruction to  64 in a single instruction, followed by another
64 in a single instruction.  This is a factor of 128 SLOWER.
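
The packing described above can be sketched in C (columns packed one per
bit for mod 2, one per word for mod p; the vectorization over rows is left
to the machine):

```c
#include <stdint.h>
#include <stddef.h>

/* Mod-2 rows with 64 matrix columns packed per 64-bit word:
 * one xor adds 64 columns at once. */
static void row_add_mod2(uint64_t *dst, const uint64_t *src, size_t nwords)
{
    for (size_t w = 0; w < nwords; w++)
        dst[w] ^= src[w];          /* 64 column additions per operation */
}

/* Mod-p rows need a full word per column, plus a reduction step:
 * 64x more operations per block of columns, and each is costlier. */
static void row_add_modp(uint64_t *dst, const uint64_t *src,
                         size_t ncols, uint64_t p)
{
    for (size_t i = 0; i < ncols; i++)
        dst[i] = (dst[i] + src[i]) % p;
}
```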

Do us all a favor. Study the algorithm. Study how it performs on
different architectures. Implement it. Then try it out.

Maybe then you will be able to intelligently discuss its performance.


> > (2) You have to solve the matrix 33 times, then paste the results
> > together with the CRT.
>
> Yes, and those 33 solutions can also be done in parallel, can't they?

Now you need 33 machines, with 33x the total memory (already
unreachable with today's technology).  Exactly how will one
manufacture 3 Petabytes of this .3 nsec memory of yours with
existing technology?

The fact that we can manufacture this memory does not mean that
it is possible to manufacture 3 Petabytes of the stuff.


>
> In reality, nearly NONE of these factors needs to apply at all.

You need a serious reality check.

>  If
> cost is not an object (and I've made it VERY clear from the beginning
> that we're talking ONLY about what's theoretically technically
> possible, NOT what's economically feasible)

Now you are assuming that resources are unbounded.

It is not theoretically possible to build an unlimited amount of
hardware with a finite GNP. Theoretically or not, if resources
(i.e. money) are bounded, then what we might build is bounded.


> > Therefore, a machine 1000 times faster than the CRAY will only
> > take 240 million days to solve it.
> >
> > This is what I meant when I said that you couldn't do arithmetic.
>
> Doing math usefully consists of two phases: first deciding what math
> to do, and then carrying out the calculations.  You've done the
> second flawlessly, but failed utterly in the first regard.

On the contrary.  I have a sound understanding of the algorithm
and how it works and what its hardware requirements are.  You
understand none of this.


>  You've started with grossly incorrect assumptions

"Proof by assertion"?  The fact that you say so, does not make it so.
No one else who has posted here agrees with you.


> You've argued about things like whether anybody's likely to ship a
> machine faster than a current Cray in the near future.  Given that I
> don't really think Cray-class machines have been profitable in the
> last decade or so, I agree completely that it's unlikely to happen.
>
> That has NOTHING to do with what's theoretically possible though.  If
> Intel (or AMD, or whoever) decided to build a 128-bit, 256-way vector
> machine using .3 ns SRAM for its main memory, they could do so.

Where would they get:

(1) The amount of needed memory? How much manufacturing capacity do
you think Intel has for this supposed .3 nsec memory?
(2) The money to pay for it?  You say "If Intel...decided to..., they
could do so."  This is false.  They COULDN'T because they don't have
the resources. Manufacturing capacity is BOUNDED.  You assume
it is not.

> They're NOT going to do it, because it makes no sense economically,
> but they doesn't change the fact that the technology exists to make
> it theoretically possible.

You can't untie technology development from economics.

===================================================================
And even
if one *could* build a machine with 100 Tbytes that is 1000x faster
than a Cray,  it will still take ~650,000 YEARS to solve the matrix.
I call a computation on that scale "theoretically impossible with
existing technology". I think most would agree.
===================================================================


--
Bob Silverman
"You can lead a horse's ass to knowledge, but you can't make him think"


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED] (John Winters)
Crossposted-To: comp.os.linux.development.apps,uk.comp.os.linux
Subject: Re: Steganographic encryption system
Date: 10 Jul 2000 19:59:28 +0100

In article <[EMAIL PROTECTED]>, jungle  <[EMAIL PROTECTED]> wrote:

[re-arranged in conventional order]
>
>phil hunt wrote:
>> 
>> I am thinking of developing a steganographic encryption system that
>> will enable a particular cyphertext to be decrypted into two or more
>> different plaintexts, depending on which key is used. (Provisionally
>> named 'stes'). To be more precise:
>
>your concept doesn't work ...
>you can't create system that will work on principle that ...
>
>24 decrypts into 20 or/and 21 or/and 22 or/and 23 or/and ...
>[ number represents file size in kb OR mb OR ... ]

You can if you use OTP, but OTOH your "keys" would need to be the same
size as your encrypted texts.

John
-- 
John Winters.  Wallingford, Oxon, England.

The Linux Emporium - the source for Linux CDs in the UK
See http://www.linuxemporium.co.uk/

------------------------------

From: [EMAIL PROTECTED] (Herman Rubin)
Subject: Re: Random Numbers
Date: 10 Jul 2000 14:08:08 -0500

In article <[EMAIL PROTECTED]>,
John Savard <[EMAIL PROTECTED]> wrote:
>On Mon, 10 Jul 2000 00:18:55 +0100, "David Hyde"
><[EMAIL PROTECTED]> wrote, in part:

>>Sorry I left out some detail about the random bit stream.  The bit stream
>>has been generated from a white noise source I built from a zener diode, a
>>couple of op-amp stages and a comparator.  Although the bits produced are
>>independent of each other, there is a bias that can't be removed by
>>adjusting the dc mean level of the noise.  Therefore I was asking if there
>>is a way of processing the bit stream to produce 16-bit random numbers with
>>a uniform distribution?

>One important thing to note is the information content of these random
>bits, so you know how many of them are needed to produce a 16-bit
>random number. But the simplest rule for producing truly random bits
>from biased bits is to use this table:

>00 : discard
>01 : 0
>10 : 1
>11 : discard

This method is only "correct" if there is no dependence
and the probability of a 1 is constant.  If this is not
the case, the results can be worse than doing nothing
to the input.  

Independence is MUCH harder to approximate than lack of
bias.  It is also much harder to detect.
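
The 00/01/10/11 table quoted above is the von Neumann extractor; a minimal
C sketch, valid only under the independence and constant-bias assumptions
Herman states (bits are stored one per byte here for clarity):

```c
#include <stddef.h>

/* Von Neumann debiasing: consume input bits in non-overlapping pairs.
 * 01 -> output 0, 10 -> output 1, 00 and 11 -> discard.
 * Returns the number of output bits produced. Correct only if input
 * bits are independent with a constant (possibly unknown) bias. */
static size_t vn_extract(const unsigned char *in, size_t nbits,
                         unsigned char *out)
{
    size_t nout = 0;
    for (size_t i = 0; i + 1 < nbits; i += 2) {
        if (in[i] != in[i + 1])
            out[nout++] = in[i];   /* (0,1) -> 0, (1,0) -> 1 */
    }
    return nout;
}
```

Note the cost: with bias p, each pair yields an output bit with
probability 2p(1-p), so at best one output bit per four input bits.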


-- 
This address is for information only.  I do not claim that these views
are those of the Statistics Department or of Purdue University.
Herman Rubin, Dept. of Statistics, Purdue Univ., West Lafayette IN47907-1399
[EMAIL PROTECTED]         Phone: (765)494-6054   FAX: (765)494-0558

------------------------------

From: Jerry Coffin <[EMAIL PROTECTED]>
Subject: Re: cray and time needed to attack
Date: Mon, 10 Jul 2000 13:39:28 -0600

In article <8kd5lc$2d9$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> In article <[EMAIL PROTECTED]>,
>   Jerry Coffin <[EMAIL PROTECTED]> wrote:
> > In article <8kb67e$l77$[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> > The cycle time of the main memory IS very significant to the speed in
> > this case.  I'm not sure why you're talking about vector vs. scalar
> > processors -- nobody (to my knowledge) has suggested that this should
> > be one in a scalar machine anyway.
> 
> You did. i.e. the Alpha.

Not so -- I gave the Alpha as an example of a machine in current 
production that has memory of approximately this speed.  I did NOT at 
any time suggest that it was an appropriate machine to use for the 
final job.
 
> > I never said such a thing was likely to be built.  I have VERY
> > carefully said that I was talking about what was theoretically
> > possible with current technology,
> > NOT what was likely to be built or
> > was economically feasible.  How much more simply will I have to make
> > this statement before you understand it?
> 
> You have made many "theoretical" claims without substantiating
> any of them.  You are selling vaporware. I understand the snake oil you
> are selling, but I am not buying any.

What don't you believe?  Do you disbelieve that the Alpha has memory 
as fast as I've said?  Do you believe that what's in current 
production is as fast as anything anybody knows how to do?

Neither of these even bears a resemblance to being sensible...

> If one works mod p,  you can no longer do this. To add mod p
> one needs to do 64 separate additions instead of a single xor.
> And the addition is no longer an xor of 2 bits, but an addition of
> two integers, then a reduction mod p.  With a length 64 vector pipe
> one adds 64  64-bit integers,  then a second vector instruction
> reduces the sums mod p.  One is reduced from 64 x 64 in a single
> vector instruction to  64 in a single instruction, followed by another
> 64 in a single instruction.  This is a factor of 128 SLOWER.

Yes, using an existing machine.  You're carefully ignoring the fact that 
the Cray does not define the only way a vector machine can or could 
be built.  If one wants to build a vector machine that executes an 
instruction on 64x64 (4096) elements at a time, there's absolutely NO 
reason it can't be done.  Increasing that to 8192 or 16384 elements 
is entirely feasible too.

In short, you're continuing to look at things through the VERY narrow 
tunnel vision of how it would execute on one specific existing 
machine.  It's a darned good machine, but it's NOT the only possible 
way to build a high-end vector processor.
 
> Now you need 33 machines, with 33x the total memory (already
> unreachable with today's technology).  Exactly how will one
> manufacture 3 Petabytes of this .3 nsec memory of yours with
> existing technology?

Very carefully, obviously.  If Samsung (far from the biggest or 
highest-tech manufacturer around) can do it in small quantities, 
others can do it in substantially larger quantities, can't they?
 
> The fact that we can manufacture this memory does not mean that
> it is possible to manufacture 3 Petabytes of the stuff.

Unless you're going to postulate that there's not enough silicon on 
planet earth to do so (in which case you're drastically wrong) yes, 
being able to produce small quantities DOES mean that it's possible to 
produce larger quantities.

> > In reality, nearly NONE of these factors needs to apply at all.
> 
> You need a serious reality check.

No, you need a serious reading lesson.  I originally said that the 
job was bordering on theoretically possible using technology 
available now or in the near future.  You contradicted that, saying 
"not even close."  You've spent the (roughly) week since then TRYING 
to force me to argue that it was economically feasible, could be done 
with machines that were likely to be built, etc., ad nauseam.  None of 
these is what I said.  I was very specific in the first place, and 
you've insisted on arguing against something I never said in an 
attempt at justifying a position that was clearly indefensible.
 
> >  You've started with grossly incorrect assumptions
> 
> "Proof by assertion"?  The fact that you say so, does not make it so.
> No one else who has posted here agrees with you.

I made the original assertion about it's being theoretically 
possible.  Your claims have all related to what's reasonable, what's 
possible using current machines, how the current Cray works, and so 
on.  Can you show any reason for anybody to believe that the way the 
current Cray works is the only way a vector machine could work?  Of 
course you can't because we all know it's simply not true.

The fact that you now feel obliged to appeal to popularity rather 
than logic shows what we both know: your original assertion was 
incorrect, and nearly every one you've made since then was equally 
incorrect.

-- 
    Later,
    Jerry.
 
The universe is a figment of its own imagination.

------------------------------

From: [EMAIL PROTECTED] (Kaz Kylheku)
Crossposted-To: comp.os.linux.development.apps,uk.comp.os.linux
Subject: Re: Steganographic encryption system
Reply-To: [EMAIL PROTECTED]
Date: Mon, 10 Jul 2000 19:40:01 GMT

On Mon, 10 Jul 2000 18:07:37 +0100, phil hunt <[EMAIL PROTECTED]>
wrote:
>I am thinking of developing a steganographic encryption system that
>will enable a particular cyphertext to be decrypted into two or more 
>different plaintexts, depending on which key is used. (Provisionally
>named 'stes'). To be more precise:
>
>   $ stes --create ctf hello myletter.txt goodbye apicture.jpg

That's wonderful; perhaps the sci.crypt newsgroup will receive this
with the proper enthusiasm that it deserves. It's not really topical
in the Linux newsgroups.

-- 
#exclude <windows.h>

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: cray and time needed to attack
Date: 10 Jul 2000 19:48:08 GMT

In article <[EMAIL PROTECTED]>,
Jerry Coffin  <[EMAIL PROTECTED]> wrote:
>> > Right now, Samsung is working on the Alpha 21464, which will include
>> > memory with a cycle time of under .3 ns.
>> 
>> Perhaps for on-chip cache. What about the 10 to 100 Terabytes
>> of memory that will be needed to hold the matrix?
>
>If it can be built into a cache, it can be done outside the cache as 
>well.  Yes, you're likely to incur some slowdown (usually on the 
>order of 10 to 20%) due to extra buffering needed. ... I have VERY 
>carefully said that I was talking about what was theoretically 
>possible with current technology, NOT what was likely to be built or 
>was economically feasible. 

Excuse me, but I always heard that electrical signals in a wire propagate
at about 1/3 of the speed of light.  That means that with that 10-20%
buffering slowdown, in .3 ns they can travel about 1 inch.
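
The arithmetic checks out and is easy to reproduce (assuming, as above,
propagation at roughly one third of c):

```c
/* Distance a signal covers in one memory cycle, assuming it propagates
 * at about one third of the speed of light in a wire. */
static double signal_range_m(double cycle_s)
{
    const double c = 3.0e8;          /* speed of light, m/s */
    return (c / 3.0) * cycle_s;      /* ~1e8 m/s in wire */
}

/* signal_range_m(0.3e-9) gives 0.03 m, i.e. about 1.2 inches. */
```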

I'd be very interested to know how with current technology, it's even
theoretically possible to put 10 to 100 terabytes of .3 nsec memory in
a device no more than 1 inch in radius.  If you can go to Intel or IBM
with a coherent proposal, I'm sure they will make you a rich man ;-).

------------------------------

From: [EMAIL PROTECTED] (Paul Rubin)
Subject: Re: Suggestions for crypto for constrained-memory/CPU computers?
Date: 10 Jul 2000 20:00:19 GMT

In article <[EMAIL PROTECTED]>,
Mark Wooding <[EMAIL PROTECTED]> wrote:
>Paul Rubin <[EMAIL PROTECTED]> wrote:
>
>> If you're thinking of public key algorithms, all the ones I know of
>> are slower at decryption than low-exponent RSA is at encryption.
>
>The group's number theorists will be able to point out why it's a stupid
>idea, but: can't you improve RSA private-key operation performance at
>the expense of public-key operations by choosing the private exponent to
>have low Hamming-weight residues modulo each of (p - 1) and (q - 1)?

Um, er, oh man.  I don't see a specific attack off the top of my head
but this raises all kinds of red flags about leaking info about phi(N).

More to the point it doesn't give the kind of speedup being asked for.
Normally you find x^s mod p with a square-and-multiply algorithm, where
for n bits you do n squarings and (on average) n/2 multiplications.
The low hamming weight gets rid of most of the multiplications but
you're left with the squarings.  Squaring can be a little more efficient
than multiplication, but still, you've only gotten a factor of 2 or so
at best out of the total.
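
A sketch of the square-and-multiply loop described above, for moduli small
enough that the intermediate products fit in 64 bits:

```c
#include <stdint.h>

/* Right-to-left square-and-multiply: for an n-bit exponent this does
 * about n squarings plus one multiplication per SET exponent bit. A
 * low-Hamming-weight exponent therefore saves multiplications, but the
 * squarings remain, which bounds the overall speedup near 2x. */
static uint64_t powmod(uint64_t base, uint64_t exp, uint64_t m)
{
    uint64_t r = 1 % m;
    base %= m;
    while (exp) {
        if (exp & 1)
            r = (r * base) % m;    /* multiply: one per set bit */
        base = (base * base) % m;  /* square: every iteration */
        exp >>= 1;
    }
    return r;
}
```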

Interesting idea though.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
