Cryptography-Digest Digest #681, Volume #10       Sat, 4 Dec 99 16:13:01 EST

Contents:
  Re: What part of 'You need the key to know' don't you people get? (Tim Tyler)
  Re: Random Noise Encryption Buffs (Look Here) (Tim Tyler)
  Re: Literature on secure systems engineering methodology--pointers  (CLSV)
  Re: Literature on secure systems engineering methodology--pointers  (The Bug Hunter)
  NSA competitors (CLSV)
  Re: Literature on secure systems engineering methodology--pointers  (CLSV)
  Re: NP-hard Problems (Bill Unruh)
  Re: Why Aren't Virtual Dice Adequate? ("r.e.s.")
  Re: Why Aren't Virtual Dice Adequate? ("r.e.s.")
  Re: Cryptological discovery, rediscovery, or fantasy? (Brian Chase)
  Re: Any negative comments about Peekboo free win95/98 message encryptor (Lame I. Norky)
  1 round Defeats Enigma attacks (UBCHI2)
  Re: What part of 'You need the key to know' don't you people get? (wtshaw)
  Re: cookies (Brian Chase)
  DNA based brute-force attacks? (Brian Chase)
  Re: What part of 'You need the key to know' don't you people get? (wtshaw)
  Re: NSA should do a cryptoanalysis of AES (wtshaw)
  Re: NSA should do a cryptoanalysis of AES (wtshaw)
  Re: NSA should do a cryptoanalysis of AES (wtshaw)

----------------------------------------------------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: What part of 'You need the key to know' don't you people get?
Reply-To: [EMAIL PROTECTED]
Date: Sat, 4 Dec 1999 15:27:03 GMT

Rick Braddam <[EMAIL PROTECTED]> wrote:

: BTW, the thread so far seems to have concentrated on CBC and CFB.
: I've seen little mention of OFB. Does it have the same limited
: scope of effect (2 blocks) as the other two, in spite of its
: resemblance to a stream cipher?

Plaintext information is confined to individual blocks with OFB.
Errors are confined to the block in which they occur.
There's no diffusion of plaintext information between blocks at all.
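
For concreteness, here is a minimal sketch of OFB (block_encrypt stands in
for whatever block cipher sits underneath - its name and the 16-byte block
size are just assumptions for illustration).  The keystream depends only on
the key and IV, never on the data, which is why damage stays local:

    def ofb_encrypt(block_encrypt, key, iv, data, block_size=16):
        """Encrypt (or decrypt - same operation) data in OFB mode."""
        out = bytearray()
        state = iv
        for i in range(0, len(data), block_size):
            state = block_encrypt(key, state)   # feedback uses cipher output only
            chunk = data[i:i + block_size]
            out += bytes(c ^ k for c, k in zip(chunk, state))
        return bytes(out)

A flipped ciphertext bit flips only the corresponding plaintext bit, since
the keystream itself is never derived from the ciphertext.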

:> An all or nothing method is more secure vs. these forms of attack.
:> However, the three letter methods do not weaken the underlying code,
:> they merely fail to strengthen it.

: If they fail to strengthen it, then they have failed to satisfy the
: objective for using them, haven't they?

They do strengthen it against some forms of attack.  However they fail to
strengthen it against others, and it is those that are the subject here.

: If an all or nothing method is more secure, is the difference in degree
: of security a significant amount?

Unfortunately, the answer to this depends on all sorts of things.
You'd probably have to do a cost-benefit analysis of your circumstances
before being able to make a firm decision.

Security benefits are notoriously hard to quantify, so the value 
of hindrance against a certain range of attacks may be hard to put a
figure on.
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

May all your PUSHes be POPed.

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Random Noise Encryption Buffs (Look Here)
Reply-To: [EMAIL PROTECTED]
Date: Sat, 4 Dec 1999 15:37:13 GMT

Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> Anthony Stephen Szopa <[EMAIL PROTECTED]> wrote:

:> : You have just discovered true randomness.
:>
:> Alas, even *if* this is genuinely random - which you will never
:> demonstrate - nobody has developed a scheme for extracting this
:> information onto a macroscopic scale without introducing bias of
:> one type or another.
:>
:> Until such a scheme is demonstrated, "true atomic randomness" is
:> of the same utility to a cryptographer as a "perfectly straight line"
:> is to a student of geometry.

: I think you have taken a misguided position and are struggling too much to
: defend it.

Whereas your position appears to be based on faith in the existence of
genuine randomness in subatomic behaviour, and in our ability to
magnify this up to a macroscopic scale, without distorting it at all.

: I think that a very good true random demonstration would be to generate a
: single photon and direct it through a tiny hole.  Where it strikes a screen
: on the other side of the hole will be unpredictable within the possible field
: in which it may strike.

I can't predict it /exactly/ - but I know that it will be more likely to
hit near the centre of the field than near the edges, and that there will be
radial fringes of probability distribution relating to where it is likely
to hit.

How do you propose using this source of information to generate a
genuinely random bitstream?

What equipment will you use, and how will it be set up?
Will you use polarised light?  Light with random polarisation?
What frequency is your light source?  Is it from a laser?
Is it projecting through a perfect vacuum?
What shape is your "small hole"?
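
For what it is worth, even granting a genuinely random physical source, you
would still need post-processing before use.  The classic von Neumann
corrector removes simple bias from a stream of *independent* flips - and
demonstrating that independence is exactly the hard part.  A sketch, purely
illustrative:

    def von_neumann(bits):
        """Debias independent coin flips: 01 -> 0, 10 -> 1; 00/11 discarded.
        The independence assumption is the catch."""
        it = iter(bits)
        for a, b in zip(it, it):
            if a != b:
                yield a

Correlated samples defeat it, which is one reason "true atomic randomness"
does not translate straight into usable key material.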
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

COBOL programs are an exercise in Artificial Inelegance.

------------------------------

From: CLSV <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: Literature on secure systems engineering methodology--pointers 
Date: Sat, 04 Dec 1999 17:24:38 +0000

The Bug Hunter wrote:
> 
> I've been searching the web for literature on methodology/process for
> designing and developing secure systems. Maybe I was looking at the
> wrong places, I couldn't find much.
> 
> I have my own process for engineering secure systems, but would like to
> know what processes others use or advocate. Pointers anyone?

Have you read:

http://www.counterpane.com/attacktrees.pdf

You might find that interesting.

Regards,

        Coen Visser

------------------------------

From: The Bug Hunter <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: Literature on secure systems engineering methodology--pointers 
Date: Sat, 04 Dec 1999 12:36:51 -0500

CLSV wrote:
> 
> > I have my own process for engineering secure systems, but would like to
> > know what processes others use or advocate. Pointers anyone?
> 
> Have you read:
> 
> http://www.counterpane.com/attacktrees.pdf
> 
> You might find that interesting.
>
>         Coen Visser

Thanks for responding, but I've seen that one. It is very similar to the
idea of a fault tree in the study of reliability and fault tolerance. In
a sense, you can consider attacks on secure systems malicious faults,
although there's a major difference between defending against random
failures and defending against malicious attacks.
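
(For what it's worth, the AND/OR structure of those trees is easy to model.
A toy sketch - not from the paper, costs invented - of picking out the
cheapest attack:

    # OR nodes take the cheapest child, AND nodes sum their children,
    # leaves carry an estimated cost for a single attack step.
    def cheapest_attack(node):
        kind, rest = node
        if kind == 'leaf':
            return rest
        costs = [cheapest_attack(child) for child in rest]
        return min(costs) if kind == 'or' else sum(costs)

    open_safe = ('or', [('leaf', 10000),                         # pick the lock
                        ('and', [('leaf', 60), ('leaf', 20)])])  # learn combo, use it
    print(cheapest_attack(open_safe))   # -> 80

Doing the same with probabilities or required skill levels is the obvious
variation.)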

Any more pointers? 

Thanks in advance.
--The Bug Hunter

------------------------------

From: CLSV <[EMAIL PROTECTED]>
Subject: NSA competitors
Date: Sat, 04 Dec 1999 18:13:27 +0000


The NSA is used as some sort of metaphor for high-quality
cryptanalysis. Some claim that the NSA is 5 to 20 years ahead
of the public cryptographic community. Other sources point to
failing management and a lack of focus in the organization that
could erode that technical advantage pretty quickly.

I'm wondering whether anything is known about non-US
government institutes that specialize in cryptography and
cryptanalysis. I'm thinking of countries that invest a lot
in mathematical education, like China, Russia, and India.
Maybe institutes in those countries have some information that
could give a better picture about the state of the art.

Regards,

        Coen Visser

------------------------------

From: CLSV <[EMAIL PROTECTED]>
Crossposted-To: comp.security.misc
Subject: Re: Literature on secure systems engineering methodology--pointers 
Date: Sat, 04 Dec 1999 18:19:13 +0000

The Bug Hunter wrote:
> 
> CLSV wrote:
> >
> > > I have my own process for engineering secure systems, but would like to
> > > know what processes others use or advocate. Pointers anyone?
> >
> > Have you read:
> >
> > http://www.counterpane.com/attacktrees.pdf

> Thanks for responding, but I've seen that one.
> [...] Any more pointers?

How about:

http://www.radium.ncsc.mil/tpep/process

And specifically:

http://www.radium.ncsc.mil/tpep/process/faq-sect2.html#Q2

Regards,

        Coen Visser

------------------------------

From: [EMAIL PROTECTED] (Bill Unruh)
Subject: Re: NP-hard Problems
Date: 4 Dec 1999 18:57:49 GMT

In <829hg8$sep$[EMAIL PROTECTED]> [EMAIL PROTECTED] writes:

]Bill Unruh wrote:

]> Well, it is not known if there are any NP hard problems.
][...]

]Check your references.  NP-complete is a proper
]subset of NP-hard, so they do exist.

Yes, I got myself confused. Sorry for spreading the confusion.

------------------------------

From: "r.e.s." <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: Why Aren't Virtual Dice Adequate?
Date: Sat, 4 Dec 1999 11:21:16 -0800

"Trevor Jackson, III" <[EMAIL PROTECTED]> wrote ...
: r.e.s. wrote:
[...]
: > In the scenarios under discussion, an opponent cannot
: > introduce a message key of his own making because he doesn't
: > have the key for inserting it into the ciphertext.
:
: What does hiding a block cipher under an OTP buy you that the OTP alone
: does not?  In what sense is your message key a key to the message if it is
: not a key to a hidden cipher?

The point under discussion in the thread is that a "pure OTP" is
not secure when used to send identical plaintext to two different
recipients, because it may compromise the key of one of them.  Any
addition or change to the OTP, serving to remedy this, will result
in something other than a "pure OTP".  Adding a message key is just
one possible attempted remedy.
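
A small sketch of the leak (names hypothetical; recipient B plays the
attacker).  B already knows the plaintext, so a single intercepted
ciphertext hands B a copy of A's pad:

    import os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    plaintext = b"ATTACK AT DAWN"
    pad_a = os.urandom(len(plaintext))    # pad shared with recipient A
    pad_b = os.urandom(len(plaintext))    # pad shared with recipient B

    ct_a = xor(plaintext, pad_a)
    ct_b = xor(plaintext, pad_b)

    # B recovers A's pad from A's ciphertext and the known plaintext:
    assert xor(ct_a, plaintext) == pad_a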





------------------------------

From: "r.e.s." <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: Why Aren't Virtual Dice Adequate?
Date: Sat, 4 Dec 1999 11:16:32 -0800

"Guy Macon" <[EMAIL PROTECTED]> wrote ...
: [EMAIL PROTECTED] (r.e.s.) wrote:
: >"Guy Macon" <[EMAIL PROTECTED]> wrote ...
: >:
: >: Maybe it's my background as an engineer, but if I was actually
: >: implementing an OTP (instead of using PGP and just discussing
: >: OTP as a learning experience), I would go full overkill on the
: >: randomizer, XORing in everything from the number of microseconds
: >: between keystrokes to a the digitized output of my local AM
: >: station, the theory being that if any one of my "random" inputs
: >: is true random, the OTP will be random.  I would then pad my
: >: plaintext to a standard length and XOR it with the OTP (which will
: >: probably be on a CD-R).  If (given the limitations that have been
: >: pointed out) an OTP is unbreakable, how can it be strengthened?
: >
: >By "strengthen" I meant incorporate another stage of encipherment
: >to produce a cipher (now no longer an OTP) that, in various attack
: >scenarios, is more secure than an OTP alone. By "OTP alone" I mean
: >direct transmission of XOR'd bits, granting "true" randomness of
: >the key. (And "implementation" was intended to include modes of use.)
:
: Let me rephrase that and see if I understand.  You are saying to
: take the OTP I described (which is unbreakable when used for
: two-way communication between trusted parties), then take the
: result and use it as the plaintext for an encryption system
: that is in theory breakable but has other good qualities that
: address various methods of beating OTP without trying to decode
: an OTP encrypted message.

No.
The point under discussion in the thread is that a "pure OTP" is
*not* secure when used to send identical plaintext to two different
recipients, because it may compromise the key of one of them.  Any
addition or change to the OTP, serving to remedy this, will result
in something other than a "pure OTP".

--
r.e.s.
[EMAIL PROTECTED]






------------------------------

Crossposted-To: sci.math,sci.misc,alt.privacy
From: [EMAIL PROTECTED] (Brian Chase)
Subject: Re: Cryptological discovery, rediscovery, or fantasy?
Date: Sat, 4 Dec 1999 19:27:03 GMT

In article <[EMAIL PROTECTED]>,
Johnny Bravo <[EMAIL PROTECTED]> wrote:
>On Sat, 20 Nov 1999 15:18:37 -0500, DSM
><[EMAIL PROTECTED]> wrote:

>> What good is an unbreakable cipher system if your
>> enemy is capable of capturing you and gaining
>> the key from you by force? 
>
>   Crypto makes no promises about human factors.  For that matter,
> there is no crypto you can use that would prevent the authorities from
> just capturing you and torturing the plain text info out of you. 

You can certainly make this more difficult by requiring some number n > 1 of
people to be present to encode and decode messages.  Each person would have
1/nth of the required key information.  You could even set things up such
that if any one of the partial keyholders was captured, they could hit
some sort of panic button to notify the others of a partial compromise. 

So it would still be possible to torture out the key, but you'd have to
capture everyone at just about the same time.  Depending on the importance
of the secret being hidden, it's possible that those holding the partial
keys could kill themselves to prevent the secret from being disclosed. 

It's probably possible to model some automated system based on the above
concept without actually requiring people to kill themselves off :-)  Say
each person has a personal key to pull their 1/nth of the real key out of
some black box.  If anyone hits their panic button, the black box deletes
all of the real key information.  This way torturing people may get you
their access key to the black box, but you won't actually be able to get
the real key. 
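
A toy sketch of the splitting step (names made up; this is the simple
n-of-n XOR scheme, so losing any one holder loses the key - a real system
would likely want a threshold scheme such as Shamir's instead):

    import os

    def split_key(key, n):
        """Split key into n XOR shares; all n are needed to rebuild it."""
        shares = [os.urandom(len(key)) for _ in range(n - 1)]
        last = key
        for s in shares:
            last = bytes(a ^ b for a, b in zip(last, s))
        return shares + [last]

    def join_key(shares):
        out = bytes(len(shares[0]))
        for s in shares:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

Each share on its own is statistically independent of the key, so capturing
fewer than all n holders yields nothing about it.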

-brian.
-- 
--- Brian Chase | [EMAIL PROTECTED] | http://world.std.com/~bdc/ -----
For these reasons, and hundreds of others, I am forced to conclude that a
virtual frog is not as much fun as an actual frog.  -- K.

------------------------------

From: [EMAIL PROTECTED] (Lame I. Norky)
Crossposted-To: alt.security.pgp
Subject: Re: Any negative comments about Peekboo free win95/98 message encryptor
Date: Sat, 04 Dec 1999 19:41:20 GMT

"Trevor Jackson, III" <[EMAIL PROTECTED]> wrote:

>It appears to work for Underwriter Labs.  The brand name is worth something to
>consumers.  So the vendors pay for it to enhance their sales.  UL preserves
>their integrity because it is their only product.  One compromise in a safety
>standard that came to light would destroy their brand name's value.

This is an incredible coincidence! The following is the very next message
that I happened to read after this one, over on the alt.home.automation
group: news:82bine$[EMAIL PROTECTED]

-- 
"Lame I. Norky" is actually [EMAIL PROTECTED] (2483 519067).
 0123 4  56789 <- Use this key to decode my email address and name.
                Play Five by Five Poker at http://www.5X5poker.com.

------------------------------

From: [EMAIL PROTECTED] (UBCHI2)
Subject: 1 round Defeats Enigma attacks
Date: 04 Dec 1999 19:48:18 GMT

If you use 1 round of transposition to superencipher an enigma encryption, you
immediately counter the use of cribs, kisses and bombes.  The weakness of the
rotor machines is that they leave each character in the same order as in the
plaintext.
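
For what it's worth, a sketch of one such superencipherment round (plain
keyed columnar transposition; the key is only an example and short final
rows are read as-is):

    def columnar_transpose(text, key):
        """One round of keyed columnar transposition over the ciphertext."""
        cols = len(key)
        order = sorted(range(cols), key=lambda i: key[i])
        rows = [text[i:i + cols] for i in range(0, len(text), cols)]
        return ''.join(row[c] for c in order for row in rows if c < len(row))

    # e.g. columnar_transpose(enigma_output, "ZEBRA") reorders the characters,
    # so a crib no longer lines up position-for-position with the plaintext.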



------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: What part of 'You need the key to know' don't you people get?
Date: Sat, 04 Dec 1999 14:23:25 -0600

In article <[EMAIL PROTECTED]>, "Peter K. Boucher"
<[EMAIL PROTECTED]> wrote:
> 
> Actually, science doesn't reject old ideas unless there is convincing
> evidence that the old ideas are wrong.

There is fairness there.  It is hard for some to realize that an old idea
is open to evidence that discounts it.  A good idea is timeless.
> 
> You are right that science can accept hypotheses without absolute
> proof.  However, science does not accept a hypothesis unless a) there
> *is* good reason to believe it, and b) there *is not* good reason to
> reject it.  You are hinting that people should believe Scott's claims
> without applying both tests.

Believe him outright? No, true believers are best sent to a wingnut church
somewhere.  Inquiry requires doubters and impartial observers, but don't
complain about the announcement of claims unless you are willing to do
the grunt work of studying them.
...
> 
> A claim that is made without any supporting evidence may, indeed, be
> true, but you have no good reason to assume that it's true.  You are
> right that two contradictory claims that are both unsupported should
> both be doubted.  It's also possible that both are false.  Many times
> people try to support their claims by creating a "false dilemma" where
> they propose some alternative theory, knock it down, and then assert
> that their own theory must be correct.
> 
Games of rhetoric are the toys of thought.  I will do what I can to cut to
the real chase.
-- 
Love is blind, or at least figure that it has astigmatism. 

------------------------------

From: [EMAIL PROTECTED] (Brian Chase)
Subject: Re: cookies
Date: Sat, 4 Dec 1999 19:54:03 GMT

In article <eYW14.199$[EMAIL PROTECTED]>,
karl malbrain <[EMAIL PROTECTED]> wrote:
>Douglas A. Gwyn <[EMAIL PROTECTED]> wrote in message

> [snip]
>
>> There are numerous other vulnerabilities in the old Internet
>> protocols, but in almost every case it takes the active
>> participation of a program on your own computer to exploit them,
>> not just some procedure initiated by the remote site.  In particular,
>> simply being attached to the Internet does *not* put your files at
>> risk, if your system is not providing any "services" to the net
>> and you are not initiating net protocols yourself.  (In many cases,
>> notably Windows, people have no clue what Internet processes their
>> computer is supporting.)
>
>You claim your hodge-podge is helpful???  1. What is a `hacker or otherwise
>suspect' site, anyway? 2. There is NO basis to presume that ActiveX and Java
>are the only way to coerce your browser to transmit information.  3. Windows
>is so complicated you cannot demonstrate that somewhere in the 40MB of mapped
>code from hundreds of various and assorted DLL files there are not 2 or 3 system
>calls transferring who knows what data where.  Your sense of SECURITY is
>entirely misplaced.

I think your points are valid about us not entirely knowing what
weaknesses may exist within a browser or in the supporting OS files.  But you
have to draw the line somewhere as to how paranoid you want to be.  I'm
fairly comfortable with just disabling the obvious bits which allow remote
sites to execute code on my systems. Yeah, this doesn't preclude someone
from exploiting some bug which may or may not exist inherent to the OS...
well hell actually it'd almost have to be a backdoor deliberately put in
place and not a bug.

Even if you're worried about this, it is still possible to put in a filtering
host between yourself and the rest of the world.  If you're worried about
someone exploiting a backdoor, just block all unexpected traffic to and
from your host.  And then monitor the allowed traffic for anomalies. 

If you're really so paranoid about security that even those precautions
aren't enough, then you probably shouldn't be using the Internet :-) 

-brian.
-- 
--- Brian Chase | [EMAIL PROTECTED] | http://world.std.com/~bdc/ -----
For these reasons, and hundreds of others, I am forced to conclude that a
virtual frog is not as much fun as an actual frog.  -- K.

------------------------------

From: [EMAIL PROTECTED] (Brian Chase)
Subject: DNA based brute-force attacks?
Date: Sat, 4 Dec 1999 20:15:08 GMT


I know there's lots of talk about using quantum computing to break crypto
problems, but has there been much discussion of using DNA based computing
to do the same?  There's a 1994 article from _Science_ which discusses
using DNA to solve the traveling salesman problem.

An online version of the article is available at:
  http://www.hks.net/~cactus/doc/science/molecule_comp.html

Does anyone know of work being done to break crypto using these types of
techniques?  Or are there fundamental problems with crypto that make them
unlikely candidates for being solved with DNA computing?

-brian.
-- 
--- Brian Chase | [EMAIL PROTECTED] | http://world.std.com/~bdc/ -----
For these reasons, and hundreds of others, I am forced to conclude that a
virtual frog is not as much fun as an actual frog.  -- K.

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: What part of 'You need the key to know' don't you people get?
Date: Sat, 04 Dec 1999 14:39:50 -0600

In article <[EMAIL PROTECTED]>, "Douglas A. Gwyn"
<[EMAIL PROTECTED]> wrote:

> You're missing the point.  There are *plenty* of ideas, far more
> than we scientists can ever pursue.  Therefore, some intelligent
> *pre-filtering* is required to keep the pursuit of new ideas
> manageable.  One important criterion for this filtering is that
> the idea should have some supporting evidence.  If it is a mere
> unsubstantiated assertion, then it deservedly will be ignored.

Cryptography is not guaranteed to be something which can be digested for
total consideration, but an unsupported claim might be an attempt to do
that.
> 
> An exception is sometimes made when the proponent of a wacky idea
> has a proven track record, which in effect substitutes trust in
> his intuition for immediate evidence.

I have seen some good in DS's work.  I keep an open mind short of blanket
trust, which is all I would expect either.
> 
> There is also a *negative* evaluation of the likelihood of
> productive inquiry when the proponent of the idea *acts* like a
> crackpot.  If you study the history of crank ideas in science,
> you'll find that they tend to have a lot of common characteristics,
> such as failure to reconcile the new claims with established
> knowledge that they blatantly contradict, and the proponents
> complaining about a conspiracy of the orthodoxy when their
> proposals are ignored.

The deck is so stacked against new and different ideas that getting noticed
is important.   We go on from that.  
> 
> If you want to propose a new idea and have it taken seriously,
> it is simply a fact that you need to present the idea in a way
> that addresses the legitimate criteria that scientists have to
> apply to filter incoming proposals.

Filters vary.  Consider those who are in the business of laughter:
tolerances differ as to what you might consider acceptable.  The sole
overall measure is whether humor to someone's taste is involved.  Many
would not accept Rodney Dangerfield, or the group Bottoms Up, as two
examples.  Yet, some do.
-- 
Love is blind, or at least figure that it has astigmatism. 

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: NSA should do a cryptoanalysis of AES
Date: Sat, 04 Dec 1999 15:05:55 -0600

In article <829iuc$v18$[EMAIL PROTECTED]>, [EMAIL PROTECTED]
(SCOTT19U.ZIP_GUY) wrote:

>    It's amazing how often people make this statement that real
> security has to be built in from the ground up. But why, when it
> comes to compression before encryption, do most people ignore the
> information added by most compression methods? And they
> tend to use chaining that goes out of its way not to diffuse information
> throughout the file. It is as if people turn their brains off in these
> areas. Anyone have theories as to why? You already know my
> theories on the subject.
> 
Good security begins with good architecture, but it does not stop there. 
It's the old weak-link problem: the goal is to make all contributing
factors work in your favor.  Few treat a complete approach as the
essential that it is.  If you depend on a suit of armor, be sure
everything is protected; otherwise you will get hit wherever there is a
hole.
-- 
Love is blind, or at least figure that it has astigmatism. 

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: NSA should do a cryptoanalysis of AES
Date: Sat, 04 Dec 1999 14:57:33 -0600

In article <[EMAIL PROTECTED]>, Shawn Willden
<[EMAIL PROTECTED]> wrote:

> "Douglas A. Gwyn" wrote:
> 
> > This points up a supremely important fact:  Real computer security
> > has to be built in from the ground up, with no loopholes anywhere.
> > You could probably achieve it with a capability-based architecture
> > and extremely good security review throughout system design, but
> > even so, at some level some user will screw up and hand over the
> > keys to an untrustworthy agent.
> 
> Could you explain what you mean by a "capability-based" architecture?
> I've puzzled over the term a bit and I can't figure out what you mean by
> it.  Whose capability and for what?  And how does it relate to
> architecture?
> 
I get it.  Capability-based for a boat means it is designed to actually
float at all times.  Computer architecture is the first consideration when
building security.
-- 
Love is blind, or at least figure that it has astigmatism. 

------------------------------

From: [EMAIL PROTECTED] (wtshaw)
Subject: Re: NSA should do a cryptoanalysis of AES
Date: Sat, 04 Dec 1999 15:17:17 -0600

In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] wrote:

> Iron bars are also expensive.  If iron bars were free, and no more hassle
> to lock and unlock than an ordinary door, I'm sure more people would use
> them, even if the attack they are protecting against (a lockpicking
> attack) is not known to be common.

Iron bars are cheap, and ugly.  
> 
> As I see it, the situation in the two cases under discussion is that
> yes, added information during compression is no worse than a known
> plaintext, and yes, cyphers *should* be secure against known plaintext
> attacks alone.
> 
It is desirable.
....
> 
> There are *many* other ways of getting hold of partial keys, besides a bad
> RNG. A malfunctioning office shredder, incomplete combustion, a partial
> glance at a keyboard while a password is typed - I'm sure you can easily
> imagine more.
> 
> So, having established the utility of these types of defence under some
> circumstances, the main consideration in my mind is the cost.

Strange that people who pay more get a ribbon shredder, when a cheaper
muncher will do a better job.  Consider that LE likes to deal with the
results of the former, which are simply reversed.
-- 
Love is blind, or at least figure that it has astigmatism. 

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
