gnutella's problems (Re: network topology)

2002-03-28 Thread Adam Back

On Wed, Mar 27, 2002 at 04:56:32PM -0800, [EMAIL PROTECTED] wrote:
 I got the impression (maybe wrong) that Gnutella as it exists is
 something much worse than a tree, that connections are
 pretty much haphazard and when you send out a query it reaches
 the same node by multiple paths, and that you really need the
 query ID to keep from forwarding massive duplicates.  

I think you're right, and that's how it works, which is why it scales
badly: the search messages flood seven hops deep into the randomly
organized network, and even with duplicate suppression based on the
query ID (or whatever they use) they end up consuming a significant
proportion of the network's bandwidth.
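To make the flooding mechanics concrete, here is a minimal sketch in C
of TTL-limited query forwarding with a duplicate cache; the struct
layout, cache size, and peer bookkeeping are illustrative inventions,
not Gnutella's actual wire format:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define SEEN_CACHE 4096
#define MAX_PEERS  8

struct query {
    uint8_t id[16];    /* globally unique query ID */
    uint8_t ttl;       /* hops remaining; classically starts at 7 */
    char    terms[64]; /* search string */
};

static uint8_t seen[SEEN_CACHE][16]; /* ring buffer of recent IDs */
static int seen_next;

/* Returns 1 if this ID was seen before; otherwise records it. */
static int already_seen(const uint8_t id[16])
{
    for (int i = 0; i < SEEN_CACHE; i++)
        if (memcmp(seen[i], id, 16) == 0)
            return 1;
    memcpy(seen[seen_next], id, 16);
    seen_next = (seen_next + 1) % SEEN_CACHE;
    return 0;
}

static void send_to_peer(int peer, const struct query *q)
{
    printf("forward id[0]=%02x ttl=%u to peer %d\n",
           q->id[0], q->ttl, peer); /* stand-in for a socket send */
}

static void handle_query(const struct query *q, int from_peer)
{
    if (already_seen(q->id))
        return;                    /* duplicate: drop, don't re-flood */
    if (q->ttl <= 1)
        return;                    /* hop limit reached */
    struct query fwd = *q;
    fwd.ttl--;
    for (int p = 0; p < MAX_PEERS; p++)
        if (p != from_peer)
            send_to_peer(p, &fwd); /* flood to every other neighbor */
}

int main(void)
{
    struct query q = { .id = {0xab}, .ttl = 7, .terms = "test" };
    handle_query(&q, 0); /* forwarded to the 7 other peers */
    handle_query(&q, 3); /* same ID arriving by another path: dropped */
    return 0;
}

Note that even with the duplicate drop, each accepted copy is re-sent
to every other neighbor, so traffic still grows roughly like
(fanout - 1)^TTL; the ID cache only stops copies from circulating
forever, it does not tame the exponential fan-out.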

The other problem apparently is that for a low-bandwidth node (e.g. a
dialup) the searches can saturate your link, so you can hardly do
anything but receive and answer search queries.

Apparently there are some hacks to reduce this problem, but Gnutella's
other big problem is that there are lots of independent clients, so
some of the trouble comes from interoperability issues, bugs, etc.

And Gnutella is not able to resume a transfer that dies part way
through, which is very bad for download reliability.  FastTrack/Kazaa
(but no longer Morpheus, since the Kazaa/Morpheus fall-out) on the
other hand can resume, and in fact can do multiple simultaneous
downloads from multiple nodes holding the same content, so it gets the
content both much faster and much more reliably.  Also helps cope with
different link speeds, as a group of slow nodes or asymmetric-bandwidth
nodes (like cable, with fast down but limited up) can satisfy the
downloads of cable and other broadband users.
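For illustration, a sketch of that multi-source behavior in C;
fetch_range() and the sequential loop are hypothetical simplifications
of what a real client does:

#include <stdio.h>

/* Hypothetical stand-in for a ranged download over the network;
 * returns 0 on success, -1 if the source died mid-piece. */
static int fetch_range(int source, long start, long end)
{
    printf("source %d: bytes %ld-%ld\n", source, start, end - 1);
    return 0; /* pretend it succeeded */
}

/* Split one download across several sources known to hold identical
 * content, and resume a failed piece from another source instead of
 * restarting the whole file. */
static void download(long file_size, int nsources)
{
    long piece = (file_size + nsources - 1) / nsources;

    for (long start = 0; start < file_size; start += piece) {
        long end = start + piece < file_size ? start + piece : file_size;
        int src = (int)(start / piece) % nsources;

        /* A dead source costs one piece, not the whole file. */
        while (fetch_range(src, start, end) < 0)
            src = (src + 1) % nsources;
    }
}

int main(void)
{
    download(1000000, 4); /* e.g. four slow uploaders serve one downloader */
    return 0;
}

A real client issues the pieces over parallel connections; the point is
just that each byte range is independently restartable, so one dead
source costs a piece rather than the whole transfer.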

There's a nice write-up about Gnutella's problems on openp2p.com
[1].

Contrary to what article [2] claims, FastTrack/Kazaa really does blow
Gnutella away; the supernode concept, with high-performance nodes
elected to be search hubs, makes all the difference.  Gnutella, last I
tried it, was barely functional for downloads: ~95% of downloads
failed, and searches were much slower.

Adam

[1] Gnutella: Alive, Well and Changing Fast, by Kelly Truelove

http://www.openp2p.com/pub/a/p2p/2001/01/25/truelove0101.html

[2] Gnutella Blown Away? Not Exactly, by Serguei Osokine

http://www.openp2p.com/pub/a/p2p/2001/07/11/numbers.html




Re: network topology

2002-03-28 Thread Tim May

On Wednesday, March 27, 2002, at 04:56  PM, [EMAIL PROTECTED] wrote:

 On 27 Mar 2002 at 22:43, Eugene Leitl wrote:

 On Wed, 27 Mar 2002 [EMAIL PROTECTED] wrote:

 I don't recall ever having read of this type of structure before,
 but it seems so obvious that I'm sure it's been discussed before.
 So is there a name for it? Does anyone use it? Has it been
 shown to be utterly worthless?

 You don't mean something like this:
 http://www.perfdynamics.com/Papers/Gnews.html do you?


 Yeah, I think what I was describing was more or less what
 they call a hypercube, or maybe just a cube.
 I'm not one of those people that
 can actually envision multidimensional structures, so I only
 know this is a 4-cube if I see the coordinates.

No need to visualize 4D spaces, 5D spaces, and so on.

Think in terms of how many nearest neighbors a point has, a la my last 
post:

-- in a 1-dimensional space/topology, 2 nearest neighbors (left, right)

-- in a 2-dimensional space/topology, 4 nearest neighbors (left, right, 
above, below)

-- in a 3-dimensional space/topology, 6 nearest neighbors (6 faces of a 
cube)

-- in a 4-dimensional space/topology, 8 nearest neighbors

-- in an n-dimensional space/topology, 2n nearest neighbors

(all using the definition of distance in terms of unit vectors, not 
diagonals)

Actual machine hypercubes are just made in the obvious way: by 
connecting vertices/nodes the way a physical hypercube would be 
connected.
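A quick sketch of that wiring, assuming the standard binary n-cube
labeling (my example, not from the thread): give each node an n-bit
label and link two nodes exactly when their labels differ in one bit,
so each node has n direct neighbors, one per dimension.

#include <stdio.h>

int main(void)
{
    int n = 4; /* dimensions: a 4-cube with 2^4 = 16 nodes */

    for (unsigned node = 0; node < (1u << n); node++) {
        printf("node %2u links to:", node);
        for (int d = 0; d < n; d++)
            printf(" %2u", node ^ (1u << d)); /* flip bit d */
        printf("\n");
    }
    return 0;
}

(The 2n count above is for the infinite lattice, which has two
directions per dimension; the binary cube has only two nodes per
dimension, hence n links per node.)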

In fact, since wires are small [though not infinitely small, and 
certainly not of zero propagation delay], it's possible to connect every 
node to every other node.

--Tim May
That the said Constitution shall never be construed to authorize 
Congress to infringe the just liberty of the press or the rights of 
conscience; or to prevent the people of the United States who are 
peaceable citizens from keeping their own arms. --Samuel Adams




Re: FW: Homeland Deception (was RE: signal to noise proposal)

2002-03-28 Thread Major Variola (ret)

At 05:14 PM 3/27/02 -0800, Meyer Wolfsheim wrote:
-----BEGIN PGP SIGNED MESSAGE-----
Additionally, Aimee is an Outlook user, and mattd is a Eudora user. The
forgery referenced below was sent from Eudora.

And strings in exe's can't be edited?

I know of folks who've edited the PGP header line to flip off the
spooks..




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Tom Holroyd

You know, Lucky, most of the people here have been around the block a
few times, and your previous post is just classic Usenet whinage.
Complaining about punctuation indeed.  Spare us, please.

Look, we've all read the background.  The improvement is a function
f(n) which for large n may approach 3.  What is f(1024)?  I don't
know, do you?  Your original post might have merit if f(1024) is also
close to 3 or more, but it may be very much less.
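For reference, here is the shape of the asymptotics behind that caveat,
in the standard L-notation heuristic for the number field sieve
(reading the improvement as a change in the effective constant c is my
gloss, not a claim from Bernstein's paper):

    L(N) = \exp\!\left( (c + o(1)) \, (\ln N)^{1/3} (\ln\ln N)^{2/3} \right)

Any constant-factor claim about the key sizes breakable at equal cost
holds only in the limit where the o(1) term vanishes, which is exactly
why f(1024) may be nowhere near the asymptotic 3.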

Here's a real question: if you could build a special purpose machine
to do 1024 bit RSA keys (that is, factor a 1024 bit number), how much
would that help with discrete logs in a safe prime field?

Dr. Tom Holroyd
I am, as I said, inspired by the biological phenomena in which
chemical forces are used in repetitious fashion to produce all
kinds of weird effects (one of which is the author).
-- Richard Feynman, _There's Plenty of Room at the Bottom_




Re: gnutella's problems (Re: network topology)

2002-03-28 Thread georgemw

On 28 Mar 2002 at 2:18, Adam Back wrote:

 And Gnutella is not able to resume a transfer that dies part way
 through, which is very bad for download reliability.  FastTrack/Kazaa
 (but no longer Morpheus, since the Kazaa/Morpheus fall-out) on the
 other hand can resume, and in fact can do multiple simultaneous
 downloads from multiple nodes holding the same content, so it gets the
 content both much faster and much more reliably.

Actually, the Gnucleus client will do both of these, so presumably the
Gnutella-based Morpheus does also, since it's based on Gnucleus.

 Also helps cope with
 different link speeds, as a group of slow nodes or asymmetric-bandwidth
 nodes (like cable, with fast down but limited up) can satisfy the
 downloads of cable and other broadband users.
 
 There's a nice write-up about Gnutella's problems on openp2p.com
 [1].
 
 Contrary to what article [2] claims, FastTrack/Kazaa really does blow
 Gnutella away; the supernode concept, with high-performance nodes
 elected to be search hubs, makes all the difference.  Gnutella, last I
 tried it, was barely functional for downloads: ~95% of downloads
 failed, and searches were much slower.
 
 Adam
 

I think the idea (used in alpine) of using UDP for search queries
and only establishing a persistent connection when you actually 
want to transfer a file is a good one.
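A bare-bones sketch of that split in C (ports, addresses, and message
format are made up, and error handling is trimmed):

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int send_search(const char *peer_ip, const char *terms)
{
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(5678) };
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);

    int s = socket(AF_INET, SOCK_DGRAM, 0);   /* no handshake, no state */
    if (s < 0)
        return -1;
    sendto(s, terms, strlen(terms), 0,
           (struct sockaddr *)&peer, sizeof(peer));
    close(s);
    return 0;
}

int start_transfer(const char *peer_ip)
{
    struct sockaddr_in peer = { .sin_family = AF_INET,
                                .sin_port   = htons(5679) };
    inet_pton(AF_INET, peer_ip, &peer.sin_addr);

    int s = socket(AF_INET, SOCK_STREAM, 0);  /* persistent, reliable */
    if (s < 0 || connect(s, (struct sockaddr *)&peer, sizeof(peer)) < 0)
        return -1;
    return s; /* caller streams the file over this socket */
}

int main(void)
{
    send_search("127.0.0.1", "search: some file");
    /* Only if a hit comes back do we pay for a TCP connection: */
    int fd = start_transfer("127.0.0.1");
    if (fd >= 0)
        close(fd);
    return 0;
}

The appeal is that a node answering thousands of search datagrams keeps
no per-query connection state, and only downloads cost a connection.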

George
 [1] Gnutella: Alive, Well and Changing Fast, by Kelly Truelove
 
 http://www.openp2p.com/pub/a/p2p/2001/01/25/truelove0101.html
 
 [2] Gnutella Blown Away? Not Exactly, by Serguei Osokine
 
 http://www.openp2p.com/pub/a/p2p/2001/07/11/numbers.html




Re: gnutella's problems (Re: network topology)

2002-03-28 Thread Ian Goldberg

In article [EMAIL PROTECTED],
Adam Back  [EMAIL PROTECTED] wrote:
And gnutella is not able to resume a transfer that dies part way
through which is very bad for download reliability.  FastTrack/Kazza
(but no longer Morpheus since the Kazza / Morpheus fall-out) on the
other hand can resume, and in fact do multiple simultaneous downloads
from multiple nodes having the same content so that it gets the
content both much faster and much more reliably.  Also helps cope with
different link speeds as a group of slow nodes or asymmetric bandwidth
nodes (like cable with fast down but limited up) can satisfy the
download of cable and other broadband users.

Wait; as far as I know, FastTrack's and Gnutella's file-transfer
protocols are *identical* (but not their search protocols).  If Gnutella
doesn't support resuming downloads and grabbing from many people at
once, that's just a client-side issue, not a protocol issue.

[That being said, grabbing from multiple people at once requires
you know *who's got* the _very same file_.  The FastTrack protocol
supports search by hash value, but Gnutella doesn't seem to.
Should be easy to fix, though.]
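Concretely, since the transfer leg is an ordinary HTTP GET, a client
that remembers how many bytes it already has can resume with a standard
Range header; the index, filename, address, and offset below are made
up:

/* Hypothetical Gnutella-style resume request: plain HTTP, asking
 * only for the missing suffix of the file. */
const char resume_request[] =
    "GET /get/1234/song.mp3 HTTP/1.0\r\n"
    "Host: 10.0.0.5:6346\r\n"
    "Range: bytes=1048576-\r\n"   /* resume from byte 1048576 onward */
    "\r\n";

Multi-source grabbing is the same trick with closed ranges
(bytes=0-1048575, bytes=1048576-2097151, ...) issued to different
hosts, which is why knowing you have the *very same file* matters.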

   - Ian




Re: gnutella's problems (Re: network topology)

2002-03-28 Thread Anonymous

Adam Back writes:
 Contrary to what article [2] claims, FastTrack/Kazaa really does blow
 Gnutella away; the supernode concept, with high-performance nodes
 elected to be search hubs, makes all the difference.  Gnutella, last I
 tried it, was barely functional for downloads: ~95% of downloads
 failed, and searches were much slower.

You should try again.  That article you quoted about Gnutella's problems
was over a year old.  The network has improved enormously in the
past year.  Today I would say that 70% of downloads succeed, although
of course it works better for more popular files.  We have restarting
of failed downloads, and simultaneous downloads from multiple users.
With a widely shared file I sometimes max out this DSL line at 80-90
KB/sec downloads (that's bytes not bits).

One problem is that searches are still slow and it is still hard to find
uncommon files.

FastTrack is going to die.  The spyware installed by the clients pisses
users off, and the use of centralized servers is just too big a target for
the infinitely deep pockets of the recording industry.  Gnutella is now
a very adequate replacement, which was certainly not true a year ago.
In the long run, decentralized networks are the only ones which can
survive.




Re: Homeland Deception (was RE: signal to noise proposal)

2002-03-28 Thread Faustine

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Gil wrote:
Faustine writes:

best is write code, write code. The main thing is to DO something, whatever
your skills and talents are. Spare everyone the hot air and just do it.

What *you* say is hot air; what *I* say is policy analysis.


But who's listening? 

It's all hot air until you start seeing results. 

I'm rather fond of the "billions of taxpayer dollars saved" metric myself;
others might be lives saved, strategic assets protected etc. Once again:
what matters to you and what are you doing about it? 

I'll be the first to admit there are few things more intrinsically worthless
and boring than policy analysis done for its own sake in a vacuum. It's just a
tool to be put to USE, like any other. Tools can be shoddy or well-crafted,
simple or complex--but at the end of the day, can you say you really got the
job done with it or not?

Despite anything certain people around here have said to the contrary,
precision and accuracy in analysis matter: I'm sure they wouldn't have any
confusion about whether it's better to arm themselves with a bag full of
rocks or an FN Herstal 5.7mm Weapons System. Think about it. You have all these
fucking idiots on Capitol Hill stumbling around making policy by the equivalent
of whacking each other over the head with stones. Crude tools that--despite
being messy, ugly and inefficient--get the job done, more or less.


I say it's time for libertarians to step up to the plate and start training with
the analytic equivalent of precision weaponry.


~~Faustine.



***

He that would make his own liberty secure must guard even his enemy from
oppression; for if he violates this duty he establishes a precedent that
will reach to himself.

- --Thomas Paine

-----BEGIN PGP SIGNATURE-----
Version: PGPsdk version 1.7.1 (C) 1997-1999 Network Associates, Inc. and its 
affiliated companies. (Diffie-Helman/DSS-only version)

iQA/AwUBPKN+//g5Tuca7bfvEQIesACg7Hyysg/3KyAVw3+thCM/da1KS+4AoKIs
kip/pU0+G5qlCzYTGTi90xTC
=cdAv
-----END PGP SIGNATURE-----




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Lucky Green

[OK, let me try this again, since we clearly got off on the wrong foot
here. My apologies for overreacting to Damien's post; I have been
receiving dozens of emails from the far corners of the Net over the last
few days that alternately claimed that I was a stooge of the NSA,
because everybody knows that 8k RSA keys can be factored in real-time,
or that 512-bit RSA keys were untouchable, since nobody could even
perform an exhaustive search of a 128-bit key space...]

Damien wrote:
 I am disputing that the improvements as presented are 
 practically relevant. Since you saw fit to cross-post to 
 openssh-unix-dev, which is a list concerned with code (not 
 polemic), that is the context in which I chose to frame my reply.

My post reported on what was announced at an academic cryptographic
conference by a cryptographer who has written peer-reviewed papers on
the design of large-scale cryptographic processing machines in the past
(i.e., how one would in practice build one of Rivest's MicroMint
machines). I believe my relaying these claims was responsible, given the
potentially massive security implications for a good part of the
infrastructure. In addition, a reporter for the Financial Times who was
present at the same event announced his intent to write about it as
well.

Nowhere in the post did I make, or intend to make, claims of my own as
to the impact of Bernstein's paper on factoring. I did report my own
reaction to the claims I witnessed, which was to create larger keys.
Others may choose to react differently. Furthermore, I provided those
concerned about the new claims with what I believe is a sound
recommendation on how to counter the potential threat: increase the
key size.

[On Nicko's rump session talk that they factored 512-bit keys on the
hardware in their office].
 You offer this aside in the context of an argument against 
 the insufficiency of 1024 bit RSA keys. Surely you don't 
 expect people to believe that you weren't including it to 
 bolster your argument?

To be perfectly honest, it never even occurred to me that somebody on a
mailing list related to cryptographic software would take my reporting
the news that somebody factored 512-bit keys on the computers in their
office as implying that this has any bearing on a potential ability to
factor 1024-bit keys on purpose-built hardware.

I really, really meant coincidentally when I wrote coincidentally. The
two pieces of news came within a day of each other, so while reporting
on one, I thought I'd mention the other as well. That's all.

Well, on second thought, I suppose there actually is a connection,
albeit a removed one, between the two: many sites still use 512-bit
keys. Even if one is unconcerned about 1024-bit keys being breakable,
hopefully those with 512-bit keys will take the fact that such keys can
be broken by some office hardware as a reason to upgrade their key
sizes.

[...]
 Your post is hyperbole because it is very long on verbiage and 
 very short on justification. Large claims require a good 
 amount of proof: If you expect everyone to switch to 2048 bit 
 keys on the basis of your rant alone, you may be disappointed.

I don't really personally care what key sizes others use. For all I
care, others are welcome to employ 4-bit RSA keys, as long as they don't
use those keys to authenticate themselves to any of the machines under
my control.

Which brings me to an issue that I hope is on-topic for this mailing
list: I would like to enforce that the keys my users use to
authenticate themselves to my sshd be of a minimum size. Is there a
config option to sshd that will reject user keys below a minimum size?
I didn't see anything in the man pages or on my first pass through the
code.

Thanks in advance,
--Lucky




RE: 1024-bit RSA keys in danger of compromise

2002-03-28 Thread Kevin Steves

On Thu, 28 Mar 2002, Lucky Green wrote:
:Which brings me to an issue that I hope is on-topic for this mailing
:list: I would like to enforce that the keys my users use to
:authenticate themselves to my sshd be of a minimum size. Is there a
:config option to sshd that will reject user keys below a minimum size?
:I didn't see anything in the man pages or on my first pass through the
:code.

no config option, but this change will be in the next release:

RCS file: /usr/OpenBSD/cvs/src/usr.bin/ssh/auth-rsa.c,v
retrieving revision 1.53
retrieving revision 1.54
diff -u -r1.53 -r1.54
--- src/usr.bin/ssh/auth-rsa.c  2002/03/25 09:21:13 1.53
+++ src/usr.bin/ssh/auth-rsa.c  2002/03/26 23:13:03 1.54
@@ -14,7 +14,7 @@
  */
 
 #include "includes.h"
-RCSID("$OpenBSD: auth-rsa.c,v 1.53 2002/03/25 09:21:13 markus Exp $");
+RCSID("$OpenBSD: auth-rsa.c,v 1.54 2002/03/26 23:13:03 markus Exp $");
 
 #include <openssl/rsa.h>
 #include <openssl/md5.h>
@@ -77,6 +77,13 @@
 	u_char buf[32], mdbuf[16];
 	MD5_CTX md;
 	int len;
+
+	/* don't allow short keys */
+	if (BN_num_bits(key->rsa->n) < 768) {
+		error("auth_rsa_verify_response: n too small: %d bits",
+		    BN_num_bits(key->rsa->n));
+		return (0);
+	}
 
 	/* The response is MD5 of decrypted challenge plus session id. */
 	len = BN_num_bytes(challenge);