Sorry about the last post; it was accidental...


        We've all been experiencing the recent server outages at the SETI@Home
lab. I figured this would happen sooner or later, given how many users
SETI@Home is gaining. So, I've been thinking a little about how to improve the
situation. Here is my best idea so far:

1) Have the SETI@Home servers put a cap on every client connection. I would
make this cap 7 kB/s. Yes, it would be slow, but it would also allow
hundreds more users to download their necessary packets from the server. A
rough sketch of one way such a cap could be enforced follows below.
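
For what it's worth, here is a minimal sketch of one way a server could
enforce such a per-client cap, using a token bucket. This is purely
hypothetical Python of my own; the names and numbers are mine, not anything
from the actual SETI@Home server code:

    import time

    class TokenBucket:
        # Per-client rate limiter: refills at 'rate' bytes/second and
        # allows bursts of up to 'capacity' bytes. Hypothetical sketch only.
        def __init__(self, rate, capacity):
            self.rate = rate              # e.g. 7 * 1024 for a 7 kB/s cap
            self.capacity = capacity
            self.tokens = capacity
            self.last = time.monotonic()

        def consume(self, nbytes):
            # Return True if this client may transfer nbytes right now.
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= nbytes:
                self.tokens -= nbytes
                return True
            return False

    # The server would keep one bucket per connected client:
    bucket = TokenBucket(rate=7 * 1024, capacity=7 * 1024)

The idea is that the server calls consume() before writing each chunk to a
client's socket, and waits briefly [or serves another client] whenever it
returns False.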

        Real-life example: I am using RoadRunner in Houston, TX [77077]. I always
get a download speed of at least 250 kilobytes/second. Now, if my speed were
capped at 7 kB/s, the bandwidth I use alone could serve about 35 users [if I
did the math correctly]. Like I said, it WOULD be slow, but almost
everyone could get their units.

        More evidence that this can work:
        [from SETI@Home website]
        January 29, 2002

        The campus wide outbound bandwidth cap is set at 70Mb/s. We are getting
only around 10Mb/s of this during the day but are getting all we need at
night. The best time for connecting varies but is roughly for the 2 hours on
either side of 1:00 PST, or 9:00 UT.
        [/from SETI@Home website]

        Now, take the worst-case scenario, where SETI@Home can get only 10 Mb/s;
even there, this cap will work wonders. Watch the units, though: 10
megabits/second is about 1,250 kilobytes/second. Let's just assume [for now]
that everyone downloads packets at about 40 kB/s; that bandwidth then serves
only about 31 users at once. If everyone were limited to 7 kB/s downloads,
about 178 users [if I did my math right this time] could download their units
at the same time, nearly six times as many...
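
If you want to check my numbers, here is the same arithmetic as a few lines
of Python [assuming decimal units, 1 Mb = 1,000,000 bits, plus the figures
quoted above]:

    # Concurrent-user capacity = total bandwidth / per-client rate.
    total_kBps = 10 * 1000 * 1000 / 8 / 1000  # 10 Mb/s in kB/s -> 1250.0
    uncapped   = total_kBps / 40              # everyone at ~40 kB/s -> ~31 users
    capped     = total_kBps / 7               # everyone at   7 kB/s -> ~178 users
    roadrunner = 250 / 7                      # my 250 kB/s alone   -> ~35 users
    print(int(uncapped), int(capped), int(roadrunner))   # 31 178 35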

Who knows, maybe this could be the solution to these problems!

TJ
AKA [EMAIL PROTECTED]

PS: Please inform me if I did the math wrong. I just thought this up and
figured I'd tell everyone about my brilliant plan.
