Re: hardware/optimizations for a download-webserver

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 20:05, Brett Parker <[EMAIL PROTECTED]> wrote:
> > (create large file)
> > [EMAIL PROTECTED]:~$ dd if=/dev/urandom of=public_html/large_file bs=1024 count=5
> > 5+0 records in
> > 5+0 records out
> >
> > (get large file)
> > [EMAIL PROTECTED]:~$ wget www.lobefin.net/~steve/large_file
> > [...]
> > 22:46:09 (9.61 MB/s) - `large_file' saved [5120/5120]
> >
> > Of course, for reasonably sized files (where reasonable is <10MB),
> > I get transfer speeds closer to 11MB/s.  YMMV, but it is not a fault
> > of the TCP protocol.  This is a switched 10/100 connection.  Real
> > internet travel adds some latency, of course, but that's not the point -
> > the NIC is not the bottleneck in the OP's question; bandwidth is.
>
> *ARGH*... and of course, there's *definitely* no compression going on
> there, is there...

If the files come from /dev/urandom then there won't be any significant 
compression.

http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.1/0257.html

Once again, see the above URL for David S. Miller's .sig on the topic.
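
For anyone who wants to repeat the test with the compression question taken 
off the table entirely, something like this should do it (a sketch - the host 
name and file size are only examples):

  # ~100MB straight from /dev/urandom will not compress to any useful degree
  dd if=/dev/urandom of=public_html/large_file bs=1M count=100

  # sanity check: gzip should hand back roughly the same number of bytes
  gzip -c public_html/large_file | wc -c

  # fetch it over the LAN, discarding the output; a file this size also
  # keeps cache warm-up from skewing the measured rate
  wget -O /dev/null http://server.example.net/~user/large_file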

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page



Re: hardware/optimizations for a download-webserver

2004-07-20 Thread Russell Coker
On Tue, 20 Jul 2004 10:39, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> >Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.
>
> I do not believe it!

http://www.uwsg.iu.edu/hypermail/linux/kernel/9704.1/0257.html

See the above message from David S. Miller <[EMAIL PROTECTED]>, posted 
in 1997.  At the time Dave used it as his standard .sig because >11MB/s of 
TCP throughput was really ground-breaking performance for Linux!

When I did tests I never got 11MB/s on my machines, probably because my 
hardware was not as good and because I used real-world applications such as 
FTP rather than raw TCP benchmarks.

100/8 == 12.5.  The wire is capable of 12.5MB/s, so having a protocol reach 
11.26MB/s isn't so strange.
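
For anyone who wants the back-of-the-envelope version of that ceiling, here 
is a rough sketch.  It assumes a plain 1500-byte MTU, 20-byte IP and 20-byte 
TCP headers, 38 bytes of Ethernet framing per frame (header, FCS, preamble, 
inter-frame gap), and ignores ACK traffic:

  # 1460 payload bytes ride on 1538 wire bytes per frame
  echo 'scale=2; (100000000/8) * 1460 / 1538 / 1000000' | bc
  # prints about 11.86, so ~11.9MB/s is the theoretical best for TCP on
  # 100baseT, and real-world figures of 9-11MB/s are entirely plausible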

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: hardware/optimizations for a download-webserver

2004-07-20 Thread Brett Parker
On Mon, Jul 19, 2004 at 10:49:26PM -0400, Stephen Gran wrote:
> This one time, at band camp, Michelle Konzack said:
> > On 2004-07-19 10:01:06, Russell Coker wrote:
> > >On Mon, 19 Jul 2004 05:59, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> > >> >Thinking of the expected 50KB/sec download rate i calculated a
> > >> >theoretical maximum of ~250 simultaneous downloads -- am i right ?
> > >>
> > >> With a 100 MBit NIC you can have a maximum of 7 MByte/sec
> > >
> > >What makes you think so?
> > >
> > >Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.
> > 
> > I do not believe it!
> > 
> > Maybe with UDP, but with TCP it is not possible because of the protocol.
> > I have high-performance NICs and some servers which are killers,
> > but I have never got more than 7.4 MByte/second.
> > 
> > How do you benchmark?
> > Two computers with a 2-foot cross-over cable?
> > 
> > Maybe you will have zero errors, but in the real world it does not work.
> 
> (create large file)
> [EMAIL PROTECTED]:~$ dd if=/dev/urandom of=public_html/large_file bs=1024 count=5
> 5+0 records in
> 5+0 records out
> 
> (get large file)
> [EMAIL PROTECTED]:~$ wget www.lobefin.net/~steve/large_file
> [...]
> 22:46:09 (9.61 MB/s) - `large_file' saved [5120/5120]
> 
> Of course, for reasonably sized files (where reasonable is <10MB),
> I get transfer speeds closer to 11MB/s.  YMMV, but it is not a fault
> of the TCP protocol.  This is a switched 10/100 connection.  Real
> internet travel adds some latency, of course, but that's not the point -
> the NIC is not the bottleneck in the OP's question; bandwidth is.

*ARGH*... and of course, there's *definitely* no compression going on
there, is there...

Cheers.
-- 
Brett Parker





Re: hardware/optimizations for a download-webserver

2004-07-19 Thread Stephen Gran
This one time, at band camp, Michelle Konzack said:
> On 2004-07-19 10:01:06, Russell Coker wrote:
> >On Mon, 19 Jul 2004 05:59, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> >> >Thinking of the expected 50KB/sec download rate i calculated a
> >> >theoretical maximum of ~250 simultaneous downloads -- am i right ?
> >>
> >> With a 100 MBit NIC you can have a maximum of 7 MByte/sec
> >
> >What makes you think so?
> >
> >Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.
> 
> I do not believe it!
> 
> Maybe with UDP, but with TCP it is not possible because of the protocol.
> I have high-performance NICs and some servers which are killers,
> but I have never got more than 7.4 MByte/second.
> 
> How do you benchmark?
> Two computers with a 2-foot cross-over cable?
> 
> Maybe you will have zero errors, but in the real world it does not work.

(create large file)
[EMAIL PROTECTED]:~$ dd if=/dev/urandom of=public_html/large_file bs=1024 count=5
5+0 records in
5+0 records out

(get large file)
[EMAIL PROTECTED]:~$ wget www.lobefin.net/~steve/large_file
[...]
22:46:09 (9.61 MB/s) - `large_file' saved [5120/5120]

Of course, for reasonably sized files (where reasonable is <10MB),
I get transfer speeds closer to 11MB/s.  YMMV, but it is not a fault
of the TCP protocol.  This is a switched 10/100 connection.  Real
internet travel adds some latency, of course, but that's not the point -
the NIC is not the bottleneck in the OP's question; bandwidth is.

-- 
 --------------------------------------------------
|  ,''`.                              Stephen Gran |
| : :' :                         [EMAIL PROTECTED] |
| `. `'          Debian user, admin, and developer |
|   `-                        http://www.debian.org|
 --------------------------------------------------




Re: hardware/optimizations for a download-webserver

2004-07-19 Thread Michelle Konzack
On 2004-07-19 10:01:06, Russell Coker wrote:
>On Mon, 19 Jul 2004 05:59, Michelle Konzack <[EMAIL PROTECTED]> wrote:
>> >Thinking of the expected 50KB/sec download rate i calculated a
>> >theoretical maximum of ~250 simultaneous downloads -- am i right ?
>>
>> With a 100 MBit NIC you can have a maximum of 7 MByte/sec
>
>What makes you think so?
>
>Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.

I do not believe it!

Maybe with UDP, but with TCP it is not possible because of the protocol.
I have high-performance NICs and some servers which are killers,
but I have never got more than 7.4 MByte/second.

How do you benchmark?
Two computers with a 2-foot cross-over cable?

Maybe you will have zero errors, but in the real world it does not work.

Greetings
Michelle

-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
Michelle Konzack    Apt. 917                   ICQ #328449886
                    50, rue de Soultz          MSM LinuxMichi
0033/3/88452356     67100 Strasbourg/France    IRC #Debian (irc.icq.com)




Re: hardware/optimizations for a download-webserver

2004-07-18 Thread Russell Coker
On Mon, 19 Jul 2004 05:59, Michelle Konzack <[EMAIL PROTECTED]> wrote:
> >Thinking of the expected 50KB/sec download rate i calculated a
> >theoretical maximum of ~250 simultaneous downloads -- am i right ?
>
> With a 100 MBit NIC you can have a maximum of 7 MByte/sec

What makes you think so?

Other people get >10MB/s.  I've benchmarked some of my machines at 9MB/s.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: hardware/optimizations for a download-webserver

2004-07-18 Thread Michelle Konzack
On 2004-07-18 13:37:03, Henrik Heil wrote:

>However the 50/150 concurrent requests are a guess (best i can get for now)
>What do you think is the request-limit with a
>Pentium IV 2 GHz, 1GB RAM, 100Mbit, IDE-disk ?
>
>Thinking of the expected 50KB/sec download rate i calculated a 
>theoretical maximum of ~250 simultaneous downloads -- am i right ?

With a 100 MBit NIC you can have a maximum of 7 MByte/sec

>What is the practical throughput with a 100Mbit (non-realtek) NIC ?

I use a 3Com 3c905C-TX.

>Thanks,
>Henrik

Greetings
Michelle

-- 
Linux-User #280138 with the Linux Counter, http://counter.li.org/
Michelle Konzack    Apt. 917                   ICQ #328449886
                    50, rue de Soultz          MSM LinuxMichi
0033/3/88452356     67100 Strasbourg/France    IRC #Debian (irc.icq.com)




Re: hardware/optimizations for a download-webserver

2004-07-18 Thread Johannes Formann
Henrik Heil <[EMAIL PROTECTED]> wrote:

> However the 50/150 concurrent requests are a guess (best i can get for now)
> What do you think is the request-limit with a
> Pentium IV 2 GHz, 1GB RAM, 100Mbit, IDE-disk ?

Since all your files could be cached in RAM, a few hundred should be
possible with a fast webserver like thttpd.
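
If you go that route, thttpd needs almost no configuration.  A sketch (port,
paths and user are only examples - check thttpd(8) for the exact flags in
your version):

  # serve a static directory on port 80, dropping privileges, with a logfile
  thttpd -p 80 -d /var/www/downloads -u www-data -l /var/log/thttpd.log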

> What is the practical throughput with a 100Mbit (non-realtek) NIC ?

I think this depends more on your upstream and webserver than on the
NIC; I've seen servers pushing more than 40 MBit/s running fine.

regards Johannes






Re: hardware/optimizations for a download-webserver

2004-07-18 Thread Henrik Heil
Thanks for your advice -- seems I have been too chicken-hearted.

> Summary: Don't bother with tuning the server and don't even think about
> setting up a cluster for something like this - definitely overkill. ;o)

That's what I'll do ;-)

However, the 50/150 concurrent requests are a guess (the best I can get for
now).  What do you think is the request limit for a
Pentium IV 2 GHz, 1GB RAM, 100Mbit, IDE disk?

Thinking of the expected 50KB/sec download rate, I calculated a 
theoretical maximum of ~250 simultaneous downloads -- am I right?
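
That figure comes from nothing more than this (a sketch that ignores
protocol overhead, which will pull the real number somewhat lower):

  # 100Mbit/s = 12500KB/s, divided among 50KB/s streams
  echo '(100000/8) / 50' | bc    # -> 250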

What is the practical throughput with a 100Mbit (non-realtek) NIC ?
Thanks,
Henrik
--
Henrik Heil, zweipol Coy & Heil GbR
http://www.zweipol.net/


Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Russell Coker
On Sat, 17 Jul 2004 14:09, Nate Duehr <[EMAIL PROTECTED]> wrote:
> Other good ways to do this include a shared RAID'ed network filesystem
> on a central box and two front-end boxes that are load-balanced with a
> hardware load-balancer.  That gets into the "must be up 24/7" realm, or
> close to it.  I worked on an environment that did this with a hardware
> NFS server (NetApp) and the front-ends could be up or down, it just
> didn't matter... as long as enough of them were up to handle the
> current load.

There are two ways of making storage available to two machines.  One is to 
have a shared SCSI bus and clustering software - but in my experience this is 
a major cause of clusters being less reliable than stand-alone machines.  
The other way is to use an NFS server.

For an NFS server there are two main options, one is using a Linux NFS server 
and the other is a dedicated hardware box such as NetApp.  The problem with 
using a Linux machine is that Linux as an NFS server is probably no more 
reliable than Linux as an Apache server (and may be less reliable).  In 
addition you have network issues etc, so you may as well just have a single 
machine.  Using a NetApp is expensive but gives some nice features in terms 
of backup etc (most of which can be done on Linux if you have the time and 
knowledge).  A NetApp Filer should be more reliable than a Linux NFS server, 
but you still have issues with the Linux NFS client code.

My best idea for a clustered web server was to have a master machine that 
content is uploaded to via a modified FTP server.  The FTP server would 
launch rsync after the file transfer to update the affected tree.  Cron jobs 
would periodically rsync the lot in case the FTP server didn't correctly 
launch the rsync job.  That way there are machines that have no dependencies 
on each other.  The idea was to use IPVS to direct traffic to all the 
servers.
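
As a sketch of what that looks like in practice (host names and paths are
invented for the example):

  # what the modified FTP server would run after each upload completes
  rsync -a --delete /var/www/docroot/ web1:/var/www/docroot/
  rsync -a --delete /var/www/docroot/ web2:/var/www/docroot/

  # plus a cron safety net on the master, e.g. every 15 minutes, in case a
  # post-upload push was missed:
  # */15 * * * * rsync -a --delete /var/www/docroot/ web1:/var/www/docroot/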

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page



Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Nate Duehr
On Jul 16, 2004, at 8:28 PM, Russell Coker wrote:
> Installing a single machine and hoping for the best often gives better
> results.

I agree in most cases.
One possible better solution that is one step short of creating a 
cluster is installing a single machine, and making sure that rock-solid 
bare-metal backups happen regularly and that an identical "offline" 
machine is available on a few minutes notice if the site is manned 
24/7, and available on a PRE-agreed-to timeframe (including downtime) 
at a dark site.

The hard part about the above is people try to skip the step of buying 
the IDENTICAL hardware for the standby machine and then scramble to 
reconfigure or fight with other hardware issues when they swing to the 
machine manually.

Other good ways to do this include a shared RAID'ed network filesystem 
on a central box and two front-end boxes that are load-balanced with a 
hardware load-balancer.  That gets into the "must be up 24/7" realm, or 
close to it.  I worked on an environment that did this with a hardware 
NFS server (NetApp) and the front-ends could be up or down, it just 
didn't matter... as long as enough of them were up to handle the 
current load.

But judging by the original poster's file sizes and traffic load, I have a 
feeling that his machine is probably not a required-24/7-uptime type of 
system.

It's fun to design systems like that, though.  Quite a good mental 
exercise thinking of all the possible points of failure and 
communicating them to those who have to make the money/redundancy-level 
decisions.

--
Nate Duehr, [EMAIL PROTECTED]



Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Russell Coker
On Sat, 17 Jul 2004 10:39, Nate Duehr <[EMAIL PROTECTED]> wrote:
> On Jul 16, 2004, at 1:43 PM, Markus Oswald wrote:
> > Summary: Don't bother with tuning the server and don't even think about
> > setting up a cluster for something like this - definitely overkill. ;o)
>
> Unless there's a business requirement that it be available 24/7 with no
> maintenance downtime - that adds a level of complexity (and other
> questions that would need to be asked like "do we need a second machine
> at another data center?") to the equation.

That's a good point.  But keep in mind that when done wrong clusters decrease 
reliability and increase down-time.

I have never been involved in running a cluster where it worked as well as a 
single machine would have.  Clusters need good cluster software (which does 
not exist for Solaris; there's probably something good for Linux), they need 
a lot of testing (most people don't test properly), and they need careful 
planning.

Installing a single machine and hoping for the best often gives better 
results.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Nate Duehr
On Jul 16, 2004, at 1:43 PM, Markus Oswald wrote:
> Summary: Don't bother with tuning the server and don't even think about
> setting up a cluster for something like this - definitely overkill. ;o)

Unless there's a business requirement that it be available 24/7 with no 
maintenance downtime - that adds a level of complexity (and other 
questions that would need to be asked like "do we need a second machine 
at another data center?") to the equation.

--
Nate Duehr, [EMAIL PROTECTED]


Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Russell Coker
On Sat, 17 Jul 2004 05:42, Skylar Thompson <[EMAIL PROTECTED]> wrote:
> As long as we're not talking about 486-class machines, the processor is not
> going to be the bottleneck; the bandwidth is. Multiplying 150 peak users by
> 50kB/s gives 7.5MB/s, so your disks should be able to spit out at least
> 5MB/s. You should also make sure you have plenty of RAM (at least 512MB) to
> make sure you can cache as much of the files in RAM as possible.

As long as we are not talking about 486-class hardware, disks can handle 
>5MB/s.  In 1998 I bought the cheapest available Thinkpad with a 3G IDE disk 
and it could do that speed for the first gigabyte of the hard disk.  In 2000 
I bought a newer Thinkpad with a 7.5G IDE disk which could do >6MB/s over the 
entire disk and >9MB/s for the first 3G.  Also in 2000 I bought some cheap 
46G IDE disks which could do >30MB/s for the first 20G and >18MB/s over the 
entire disk.

If you buy one of the cheapest IDE disks available new (i.e. not stuff that's 
been on the shelf for a few years) and you connect it to an ATA-66 or ATA-100 
bus on the cheapest ATX motherboard available then you should expect to be 
able to do bulk reads at speeds in excess of 40MB/s easily, and probably 
>50MB/s for some parts of the disk.  I haven't had a chance to benchmark any 
of the 10,000rpm S-ATA disks, but I would hope that they could sustain bulk 
read speeds of 70MB/s or more.

The next issue is seek performance.  Getting large transfer rates when reading 
large amounts of data sequentially is easy.  Getting large transfer rates 
while reading smaller amounts of data is more difficult.  Hypothetically 
speaking if you wanted to read data in 1K blocks without any caching and it 
was not in order then you would probably find it difficult to sustain more 
than about 2MB/s on a RAID array.  Fortunately modern hard disks have 
firmware that implements read-ahead (the last time I was purchasing hard 
disks the model with 8M of read-ahead buffer was about $2 more than one with 
2M of read-ahead buffer).  When you write files to disk the OS will try to 
keep them contiguous as much as possible, so the read-ahead in the drive may 
help if the OS doesn't do decent caching.  However Linux does really 
aggressive caching of both meta-data and file data, and Apache should be 
doing reads with significantly larger block sizes than 1K.
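
If you want to check what a particular disk really does before trusting any
of those numbers, the quick tests are something like this (the device name
is an example, and the dd read will keep the disk busy for a while):

  hdparm -t /dev/hda                             # timed sequential reads
  dd if=/dev/hda of=/dev/null bs=1M count=1000   # ~1GB raw bulk read
  bonnie++ -d /tmp -u nobody                     # seeks plus per-char/block IO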


I expect that if you get a P3-800 class machine with a 20G IDE disk and RAM 
that's more than twice the size of the data that's to be served (easy when 
it's only 150M of data) then there will not be any performance problems.

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page





Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Markus Oswald
On Fri, 2004-07-16 at 20:53, Henrik Heil wrote:
> Hello,
> please excuse my general questions.
> 
> A customer asked me to setup a dedicated webserver that will offer ~30 
> files (each ~5MB) for download and is expected to receive a lot of 
> traffic. Most of the users will have cable modems and their download 
> speed should not drop below 50KB/sec.
> 
> My questions are:
> What would be an adequate hardware to handle i.e. 50(average)/150(peak) 
> concurrent downloads?
> What is the typical bottleneck in this setup?
> What optimizations should i apply to a standard woody or sarge 
> installation? (anything kernelwise?)

Maybe I'm too optimistic, but I really don't think you will max out any
halfway decent server with this load...

30 x 5 MB will give you 150MB of content.  This should be easily cached in
RAM, even without something like a ramdisk, as Linux does this by itself.
Disk I/O should not be a problem.

Furthermore the content seems to be static - no need for a fast CPU.

150 concurrent downloads will be no problem for Apache, even with the
default settings.  Only if you want to spawn more than 512 (?)
child processes will you have to recompile and increase HARD_SERVER_LIMIT.
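
For reference, MaxClients is the run-time knob and it can't be raised past
the compiled-in HARD_SERVER_LIMIT (256 in a stock 1.3 tree, if I remember
right).  A sketch, with Debian's apache 1.3 paths assumed:

  # see what ceiling your build was compiled with (header path is a guess)
  grep -rn HARD_SERVER_LIMIT /usr/include/apache-1.3/ 2>/dev/null

  # for a couple of hundred clients the stock binary is plenty; just set
  # 'MaxClients 200' in /etc/apache/httpd.conf and reload:
  apachectl configtest && apachectl graceful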

Summary: Don't bother with tuning the server and don't even think about
setting up a cluster for something like this - definitely overkill. ;o)

I've a Debian box here which currently serves more than 160 req/second
of dynamic content - no problem at all. The HTTP-cluster next to it is
intended to handle WAY bigger loads...

best regards,
  Markus
-- 
Markus Oswald <[EMAIL PROTECTED]>  \ Unix and Network Administration
Graz, AUSTRIA                      \ High Availability / Cluster
Mobile: +43 676 6485415            \ System Consulting
Fax:    +43 316 428896             \ Web Development





Re: hardware/optimizations for a download-webserver

2004-07-16 Thread Skylar Thompson
On Fri, Jul 16, 2004 at 08:53:21PM +0200, Henrik Heil wrote:
> Hello,
> please excuse my general questions.
> 
> A customer asked me to setup a dedicated webserver that will offer ~30 
> files (each ~5MB) for download and is expected to receive a lot of 
> traffic. Most of the users will have cable modems and their download 
> speed should not drop below 50KB/sec.
> 
> My questions are:
> What would be an adequate hardware to handle i.e. 50(average)/150(peak) 
> concurrent downloads?
> What is the typical bottleneck in this setup?
> What optimizations should i apply to a standard woody or sarge 
> installation? (anything kernelwise?)

As long as we're not talking about 486-class machines, the processor is not
going to be the bottleneck; the bandwidth is. Multiplying 150 peak users by
50kB/s gives 7.5MB/s, so your disks should be able to spit out at least
5MB/s. You should also make sure you have plenty of RAM (at least 512MB) to
make sure you can cache as much of the files in RAM as possible.
 
> I have experiences with not so specialized servers (apache1.x/php4.x 
> hosting on debian/woody/sarge) but never really hit any limits with these.
> 
> I thought about:
> 
> - tuning apache (obviously) -- raising Max/MinSpareServers, AllowOverride 
> none, FollowSymLinks,...

StartServers and the SpareServers settings are probably going to be the most
important options to tweak.  As a starting point, launch at least 20 servers
and keep the number of spare servers above five, but you'll have to
experiment with it in production to see what works best.

You might also get some performance boost by turning off all the
unnecessary modules like mod_php and mod_perl if you don't need them.
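
A sketch of what that tuning looks like, with the numbers as starting points
only and Debian's apache 1.3 config path assumed:

  # in /etc/apache/httpd.conf, set the prefork knobs along these lines:
  #   StartServers      20
  #   MinSpareServers    5
  #   MaxSpareServers   20
  # comment out unneeded LoadModule lines (mod_php, mod_perl, ...) in the
  # same file, then check and reload:
  apachectl configtest && apachectl graceful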

> - putting the files on a ramdisk or using mod_mmap_static (only ~600MB 
> altogether)

You could try putting everything in a RAM disk, but if it's relatively
static content and you have plenty of RAM the kernel will eventually cache
everything in RAM anyway.
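
If you do want to try the explicit route anyway, tmpfs is the simple way
(the size and paths here are made up for the example):

  mount -t tmpfs -o size=700m tmpfs /var/www/downloads
  cp -a /srv/files/. /var/www/downloads/

But as said above, the page cache will normally get you the same effect for
free.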
 
> - replacing apache with fnord (http://www.fefe.de/fnord/) or cthulhu 
> (http://cthulhu.fnord.at/). Can anyone share experiences with these?

This might help, but these might have their own configuration problems. If
you're more familiar with Apache, you'll probably have an easier time
tweaking it than something unfamiliar.
 
> - (as a last resort) using 2 loadbalancing servers with lvs 
> (http://www.linuxvirtualserver.org/).

This might help, but it'll add another layer of complexity that could fail.
I'd rather build one good machine than two less-good machines.

-- 
-- Skylar Thompson ([EMAIL PROTECTED])
-- http://www.cs.earlham.edu/~skylar/




hardware/optimizations for a download-webserver

2004-07-16 Thread Henrik Heil
Hello,
please excuse my general questions.
A customer asked me to set up a dedicated webserver that will offer ~30 
files (each ~5MB) for download and is expected to receive a lot of 
traffic. Most of the users will have cable modems and their download 
speed should not drop below 50KB/sec.

My questions are:
What would be adequate hardware to handle, say, 50 (average) / 150 (peak) 
concurrent downloads?
What is the typical bottleneck in this setup?
What optimizations should I apply to a standard woody or sarge 
installation? (anything kernel-wise?)

I have experience with less specialized servers (apache1.x/php4.x 
hosting on debian/woody/sarge) but never really hit any limits with them.

I thought about:
- tuning apache (obviously) -- raising Max/MinSpareServers, AllowOverride 
none, FollowSymLinks,...

- putting the files on a ramdisk or using mod_mmap_static (only ~600MB 
altogether)

- replacing apache with fnord (http://www.fefe.de/fnord/) or cthulhu 
(http://cthulhu.fnord.at/). Can anyone share experiences with these?

- (as a last resort) using 2 loadbalancing servers with lvs 
(http://www.linuxvirtualserver.org/).

Thanks,
Henrik
--
Henrik Heil, zweipol Coy & Heil GbR
http://www.zweipol.net/