On 04/15/2010 11:35 AM, Brian Lavender wrote:
> Isn't there a SPECWeb test or something like that for this?
> I believe they cost $$$ though. You could also do a capture and replay
> it back to your web server.
>
> There must be some benchmarking tools or simulation methodologies.
Oh, tons.
Isn't there a SPECWeb test or something like that for this?
I believe they cost $$$ though. You could also do a capture and replay
it back to your web server.
There must be some benchmarking tools or simulation methodologies.
brian
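There are indeed ready-made tools for this: `ab` (ApacheBench) ships with Apache, and httperf and siege are common alternatives. For a quick roll-your-own check, a minimal sketch of a concurrent-download timer (the URL is a placeholder and the stub fetcher stands in for a real HTTP client such as `urllib.request.urlopen(url).read()`):

```python
# Hypothetical mini-benchmark: time N concurrent "clients" fetching a file.
# The fetch function is pluggable so the same harness can drive a real HTTP
# client or a stub during testing.
import time
from concurrent.futures import ThreadPoolExecutor

def run_benchmark(fetch, url, clients=4):
    """Run `clients` concurrent fetches of `url`; return (seconds, bytes)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=clients) as pool:
        sizes = list(pool.map(lambda _: len(fetch(url)), range(clients)))
    return time.perf_counter() - start, sum(sizes)

# Example with a stub fetcher; swap in a real client to hit a real server.
elapsed, total = run_benchmark(lambda url: b"x" * 1024, "http://server/big.iso")
```

Total bytes divided by elapsed time gives aggregate throughput, which is the number that matters for the large-file case.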
On Wed, Apr 14, 2010 at 11:56:03PM -0700, Bill Broadley wrote:
On 04/14/2010 10:51 PM, Brian Lavender wrote:
> I would think that you need to tune your filesystem. How about this
> article on "On-demand readahead"
>
> http://lwn.net/Articles/235164/
Ah, I found this:
http://www.redhat.com/magazine/001nov04/features/vm/#tuning-vm
What I'd recommend is pick a
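One knob worth knowing alongside the VM settings is per-device readahead (`/sys/block/<dev>/queue/read_ahead_kb`, or `blockdev --setra`). A minimal sketch of the usual arithmetic, with hypothetical RAID geometry: the idea is that readahead should cover at least one full stripe so a sequential read touches every disk.

```python
# Sketch: derive a readahead setting from RAID stripe geometry.
# Device names and geometry below are hypothetical examples.
def full_stripe_kb(chunk_kb, data_disks):
    """Size of one full RAID stripe in KB (chunk size x data-bearing disks)."""
    return chunk_kb * data_disks

def readahead_kb(chunk_kb, data_disks, stripes=2):
    """Suggest a readahead covering `stripes` full stripes."""
    return full_stripe_kb(chunk_kb, data_disks) * stripes

# e.g. a 6-disk RAID5 (5 data disks) with 64 KB chunks:
print(readahead_kb(64, 5))  # a value one might write to
                            # /sys/block/md0/queue/read_ahead_kb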
On 04/14/2010 11:23 PM, Brian Lavender wrote:
>> What gets ugly is if you have 2 or more clients accessing 2 or more
>> files. Suddenly it becomes very very important to intelligently handle
>> your I/O. Say you have 4 clients, reading 4 ISO files, and a relatively
>> stupid/straightforward I/O
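The effect of per-client read size on that multi-client case is easy to model: with several interleaved streams, every small read risks a seek, so large contiguous reads amortize the seek cost. A toy back-of-the-envelope (all parameters are made up, not measurements):

```python
# Toy model: 4 clients stream 4 ISOs from one disk that pays a seek
# penalty whenever it switches files. Numbers are illustrative only.
def streaming_time(file_mb, clients, chunk_mb, seek_ms=10.0, mb_per_s=100.0):
    """Seconds to stream `clients` files round-robin in `chunk_mb` pieces."""
    chunks_per_file = file_mb / chunk_mb
    seeks = chunks_per_file * clients          # one seek per chunk switch
    transfer = file_mb * clients / mb_per_s    # raw sequential transfer time
    return transfer + seeks * seek_ms / 1000.0

small = streaming_time(4096, 4, chunk_mb=0.064)  # 64 KB reads
large = streaming_time(4096, 4, chunk_mb=4.0)    # 4 MB reads
# With tiny reads the disk spends most of its time seeking;
# with big reads the seek overhead nearly vanishes.
```

In this model the 64 KB case is more than an order of magnitude slower than the 4 MB case, purely from seek overhead.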
On 04/14/2010 10:51 PM, Brian Lavender wrote:
> I would think that you need to tune your filesystem. How about this
> article on "On-demand readahead"
>
> http://lwn.net/Articles/235164/
I see a discussion of a few different approaches/patches, but nothing
that really shows what is currently in t
On Tue, Apr 13, 2010 at 12:23:23AM -0700, Bill Broadley wrote:
> On 03/31/2010 05:12 PM, Alex Mandel wrote:
> > I'm looking for some references and tips on how to tune a server
> > specifically for serving large files over the internet, i.e. 4 GB ISO
> > files. I'm talking software config tweaks here
I would think that you need to tune your filesystem. How about this
article on "On-demand readahead"
http://lwn.net/Articles/235164/
brian
On Tue, Apr 13, 2010 at 08:16:05PM -0700, Bill Broadley wrote:
> BTW, as an example of how bad performance gets when you are randomly
> accessing 4G of data
BTW, as an example of how bad performance gets when you are randomly
accessing 4G of data:
http://broadley.org/bill/random-read.png
I've not seen any way to tune Linux software RAID or Apache that lets
you set the minimum read size/prefetch.
Note it's a log/log graph.
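The shape of a graph like that can be reproduced with a few lines; a sketch (the path is a placeholder, and on a real test the file must be far larger than RAM, with caches dropped, or you measure memory rather than disk):

```python
# Sketch: measure throughput of random block-sized reads from a file.
import os, random, time

def random_read_rate(path, block_size, reads=100):
    """Return MB/s achieved reading `reads` random `block_size`-byte blocks."""
    file_size = os.path.getsize(path)
    fd = os.open(path, os.O_RDONLY)
    try:
        start = time.perf_counter()
        done = 0
        for _ in range(reads):
            offset = random.randrange(0, max(1, file_size - block_size))
            done += len(os.pread(fd, block_size, offset))
        elapsed = time.perf_counter() - start
    finally:
        os.close(fd)
    return done / elapsed / 1e6
```

Sweeping `block_size` from 4 KB up to a few MB and plotting the result on log/log axes gives the curve in question.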
On 04/13/2010 01:09 AM, Alex Mandel wrote:
> Bill Broadley wrote:
>> On 03/31/2010 05:12 PM, Alex Mandel wrote:
>>> I'm looking for some references and tips on how to tune a server
>>> specifically for serving large files over the internet, i.e. 4 GB ISO
>>> files. I'm talking software config tweaks
Quoting Alex Mandel (tech_...@wildintellect.com):
> Apache is the path of least resistance, as it's in use on all the machines in this
> cluster of various web services, and all the admins know how to configure
> it. I'm open to exploring anything that's in Debian+Backports and is
> current/supported. Most of t
Bill Broadley wrote:
> On 03/31/2010 05:12 PM, Alex Mandel wrote:
>> I'm looking for some references and tips on how to tune a server
>> specifically for serving large files over the internet, i.e. 4 GB ISO
>> files. I'm talking software config tweaks here.
>
> How many 4GB ISO files are there? How many simultaneous files?
On 03/31/2010 05:12 PM, Alex Mandel wrote:
> I'm looking for some references and tips on how to tune a server
> specifically for serving large files over the internet, i.e. 4 GB ISO
> files. I'm talking software config tweaks here.
How many 4GB ISO files are there? How many simultaneous files?
I'm looking for some references and tips on how to tune a server
specifically for serving large files over the internet, i.e. 4 GB ISO
files. I'm talking software config tweaks here.
The system is using a striped RAID, the filesystem is XFS (some
tuning there, maybe?), and it will be running Apache
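On the XFS question: XFS can be told the RAID geometry at mkfs time so allocations align to full stripes, via `mkfs.xfs -d su=...,sw=...` (`su` is the RAID chunk size, `sw` the number of data-bearing disks). A sketch of the arithmetic, assuming a hypothetical 6-disk RAID5:

```python
# Sketch: XFS stripe-alignment values for mkfs.xfs on a striped RAID.
# The geometry used below is hypothetical; su * sw equals one full stripe.
def xfs_stripe_opts(chunk_kb, total_disks, parity_disks=0):
    """Return the mkfs.xfs -d su/sw settings for the given geometry."""
    sw = total_disks - parity_disks          # data-bearing disks only
    return "su=%dk,sw=%d" % (chunk_kb, sw)

# e.g. a 6-disk RAID5 with 64 KB chunks:
print("mkfs.xfs -d " + xfs_stripe_opts(64, 6, parity_disks=1))
```

For RAID0 all disks carry data (`parity_disks=0`); for RAID5 subtract one, for RAID6 two.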