Dear Adrian,
I will try an alternative disk controller and report back with the result.
Regards,
Adi.
How much disk IO is going on when the CPU shows 70% IOWAIT? That is far too
much; the CPU shouldn't be spending that much of its time in IOWAIT. I think
you really should consider trying an alternative disk controller.
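For illustration (this assumes the sysstat package is installed; the interval is arbitrary), something along these lines would show how much IO is actually hitting the disks while IOWAIT is that high:

  # extended per-device statistics: request rates, throughput, utilisation
  iostat -x 5
  # per-CPU breakdown, including %iowait
  mpstat -P ALL 5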
adrian
Dear Adrian and Heinz,
Sorry for the delayed reply, and thanks for all the help so far.
I have tried changing the file system (ext2 and ext3) and changed the
partitioning geometry (fdisk -H 224 -S 56), as I read that this would improve
performance with SSDs.
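(For reference, the steps were roughly as follows; the device name here is just a placeholder:)

  # start fdisk with 224 heads x 56 sectors per track, a geometry often suggested
  # at the time so partition boundaries line up better with SSD erase blocks
  fdisk -H 224 -S 56 /dev/sdb
  # plain ext2 (no journal) on the resulting partition
  mkfs.ext2 /dev/sdb1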
I tried ufs, aufs and even coss (downgrade t…
Generally, large amounts of CPU time spent in IO wait mean that the
driver is not well written or that the hardware requires extra upkeep to
handle IO operations.
What hardware in particular are you using?
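(To identify it, something as simple as this is usually enough; the grep patterns are only examples:)

  # list the storage controller(s) on the PCI bus
  lspci | grep -i -E 'sata|ide|raid|scsi'
  # see which driver the kernel bound to the controller and drives
  dmesg | grep -i -E 'ahci|ata|scsi'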
This was one of those big differences between IDE and SATA in the past,
btw. At least under Linux…
Well, I'm seeing that the CPU is spending a lot of time waiting for outstanding
disk I/O requests.
Adi
Well, from what I've read, SSDs don't necessarily provide very high
random write throughput over time. You should do some further research
into how they operate to understand what the issues may be.
In any case, the much more important information is what IO pattern(s)
are occurring on your storage…
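(A rough sketch of how to capture that, assuming the cache disk is /dev/sdb and the blktrace package is installed with debugfs mounted:)

  # record 30 seconds of block-layer IO events from the cache disk
  blktrace -d /dev/sdb -w 30 -o ssdtrace
  # summarise the capture: read/write mix, request sizes, seek behaviour
  blkparse -i ssdtrace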
Are you seeing high IO wait CPU use, or high IO wait times on IO?
Adrian
Dear Waitman,
Testing the SSD drive before installing it in the squid server showed a huge
performance advantage in IOPS and read/write throughput, so I thought it would
solve the problems I had with the HDD.
But it was not so; look at this output:
12:39:35 PM  CPU  %user  %nice  %sys  %iowait  %irq  %soft …
Dear Adrian,
Well, my conclusion that this is an IO problem came from the fact that I see
huge IO waits as the volume of traffic increases (with tools such as mpstat);
when using a ramdisk there is no such issue.
I have configured the SSD drive with ext2, no journal, noatime, and used the
“noop” I/O scheduler…
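(Roughly, with /dev/sdb and /cache as placeholder names:)

  # select the noop elevator for the SSD
  echo noop > /sys/block/sdb/queue/scheduler
  # mount the cache file system without access-time updates
  mount -o noatime /dev/sdb1 /cache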
Dear Adrian,
During the implementation we encountered issues with all kinds of variables,
such as the following (rough example commands below):
Limit of file descriptors (squid is now using 204800).
TCP port range was low (increased to 1024 65535).
TCP timers (changed them).
The ip_conntrack and hash size were low (now 524288 and 262144 respectively)…
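(The exact sysctl keys and module names depend on the kernel version, so treat this as a sketch rather than the exact commands we ran:)

  # raise the file descriptor limit in the shell that starts squid
  ulimit -n 204800
  # widen the local TCP port range
  sysctl -w net.ipv4.ip_local_port_range="1024 65535"
  # raise the connection tracking table size (net.netfilter.nf_conntrack_max on newer kernels)
  sysctl -w net.ipv4.netfilter.ip_conntrack_max=524288
  # raise the conntrack hash size (the module may be nf_conntrack instead of ip_conntrack)
  echo 262144 > /sys/module/ip_conntrack/parameters/hashsize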
Have you actually done any system profiling to get an understanding of
what wall(s) you're hitting?
For example, is there perhaps some strange issue you're hitting with
the disk controller?
I'm pushing around 100-150 Mbit/s at peak through one server right now,
but I've not yet finished deploying COSS…
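(Even something very basic, vmstat plus sar from the sysstat package, would already show whether the box is burning CPU, waiting on the disk, or both:)

  # run queue, memory and iowait overview, one-second samples
  vmstat 1 10
  # CPU utilisation breakdown including %iowait
  sar -u 1 10
  # per-device disk activity
  sar -d 1 10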
Dear ALL,
We have a squid server with a high volume of traffic, 200-300 MB.
The server is in transparent mode and uses an 18 GB ramdisk. With this
configuration performance is very good (after optimizing squid and the
Linux machine).
The problem is the small size of the cache directory.
Since IO is…
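(For context, the cache layout is along these lines; the path and size here are only illustrative, not the exact squid.conf we run:)

  # squid.conf: a single cache_dir on the ramdisk (format: type path size-in-MB L1 L2)
  cache_dir aufs /mnt/ramdisk 16384 16 256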