If the elevated iowait from iostat is on the host you might be able to
find whatever is hogging your I/O bandwidth with iotop.  Also look for
processes stuck in D state (uninterruptible sleep) with ps aux.  Are
you on a software RAID?
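Something along these lines should show it (flags and device names are
just examples, adjust to your setup):

  iostat -x 5                           # watch %iowait and per-device %util
  iotop -o                              # only list processes actually doing I/O
  ps -eo state,pid,cmd | awk '$1 ~ /D/' # procs in uninterruptible sleep
  cat /proc/mdstat                      # shows md arrays if you're on Linux soft RAID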

If you are on Linux software RAID you might check your disks for errors
with smartmontools.  Other than that the only thing I can think of is
something like a performance regression in the IDE/SCSI/SATA controller
driver (on the host or the virtual one) or in mdadm on the host.  If the
host system is bogged down before the VMware instances even start I
would suspect the host side (host controller or mdadm).
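For the SMART checks something like this should do (sda and md0 are
just placeholders for whatever devices/arrays you actually have):

  smartctl -H /dev/sda          # overall health self-assessment
  smartctl -a /dev/sda | egrep -i 'reallocated|pending|error'
  smartctl -t long /dev/sda     # start a long self-test, read results later with -l selftest
  mdadm --detail /dev/md0       # make sure the array is clean, not degraded or resyncing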

On 3/11/10, Stefan G. Weichinger <li...@xunil.at> wrote:
> On 11.03.2010 16:54, Kyle Bader wrote:
>> If you use the cfq scheduler (linux default) you might try turning off
>> low latency mode (introduced in 2.6.32):
>>
>> echo 0 > /sys/class/block/<device name>/queue/iosched/low_latency
>>
>> http://kernelnewbies.org/Linux_2_6_32
>
> That sounded good, but unfortunately it is not really doing the trick.
> The VM still takes minutes to boot ... and this after I copied it back
> to the RAID1 array, which should in theory be faster than the
> non-RAID partition it was on before.
>
> Thanks anyway, I will test that setting ...
>
> Stefan
>

-- 
Sent from my mobile device


Kyle
