Hi,

Like Greg and Ian said, this slowdown is due to the lack of a log device.

There are 2 kinds of writes:

-async (asynchronous) write calls get buffered, and the call returns without 
waiting for the actual write to disk to complete. This improves performance 
for less important data (e.g. log files) that you can afford to lose on a 
panic or power loss.

-sync (synchronous) write calls wait for confirmation that the data is on 
permanent storage before returning. This is used especially by databases and 
filesystem internals to ensure data consistency.
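The difference is easy to see with dd (a rough sketch; oflag=sync is a GNU dd 
flag, so on illumos/SmartOS you may need a different tool):

```shell
# Buffered (async) writes: dd returns as soon as data is in the page cache.
dd if=/dev/zero of=/tmp/async.dat bs=8k count=1000 2>/dev/null

# Synchronous writes: each block must reach stable storage before dd
# continues (GNU dd's oflag=sync; illumos dd lacks this flag).
dd if=/dev/zero of=/tmp/sync.dat bs=8k count=1000 oflag=sync 2>/dev/null
```

On a pool without a log device the second command will be far slower.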


You could disable the sync writes with:
 zfs set sync=disabled zones/<uuid>-disk0

This will speed up your sync writes considerably. The downside is that you 
could end up with corrupted data or a corrupted filesystem if the host system 
goes down unexpectedly. You can limit the damage by snapshotting regularly: 
snapshots are consistent, so if you roll back to the latest snapshot you're 
safe.
I've done just this with some Windows 7 VMs holding non-critical data. The 
speedup is quite dramatic for IO-bound workloads.
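A minimal sketch of that routine (the dataset and snapshot names are 
hypothetical; use the same dataset as in the zfs set command above):

```shell
# take a periodic snapshot of the VM's disk dataset
zfs snapshot zones/<uuid>-disk0@daily-2015-04-27

# after an unclean host shutdown, roll back to the last consistent state
zfs rollback zones/<uuid>-disk0@daily-2015-04-27

# revert to the default sync behaviour if you change your mind
zfs set sync=standard zones/<uuid>-disk0
```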

Another way around this, I think, is the recently released LX brand zones, 
which run Linux natively on SmartOS (with translated system calls). From the 
release notes, I believe they support async writes.

That being said, if you use a database with sync writes whose data matters to 
you, you'd better invest in a pair of small, fast SSDs as log devices to 
speed up the sync writes.
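Adding a mirrored log device is a one-liner (the pool and device names here 
are made up; check yours with zpool status):

```shell
# attach two SSDs as a mirrored separate intent log to the 'zones' pool
zpool add zones log mirror c1t4d0 c1t5d0
```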

—

Michel


Note that KVM zones only do sync writes.

> On 27 Apr 2015, at 09:31, Gavin Ang <[email protected]> wrote:
> 
> Hi,
>    I'm trying to optimize the disk I/O performance for KVM guests running on 
> my SmartOS host (RAID 10 with 4 disks, 8 physical cores, 100+ GB RAM).
>  
> Running this command: time sh -c "dd if=/dev/zero of=ddfile bs=8k 
> count=250000 && sync"; rm ddfile
>  
> On the Host gives:
> real    0m16.051s
> user    0m0.186s
> sys     0m3.181s
>  
> On a Centos guest gives:
> real    0m43.182s
> user    0m0.032s
> sys     0m1.505s
>  
> -          More than 50% degradation in write throughput. I've been browsing 
> many sites, and some suggest using io=native and cache=none as qemu options 
> to eliminate double caching. However, I can't seem to find a way to do this. 
> The same is true of any other image I run: guest I/O performance is a far 
> cry from the host's.
>  
> I tried editing the startvm file at /zones/{$UUID}/root, adding 
> io=native,cache=none to this option:
>  
> "-drive" "file=/dev/zvol/rdsk
> /zones/d0a2fc0f-e623-4423-bcba-b73a56d73c18-disk0,if=virtio,index=0,media=disk,boot=on"
>  
> but the file gets overwritten every time the VM is restarted. I also tried 
> updating the VM with these parameters for the disk, but it would not accept 
> them. I know there is an option called qemu_extra_opts, but I don't think my 
> syntax was right. BTW, the Centos image above was downloaded from the Joyent 
> public images.
>  
> The setting "echo zfs_vdev_max_pending/W0t35 | mdb -kw" seemed to give some 
> improvement on one server, but not on another host.
>  
> Any help on this would be much appreciated.
>  
> Thanks,
> Gavin



-------------------------------------------
smartos-discuss
Archives: https://www.listbox.com/member/archive/184463/=now
RSS Feed: https://www.listbox.com/member/archive/rss/184463/25769125-55cfbc00
Modify Your Subscription: 
https://www.listbox.com/member/?member_id=25769125&id_secret=25769125-7688e9fb
Powered by Listbox: http://www.listbox.com
