On 01/25/2014 01:31 AM, Steve Dainard wrote:
Not sure what a good method to bench this would be, but:

An NFS mount point on virt host:
[root@ovirt001 iso-store]# dd if=/dev/zero of=test1 bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.95399 s, 104 MB/s

Raw brick performance on gluster server (yes, I know I shouldn't write
directly to the brick):
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 3.06743 s, 134 MB/s

Gluster mount point on gluster server:
[root@gluster1 iso-store]# dd if=/dev/zero of=test bs=4k count=100000
100000+0 records in
100000+0 records out
409600000 bytes (410 MB) copied, 19.5766 s, 20.9 MB/s

The storage servers are a bit older, but both are dual-socket quad-core
Opterons with 4x 7200 RPM drives.

A block size of 4k is quite small, so the context-switch overhead involved
with FUSE becomes more noticeable.

Would it be possible to increase the block size for dd and test again?
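For example, something like this (run from the same mount points as above;
conv=fsync just makes dd flush at the end so the page cache doesn't inflate
the number):

# dd if=/dev/zero of=test-large bs=1M count=400 conv=fsync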


I'm in the process of setting up a share from my desktop and I'll see if
I can bench between the two systems. Not sure if my SSD will impact the
tests; I've heard there isn't an advantage to using SSD storage for GlusterFS.

Do you have any pointers to that source of information? Typically, GlusterFS
performance for virtualization workloads is bound by the slowest element in
the entire stack. Usually the storage/disks are the bottleneck, and SSD
storage does benefit GlusterFS.
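One way to see which part of the stack is the bottleneck is gluster's
built-in profiling; a sketch, assuming the volume is named iso-store:

# gluster volume profile iso-store start
# gluster volume profile iso-store info
# gluster volume profile iso-store stop

profile info reports per-brick latency and FOP statistics while a test
workload is running.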

-Vijay



Does anyone have a hardware reference design for GlusterFS as a backend
for virt? Or is there a benchmark utility?
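For something closer to a VM-style workload than sequential dd, a generic fio
run against the gluster mount could serve as a rough benchmark (the path,
size and job count below are only placeholders):

# fio --name=vmsim --directory=/mnt/iso-store --rw=randwrite --bs=4k --size=1G \
      --numjobs=4 --ioengine=psync --end_fsync=1 --group_reporting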

Steve Dainard
IT Infrastructure Manager
Miovision <http://miovision.com/> | Rethink Traffic
519-513-2407 ex.250
877-646-8476 (toll-free)

Blog <http://miovision.com/blog> | LinkedIn <https://www.linkedin.com/company/miovision-technologies> | Twitter <https://twitter.com/miovision> | Facebook <https://www.facebook.com/miovision>
------------------------------------------------------------------------
Miovision Technologies Inc. | 148 Manitou Drive, Suite 101, Kitchener,
ON, Canada | N2C 1L3
This e-mail may contain information that is privileged or confidential.
If you are not the intended recipient, please delete the e-mail and any
attachments and notify us immediately.


On Thu, Jan 23, 2014 at 7:18 PM, Andrew Cathrow <acath...@redhat.com> wrote:

    Are we sure that the issue is guest I/O? What's the raw performance on
    the host accessing the Gluster storage?
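    For example (the exact storage-domain mount path under
    /rhev/data-center/mnt/ on the host will differ, so the path below is
    only a placeholder):

    # dd if=/dev/zero of=/rhev/data-center/mnt/<storage-path>/test bs=1M count=400 conv=fsync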

    ------------------------------------------------------------------------

        *From: *"Steve Dainard" <sdain...@miovision.com
        <mailto:sdain...@miovision.com>>
        *To: *"Itamar Heim" <ih...@redhat.com <mailto:ih...@redhat.com>>
        *Cc: *"Ronen Hod" <r...@redhat.com <mailto:r...@redhat.com>>,
        "users" <users@ovirt.org <mailto:users@ovirt.org>>, "Sanjay Rao"
        <s...@redhat.com <mailto:s...@redhat.com>>
        *Sent: *Thursday, January 23, 2014 4:56:58 PM
        *Subject: *Re: [Users] Extremely poor disk access speeds in
        Windows guest


        I have two options, virtio and virtio-scsi.

        I was using virtio, and have also attempted virtio-scsi on
        another Windows guest with the same results.

        Using the newest drivers, virtio-win-0.1-74.iso.
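        (One way to double-check which bus the disk is actually presented on
        is to dump the domain XML on the host; the domain name below is only
        an example:)

        # virsh dumpxml Win2008R2 | grep -A4 '<disk'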



        On Thu, Jan 23, 2014 at 4:24 PM, Itamar Heim <ih...@redhat.com> wrote:

            On 01/23/2014 07:46 PM, Steve Dainard wrote:

                Backing Storage: Gluster Replica
                Storage Domain: NFS
                oVirt Hosts: CentOS 6.5
                oVirt version: 3.3.2
                Network: GigE
                # of VMs: 3 - two Linux guests are idle, one Windows guest is
                installing updates.

                I've installed a Windows 2008 R2 guest with a virtio disk
                and all the drivers from the latest virtio ISO. I've also
                installed the spice agent drivers.

                Guest disk access is horribly slow. Resource Monitor during
                Windows updates shows disk throughput peaking at 1 MB/sec
                (the scale never increases) and disk queue length peaking
                at 5, where it appears to sit 99% of the time. A batch of
                113 Windows updates has been running solidly for about 2.5
                hours and is at 89/113 complete.


            virtio-block or virtio-scsi?
            Which Windows guest driver version for that?


                I can't say my Linux guests are blisteringly fast, but
                updating a guest from a fresh RHEL 6.3 install to 6.5 took
                about 25 minutes.

                If anyone has any ideas, please let me know. I haven't
                found any tuning docs for Windows guests that could explain
                this issue.

                Thanks,


                Steve Dainard










_______________________________________________
Users mailing list
Users@ovirt.org
http://lists.ovirt.org/mailman/listinfo/users
