[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-10-01 Thread Amit Bawer
On Tue, Oct 1, 2019 at 12:49 PM Vrgotic, Marko 
wrote:

> Thank you very much Amit,
>
>
>
> I hope the result of the suggested tests allows us to improve the speed for
> the specific IO test case as well.
>
>
>
> Apologies for not being clearer, but I was referring to changing the mount
> options for the storage where the SHE also runs. It cannot be put into
> Maintenance mode since the engine is running on it.
> What to do in this case? It's clear that I need to power it down, but where
> can I then change the settings?
>

You can see a similar question about changing the mnt_options of the hosted
engine storage, and its answer, here [1].
[1] https://lists.ovirt.org/pipermail/users/2018-January/086265.html
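
In rough outline, the procedure discussed in [1] looks like the sketch below.
This is only a sketch: the exact key name and --type value are assumptions and
should be verified with hosted-engine --get-shared-config on your version
before changing anything.

# put the cluster into global maintenance so the HA agents leave the engine alone
hosted-engine --set-maintenance --mode=global

# inspect the current value of the mount options key (key/type names may differ)
hosted-engine --get-shared-config mnt_options --type=he_shared

# set the new mount options in the shared configuration
hosted-engine --set-shared-config mnt_options "rsize=65536,wsize=65536" --type=he_shared

# on each HA host, also check /etc/ovirt-hosted-engine/hosted-engine.conf and
# restart the HA services so the new options are picked up on the next mount
systemctl restart ovirt-ha-broker ovirt-ha-agent

# finally, leave global maintenance
hosted-engine --set-maintenance --mode=none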

>
>
> Kindly awaiting your reply.
>
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> *Marko Vrgotic*
>
>
>
>
>
>
>
> *From: *Amit Bawer 
> *Date: *Saturday, 28 September 2019 at 20:25
> *To: *"Vrgotic, Marko" 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
>
>
>
>
> On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko 
> wrote:
>
> Hi oVirt gurus,
>
>
>
> Thanks to Tony, who pointed me toward the discovery process: the IO
> performance seems to depend greatly on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
> *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> The dsync flag tells dd to bypass all buffers and caches (except certain
> kernel buffers) and to write the data physically to the disk before writing
> further. According to a number of sites I looked at, this is the way to test
> server latency with regard to IO operations. The difference in performance is
> huge, as you can see (below I have added results from tests with 4k and 8k
> blocks).
>
>
>
> Still, a certain software component we run tests with writes data in this or
> a similar way, which is why I got this complaint in the first place.
>
>
>
> Here are my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
> *Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>
>
>
> Taking into account your network's configured MTU [1] and Linux version [2],
> you can tune the wsize and rsize mount options.
>
> Editing mount options can be done from the Storage->Domains->Manage Domain
> menu.
>
>
>
> [1]  https://access.redhat.com/solutions/2440411
>
> [2] https://access.redhat.com/solutions/753853
>
>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised with results from Tony

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-28 Thread Amit Bawer
On Fri, Sep 27, 2019 at 4:02 PM Vrgotic, Marko 
wrote:

> Hi oVirt gurus,
>
>
>
> Thanks to Tony, who pointed me toward the discovery process: the IO
> performance seems to depend greatly on the flags.
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 0.108962 s, *470 MB/s*
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
> *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 322.314 s, *159 kB/s*
>
>
>
> The dsync flag tells dd to bypass all buffers and caches (except certain
> kernel buffers) and to write the data physically to the disk before writing
> further. According to a number of sites I looked at, this is the way to test
> server latency with regard to IO operations. The difference in performance is
> huge, as you can see (below I have added results from tests with 4k and 8k
> blocks).
>
>
>
> Still, a certain software component we run tests with writes data in this or
> a similar way, which is why I got this complaint in the first place.
>
>
>
> Here are my current NFS mount settings:
>
>
> rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5
>
>
>
> *If you have any suggestions on possible NFS tuning options, to try to
> increase performance, I would highly appreciate it.*
>
> *Can someone tell me how to change NFS mount options in oVirt for already
> existing/used storage?*
>

Taking into account your network's configured MTU [1] and Linux version [2],
you can tune the wsize and rsize mount options.
Editing mount options can be done from the Storage->Domains->Manage Domain menu.

[1]  https://access.redhat.com/solutions/2440411
[2] https://access.redhat.com/solutions/753853
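
For illustration only, and assuming the NetApp export and the jumbo-frame path
can handle 1 MB transfers (values to validate for your setup, not a
recommendation), the additional mount options entered in that dialog could look
like:

rsize=1048576,wsize=1048576

After reactivating the domain, the options actually in effect on a host can be
checked with:

nfsstat -m
# or
mount | grep rhev/data-center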

>
>
>
>
> Test results with 4096 and 8192 byte size.
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 1.49831 s, *273 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 40960 bytes (410 MB) copied, 349.041 s, *1.2 MB/s*
>
>
>
> [root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=10
>
> 10+0 records in
>
> 10+0 records out
>
> 81920 bytes (819 MB) copied, 11.6553 s, *70.3 MB/s*
>
> [root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192
> count=10 *oflag=dsync*
>
> 10+0 records in
>
> 10+0 records out
>
> 819200000 bytes (819 MB) copied, 393.035 s, *2.1 MB/s*
>
>
>
>
>
> *From: *"Vrgotic, Marko" 
> *Date: *Thursday, 26 September 2019 at 09:51
> *To: *Amit Bawer 
> *Cc: *Tony Brian Albers , "hunter86...@yahoo.com" <
> hunter86...@yahoo.com>, "users@ovirt.org" 
> *Subject: *Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage
>
>
>
> Dear all,
>
>
>
> I very much appreciate all help and suggestions so far.
>
>
>
> Today I will send the test results and current mount settings for NFS4.
> Our production setup is using Netapp based NFS server.
>
>
>
> I am surprised by the results from Tony’s test.
>
> We also have one setup with Gluster based NFS, and I will run tests on
> those as well.
>
> Sent from my iPhone
>
>
>
> On 25 Sep 2019, at 14:18, Amit Bawer  wrote:
>
>
>
>
>
> On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers  wrote:
>
> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --snip--
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=100
> 100+0 records in
> 100+0 records out
> 409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real0m18.171s
> user0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --snip--
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
> 001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
> .41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-27 Thread Vrgotic, Marko
Hi oVirt gurus,

Thanks to Tony, who pointed me toward the discovery process: the IO performance 
seems to depend greatly on the flags.

[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10
10+0 records in
10+0 records out
5120 bytes (51 MB) copied, 0.108962 s, 470 MB/s
[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=512 count=10 
oflag=dsync
10+0 records in
10+0 records out
5120 bytes (51 MB) copied, 322.314 s, 159 kB/s

The dsync flag tells dd to bypass all buffers and caches (except certain kernel 
buffers) and to write the data physically to the disk before writing further. 
According to a number of sites I looked at, this is the way to test server 
latency with regard to IO operations. The difference in performance is huge, as 
you can see (below I have added results from tests with 4k and 8k blocks).
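
As a cross-check of the dd figures, the same kind of synchronous-write latency
can also be measured with fio, which reports per-write fdatasync latency
percentiles. A minimal sketch, assuming fio is installed and using /tmp only as
an example target directory:

fio --name=sync-write-latency --directory=/tmp --rw=write --bs=4k \
    --size=100m --ioengine=psync --fdatasync=1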

Still, a certain software component we run tests with writes data in this or a 
similar way, which is why I got this complaint in the first place.

Here are my current NFS mount settings:
rw,relatime,vers=4.0,rsize=65536,wsize=65536,namlen=255,soft,nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,clientaddr=172.17.28.11,local_lock=none,addr=172.17.28.5

If you have any suggestions on possible NFS tuning options, to try to increase 
performance, I would highly appreciate it.
Can someone tell me how to change NFS mount options in oVirt for already 
existing/used storage?


Test results with 4096 and 8192 byte size.
[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=10
10+0 records in
10+0 records out
40960 bytes (410 MB) copied, 1.49831 s, 273 MB/s
[root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=4096 count=10 
oflag=dsync
10+0 records in
10+0 records out
40960 bytes (410 MB) copied, 349.041 s, 1.2 MB/s

[root@lgu215 ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=10
10+0 records in
10+0 records out
81920 bytes (819 MB) copied, 11.6553 s, 70.3 MB/s
[root@lgu215-admin ~]# dd if=/dev/zero of=/tmp/test1.img bs=8192 count=10 
oflag=dsync
10+0 records in
10+0 records out
81920 bytes (819 MB) copied, 393.035 s, 2.1 MB/s


From: "Vrgotic, Marko" 
Date: Thursday, 26 September 2019 at 09:51
To: Amit Bawer 
Cc: Tony Brian Albers , "hunter86...@yahoo.com" 
, "users@ovirt.org" 
Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage

Dear all,

I very much appreciate all help and suggestions so far.

Today I will send the test results and current mount settings for NFS4. Our 
production setup is using Netapp based NFS server.

I am surprised by the results from Tony’s test.
We also have one setup with Gluster based NFS, and I will run tests on those as 
well.
Sent from my iPhone


On 25 Sep 2019, at 14:18, Amit Bawer  wrote:


On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers 
mailto:t...@kb.dk>> wrote:
Guys,

Just for info, this is what I'm getting on a VM that is on shared
storage via NFSv3:

--snip--
[root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
count=100
100+0 records in
100+0 records out
409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s

real0m18.171s
user0m1.077s
sys 0m4.303s
[root@proj-000 ~]#
--snip--

my /etc/exports:
/data/ovirt
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

and output from 'mount' on one of the hosts:

sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
001.kac.lokalnet:_data_ovirt type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
.41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
16.216.41)

It is worth comparing these mount options with those of the slow shared NFSv4 mount.

Window size tuning can be found at the bottom of [1]; although it relates to 
NFSv3, it could be relevant to v4 as well.
[1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html


connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
SATA disks in RAID10. NFS server is running CentOS 7.6.

Maybe you can get some inspiration from this.

/tony



On Wed, 2019-09-25 at 09:59 +, Vrgotic, Marko wrote:
> Dear Strahil, Amit,
>
> Thank you for the suggestion.
> Test result with block size 4096:
> Network storage:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
>
> Local storage:
>
> avlocal2:
> [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> avlocal3:
> [root@mpollocalchec

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-26 Thread Vrgotic, Marko
Dear all,

I very much appreciate all help and suggestions so far.

Today I will send the test results and current mount settings for NFS4. Our 
production setup is using Netapp based NFS server.

I am surprised by the results from Tony’s test.
We also have one setup with Gluster based NFS, and I will run tests on those as 
well.

Sent from my iPhone

On 25 Sep 2019, at 14:18, Amit Bawer  wrote:




On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers 
mailto:t...@kb.dk>> wrote:
Guys,

Just for info, this is what I'm getting on a VM that is on shared
storage via NFSv3:

--snip--
[root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
count=100
100+0 records in
100+0 records out
409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s

real0m18.171s
user0m1.077s
sys 0m4.303s
[root@proj-000 ~]#
--snip--

my /etc/exports:
/data/ovirt
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

and output from 'mount' on one of the hosts:

sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
001.kac.lokalnet:_data_ovirt type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
.41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
16.216.41)

It is worth comparing these mount options with those of the slow shared NFSv4 mount.

Window size tuning can be found at the bottom of [1]; although it relates to 
NFSv3, it could be relevant to v4 as well.
[1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html


connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
SATA disks in RAID10. NFS server is running CentOS 7.6.

Maybe you can get some inspiration from this.

/tony



On Wed, 2019-09-25 at 09:59 +, Vrgotic, Marko wrote:
> Dear Strahil, Amit,
>
> Thank you for the suggestion.
> Test result with block size 4096:
> Network storage:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
>
> Local storage:
>
> avlocal2:
> [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> avlocal3:
> [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
>
> As Amit suggested, I am also going to execute same tests on the
> BareMetals and between BareMetal and NFS to compare results.
>
>
> — — —
> Met vriendelijke groet / Kind regards,
>
> Marko Vrgotic
>
>
>
>
> From: Strahil mailto:hunter86...@yahoo.com>>
> Date: Tuesday, 24 September 2019 at 19:10
> To: "Vrgotic, Marko" 
> mailto:m.vrgo...@activevideo.com>>, Amit 
>  .com>
> Cc: users mailto:users@ovirt.org>>
> Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> Storage
>
> Why don't you try with 4096?
> Most block devices have a block size of 4096 and anything below that is
> slowing them down.
> Best Regards,
> Strahil Nikolov
> On Sep 24, 2019 17:40, Amit Bawer 
> mailto:aba...@redhat.com>> wrote:
> Have you reproduced the performance issue when checking this directly
> with the shared storage mount, outside the VMs?
>
> On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko  .com> wrote:
> Dear oVirt,
>
> I have executed some tests regarding IO disk speed on the VMs,
> running on shared storage and local storage in oVirt.
>
> Results of the tests on local storage domains:
> avlocal2:
> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>
> avlocal3:
> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>
> Results of the test on shared storage domain:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 5120 bytes (51 MB) copied, 283.499 s, 181 kB/s
>
> Why is it so low? Is there anything I can do to tune, configure VDSM
> or other service to speed this up?
> Any advice is appreciated.
>
> Shared storage is based on Netapp with 20Gbps

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-25 Thread Amit Bawer
On Wed, Sep 25, 2019 at 2:44 PM Tony Brian Albers  wrote:

> Guys,
>
> Just for info, this is what I'm getting on a VM that is on shared
> storage via NFSv3:
>
> --snip--
> [root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
> count=100
> 100+0 records in
> 100+0 records out
> 409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s
>
> real0m18.171s
> user0m1.077s
> sys 0m4.303s
> [root@proj-000 ~]#
> --snip--
>
> my /etc/exports:
> /data/ovirt
> *(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)
>
> and output from 'mount' on one of the hosts:
>
> sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
> 001.kac.lokalnet:_data_ovirt type nfs
> (rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
> nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
> .41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
> 16.216.41)
>

It is worth comparing these mount options with those of the slow shared NFSv4 mount.

Window size tuning can be found at the bottom of [1]; although it relates to
NFSv3, it could be relevant to v4 as well.
[1] https://www.ovirt.org/develop/troubleshooting-nfs-storage-issues.html
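
For that comparison, a quick way to dump the effective client-side options of
every NFS mount on a host is the sketch below (both commands are standard on
EL7 hosts):

nfsstat -m
# or
grep ' nfs' /proc/mounts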


> connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
> SATA disks in RAID10. NFS server is running CentOS 7.6.
>
> Maybe you can get some inspiration from this.
>
> /tony
>
>
>
> On Wed, 2019-09-25 at 09:59 +, Vrgotic, Marko wrote:
> > Dear Strahil, Amit,
> >
> > Thank you for the suggestion.
> > Test result with block size 4096:
> > Network storage:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 40960 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
> >
> > Local storage:
> >
> > avlocal2:
> > [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 40960 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> > avlocal3:
> > [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 40960 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
> >
> > As Amit suggested, I am also going to execute same tests on the
> > BareMetals and between BareMetal and NFS to compare results.
> >
> >
> > — — —
> > Met vriendelijke groet / Kind regards,
> >
> > Marko Vrgotic
> >
> >
> >
> >
> > From: Strahil 
> > Date: Tuesday, 24 September 2019 at 19:10
> > To: "Vrgotic, Marko" , Amit  > .com>
> > Cc: users 
> > Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> > Storage
> >
> > Why don't you try with 4096?
> > Most block devices have a block size of 4096 and anything below that is
> > slowing them down.
> > Best Regards,
> > Strahil Nikolov
> > On Sep 24, 2019 17:40, Amit Bawer  wrote:
> > Have you reproduced the performance issue when checking this directly
> > with the shared storage mount, outside the VMs?
> >
> > On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko  > .com> wrote:
> > Dear oVirt,
> >
> > I have executed some tests regarding IO disk speed on the VMs,
> > running on shared storage and local storage in oVirt.
> >
> > Results of the tests on local storage domains:
> > avlocal2:
> > [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
> >
> > avlocal3:
> > [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
> >
> > Results of the test on shared storage domain:
> > avshared:
> > [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> > count=10 oflag=dsync
> > 10+0 records in
> > 10+0 records out
> > 5120 bytes (51 MB) copied, 283.499 s, 181 kB/s
> >
> > Why is it so low? Is there anything I can do to tune, configure VDSM
> > or other service to speed this up?
> > Any advice is appreciated.
> >
> > Shared stor

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-25 Thread Tony Brian Albers
Guys,

Just for info, this is what I'm getting on a VM that is on shared
storage via NFSv3:

--snip--
[root@proj-000 ~]# time dd if=/dev/zero of=testfile bs=4096
count=100
100+0 records in
100+0 records out
409600 bytes (4.1 GB) copied, 18.0984 s, 226 MB/s

real0m18.171s
user0m1.077s
sys 0m4.303s
[root@proj-000 ~]#
--snip--

my /etc/exports:
/data/ovirt
*(rw,sync,no_subtree_check,all_squash,anonuid=36,anongid=36)

and output from 'mount' on one of the hosts:

sto-001.kac.lokalnet:/data/ovirt on /rhev/data-center/mnt/sto-
001.kac.lokalnet:_data_ovirt type nfs
(rw,relatime,vers=3,rsize=1048576,wsize=1048576,namlen=255,soft,nolock,
nosharecache,proto=tcp,timeo=600,retrans=6,sec=sys,mountaddr=172.16.216
.41,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=172.
16.216.41)

connected via single 10gbit ethernet. Storage on NFS server is 8 x 4TB
SATA disks in RAID10. NFS server is running CentOS 7.6.

Maybe you can get some inspiration from this.

/tony



On Wed, 2019-09-25 at 09:59 +, Vrgotic, Marko wrote:
> Dear Strahil, Amit,
>  
> Thank you for the suggestion.
> Test result with block size 4096:
> Network storage:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 275.522 s, 1.5 MB/s
>  
> Local storage:
>  
> avlocal2:
> [root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
> avlocal3:
> [root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 40960 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s
>  
> As Amit suggested, I am also going to execute same tests on the
> BareMetals and between BareMetal and NFS to compare results.
>  
>  
> — — —
> Met vriendelijke groet / Kind regards,
> 
> Marko Vrgotic
>  
>  
>  
>  
> From: Strahil 
> Date: Tuesday, 24 September 2019 at 19:10
> To: "Vrgotic, Marko" , Amit  .com>
> Cc: users 
> Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared
> Storage
>  
> Why don't you try with 4096?
> Most block devices have a block size of 4096 and anything below that is
> slowing them down.
> Best Regards,
> Strahil Nikolov
> On Sep 24, 2019 17:40, Amit Bawer  wrote:
> Have you reproduced the performance issue when checking this directly
> with the shared storage mount, outside the VMs?
>  
> On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko  .com> wrote:
> Dear oVirt,
>  
> I have executed some tests regarding IO disk speed on the VMs,
> running on shared storage and local storage in oVirt.
>  
> Results of the tests on local storage domains:
> avlocal2:
> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>  
> avlocal3:
> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>  
> Results of the test on shared storage domain:
> avshared:
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
> 10+0 records in
> 10+0 records out
> 5120 bytes (51 MB) copied, 283.499 s, 181 kB/s
>  
> Why is it so low? Is there anything I can do to tune, configure VDSM
> or other service to speed this up?
> Any advice is appreciated.
>  
> Shared storage is based on Netapp with 20Gbps LACP path from
> Hypervisor to Netapp volume, and set to MTU 9000. Used protocol is
> NFS4.0.
> oVirt is 4.3.4.3 SHE.
>  
>  
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct: https://www.ovirt.org/community/about/communit
> y-guidelines/
> List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/
> message/7XYSFEGAHCWXIY2JILDE24EVAC5ZVKWU/
-- 
Tony Albers - Systems Architect - IT Development
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark
Tel: +4

[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-25 Thread Vrgotic, Marko
Dear Strahil, Amit,

Thank you for the suggestion.
Test result with block size 4096:
Network storage:
avshared:
[root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096 
count=10 oflag=dsync
10+0 records in
10+0 records out
40960 bytes (410 MB) copied, 275.522 s, 1.5 MB/s

Local storage:

avlocal2:
[root@mpollocalcheck22 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096 
count=10 oflag=dsync
10+0 records in
10+0 records out
40960 bytes (410 MB) copied, 53.093 s, 7.7 MB/s
avlocal3:
[root@mpollocalcheck3 ~]# dd if=/dev/zero of=/tmp/test2.img bs=4096 
count=10 oflag=dsync
10+0 records in
10+0 records out
40960 bytes (410 MB) copied, 46.0392 s, 8.9 MB/s

As Amit suggested, I am also going to execute same tests on the BareMetals and 
between BareMetal and NFS to compare results.


— — —
Met vriendelijke groet / Kind regards,

Marko Vrgotic




From: Strahil 
Date: Tuesday, 24 September 2019 at 19:10
To: "Vrgotic, Marko" , Amit 
Cc: users 
Subject: Re: [ovirt-users] Re: Super Low VM disk IO via Shared Storage


Why don't you try with 4096?
Most block devices have a block size of 4096 and anything below that is slowing 
them down.

Best Regards,
Strahil Nikolov
On Sep 24, 2019 17:40, Amit Bawer  wrote:
Have you reproduced the performance issue when checking this directly with the 
shared storage mount, outside the VMs?

On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko 
mailto:m.vrgo...@activevideo.com>> wrote:

Dear oVirt,



I have executed some tests regarding IO disk speed on the VMs, running on 
shared storage and local storage in oVirt.



Results of the tests on local storage domains:

avlocal2:

[root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512 
count=10 oflag=dsync

10+0 records in

10+0 records out

5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s



avlocal3:

[root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512 
count=10 oflag=dsync

10+0 records in

10+0 records out

5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s



Results of the test on shared storage domain:

avshared:

[root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 
count=10 oflag=dsync

10+0 records in

10+0 records out

5120 bytes (51 MB) copied, 283.499 s, 181 kB/s



Why is it so low? Is there anything I can do to tune, configure VDSM or other 
service to speed this up?

Any advice is appreciated.



Shared storage is based on Netapp with 20Gbps LACP path from Hypervisor to 
Netapp volume, and set to MTU 9000. Used protocol is NFS4.0.
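
Since the path is set to MTU 9000, it may be worth confirming that jumbo frames
actually pass end to end from the hypervisor to the NFS server. A sketch (the
address is a placeholder; 8972 = 9000 minus 28 bytes of IP/ICMP headers):

ping -M do -s 8972 -c 3 <netapp-nfs-ip>
ip link show | grep -i mtu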

oVirt is 4.3.4.3 SHE.




___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/7XYSFEGAHCWXIY2JILDE24EVAC5ZVKWU/


[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-24 Thread Strahil
Why don't you try with 4096?
Most block devices have a block size of 4096 and anything below that is slowing 
them down.
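
To confirm what the underlying device actually reports, the logical and
physical block sizes can be read on the host. A sketch (/dev/sda is just an
example device name):

blockdev --getss --getpbsz /dev/sda
cat /sys/block/sda/queue/logical_block_size /sys/block/sda/queue/physical_block_size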

Best Regards,
Strahil Nikolov
On Sep 24, 2019 17:40, Amit Bawer  wrote:
>
> Have you reproduced the performance issue when checking this directly with the 
> shared storage mount, outside the VMs?
>
> On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko  
> wrote:
>>
>> Dear oVirt,
>>
>>  
>>
>> I have executed some tests regarding IO disk speed on the VMs, running on 
>> shared storage and local storage in oVirt.
>>
>>  
>>
>> Results of the tests on local storage domains:
>>
>> avlocal2:
>>
>> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512 
>> count=10 oflag=dsync
>>
>> 10+0 records in
>>
>> 10+0 records out
>>
>> 5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>>
>>  
>>
>> avlocal3:
>>
>> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512 
>> count=10 oflag=dsync
>>
>> 10+0 records in
>>
>> 10+0 records out
>>
>> 5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>>
>>  
>>
>> Results of the test on shared storage domain:
>>
>> avshared:
>>
>> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512 
>> count=10 oflag=dsync
>>
>> 10+0 records in
>>
>> 10+0 records out
>>
>> 5120 bytes (51 MB) copied, 283.499 s, 181 kB/s
>>
>>  
>>
>> Why is it so low? Is there anything I can do to tune, configure VDSM or 
>> other service to speed this up?
>>
>> Any advice is appreciated.
>>
>>  
>>
>> Shared storage is based on Netapp with 20Gbps LACP path from Hypervisor to 
>> Netapp volume, and set to MTU 9000. Used protocol is NFS4.0.
>>
>> oVirt is 4.3.4.3 SHE.
>>
>>  
>>
>>  
>>
>> ___
>> Users mailing list -- users@ovirt.org
>> To unsubscribe send an email to users-le...@ovirt.org
>> Privacy Statement: <
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/MA2JZQCT5QGLGT5TCGDSAC3AKAJJCSNB/


[ovirt-users] Re: Super Low VM disk IO via Shared Storage

2019-09-24 Thread Amit Bawer
Have you reproduced the performance issue when checking this directly with the
shared storage mount, outside the VMs?
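
One way to run that check is to repeat the same dd command from a hypervisor
directly against the NFS storage domain mount under /rhev/data-center/mnt/. A
sketch (the mount point is a placeholder; list the real one with: mount | grep
rhev):

dd if=/dev/zero of=/rhev/data-center/mnt/<server>:<export>/dsync-test.img \
   bs=4096 count=1000 oflag=dsync
rm -f /rhev/data-center/mnt/<server>:<export>/dsync-test.img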

On Tue, Sep 24, 2019 at 4:53 PM Vrgotic, Marko 
wrote:

> Dear oVirt,
>
>
>
> I have executed some tests regarding IO disk speed on the VMs, running on
> shared storage and local storage in oVirt.
>
>
>
> Results of the tests on local storage domains:
>
> avlocal2:
>
> [root@mpollocalcheck22 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 45.9756 s, 1.1 MB/s
>
>
>
> avlocal3:
>
> [root@mpollocalcheck3 ~]#  dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 43.6179 s, 1.2 MB/s
>
>
>
> Results of the test on shared storage domain:
>
> *avshared*:
>
> [root@mpoludctest4udc-1 ~]# dd if=/dev/zero of=/tmp/test2.img bs=512
> count=10 oflag=dsync
>
> 10+0 records in
>
> 10+0 records out
>
> 5120 bytes (51 MB) copied, 283.499 s, 181 kB/s
>
>
>
> Why is it so low? Is there anything I can do to tune, configure VDSM or
> other service to speed this up?
>
> Any advice is appreciated.
>
>
>
> Shared storage is based on Netapp with 20Gbps LACP path from Hypervisor to
> Netapp volume, and set to MTU 9000. Used protocol is NFS4.0.
>
> oVirt is 4.3.4.3 SHE.
>
>
>
>
> ___
> Users mailing list -- users@ovirt.org
> To unsubscribe send an email to users-le...@ovirt.org
> Privacy Statement: https://www.ovirt.org/site/privacy-policy/
> oVirt Code of Conduct:
> https://www.ovirt.org/community/about/community-guidelines/
> List Archives:
> https://lists.ovirt.org/archives/list/users@ovirt.org/message/NZYTH3XEM3NWZJWAXFUDC3J6N4L6WOUI/
>
___
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/X2IUTRS62EF6MB4BBBHN2LTVY2MT5APG/