On Mon, Jun 4, 2018, 11:20 PM Thomas Fecke <thomas.fe...@eset.de> wrote:

> Hey Juan,
>
>
>
> That would be perfect. I have been searching for weeks now and can't find the
> bottleneck.
>
>
>
> The storage is attached to a 10 Gig switch.
>
> Storage:
>
>
>
> 1U 19'' chassis with 4 hot-swap bays
>
> 600W Platinum PSU
>
> Intel Xeon E5-2620 v4 CPU
>
> 16GB DDR4-2400 regECC RAM (2x8)
>
> 4x 1TB SATA3 SSD (Samsung Pro)
>
> LSI 3108 RAID controller
>
> 2x Intel 10G-BaseT LAN
>
> 1x dedicated KVM port (IPMI 2.0)
>
>
>
>
>
> The storage is attached to a 10 Gig switch. Only our hypervisors are
> connected to that switch as well. I don't know the switch model; it's rented
> from our hosting provider (like the hardware).
>
>
>
> The Data Domain is shared via NFS.
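>
> For reference, a sketch of how to list the client-side NFS mount options on a
> hypervisor (assuming the standard Linux NFS tools are installed):
>
> nfsstat -m    # prints each NFS mount with its rsize/wsize, version, and transport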
>
>
>
> We are working a lot with templates, so a single template gets deployed like
> 20 times. I don't know if that's important. The guests run Win 10 and Win
> Server 2016; guest tools are installed.
>
>
>
> "iotop" shows about 100 MB/s
>
>
>
> "iostat" shows storage timeouts
>
>
>
> dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=sync
>

oflag=direct, bs=1M and count=1000 make a lot more sense.
Is /root/testfile on that storage? If it sits on the local disk, the test
never touches the network.
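
Something like this, run against the NFS mount so the network is actually
exercised (the mount root below is the oVirt default; the exact path is an
assumption, adjust it to your Data Domain):

dd if=/dev/zero of=/rhev/data-center/mnt/<server:_export>/testfile bs=1M count=1000 oflag=direct
# oflag=direct bypasses the page cache, so the number reflects storage and network, not RAM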

1+0 records in
>
> 1+0 records out
>
> 1073741824 bytes (1.1 GB) copied, 10.7527 s, 99.9 MB/s
>

Can you verify your 10G link did not auto-negotiate down to 1G by mistake?
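
For example (a sketch, assuming the storage-facing interface is eth0; adjust
the name to yours):

ethtool eth0 | grep -i speed    # a healthy 10G link reports Speed: 10000Mb/s
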
Y.


>
> dd is way too slow. When I reboot the server it is up to 800-900 MB/s. It
> drops slowly to under 100 in about 5 minutes, like a cache that is filling
> up.
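>
> A quick way to see whether it is the kernel page cache that fills up (a
> sketch; run it during a long write):
>
> watch "grep -E 'Dirty|Writeback' /proc/meminfo"
>
> If Dirty climbs and throughput then collapses, writeback to the array is the
> limit rather than the network.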
>
>
>
> RAM and CPU are fine (maximum 50% system load, average 30%).
>
>
>
> The file system is XFS; RAID 4 is used.
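>
> A sketch of how to verify that XFS is aligned with the array's stripe
> geometry (the mount point /export is an assumption):
>
> xfs_info /export    # sunit/swidth should match the RAID stripe unit and width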
>
>
>
>
>
>
>
> *From:* Juan Pablo <pablo.localh...@gmail.com>
> *Sent:* Monday, June 4, 2018 21:10
> *To:* Thomas Fecke <thomas.fe...@eset.de>
> *Cc:* users@ovirt.org
> *Subject:* Re: [ovirt-users] Storage IO
>
>
>
> Hi Thomas, so you are seeing high load on your storage and you are asking
> 'why'? With only the facts you have given, the answer would be: you are using
> your storage, so you have storage IO.
>
>
>
> so, if you want to dive deeper:
>
> - which storage are you using? Specs would be nice.
>
> - which host model are you using?
>
> - network specs? Card model, switch model, etc.
>
>
>
> How is your setup made? iSCSI? NFS? Gluster?
>
>
>
> Based on the above we might get a better idea, and then some tests could be
> run if needed to find out whether there is a bottleneck or the environment is
> working as expected.
>
>
>
> regards,
>
>
>
>
>
> 2018-06-04 14:29 GMT-03:00 Thomas Fecke <thomas.fe...@eset.de>:
>
> Hey Guys,
>
>
>
> Sorry, I need to ask again.
>
>
>
> We have 2 hypervisors with about 50 running VMs and a single storage box with
> a 10 Gig connection.
>
>
>
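> The extended device stats below come from the storage box; a sketch of how to
> reproduce them (the one-second interval is an assumption):
>
> iostat -x 1    # extended per-device statistics, one sample per second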
>
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               3,00   694,00 1627,00  947,00 103812,00 61208,00   128,22     6,78    2,63    2,13    3,49   0,39  99,70
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,70   31,37    0,00   64,93
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               1,00   805,00  836,00  997,00  43916,00 57900,00   111,09     6,00    3,27    1,87    4,44   0,54  99,30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,54   29,96    0,00   66,50
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               2,00   822,00 1160,00 1170,00  46700,00 52176,00    84,87     5,68    2,44    1,57    3,30   0,43  99,50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    5,05   31,46    0,00   63,50
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               3,00  1248,00 2337,00 1502,00 134932,00 48536,00    95,58     6,59    1,72    1,53    2,01   0,26  99,30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,95   31,79    0,00   64,26
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0,00   704,00  556,00 1292,00  19908,00 72600,00   100,12     5,50    2,99    1,83    3,48   0,54  99,50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,03   28,90    0,00   68,07
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0,00   544,00  278,00 1095,00   7848,00 66124,00   107,75     5,31    3,87    1,49    4,47   0,72  99,10
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,03   29,32    0,00   67,65
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0,00   464,00  229,00 1172,00   6588,00 72384,00   112,74     5,44    3,88    1,67    4,31   0,71  99,50
>
>
>
>
>
>
>
>
>
> And this is our problem. Does anyone know why our storage receives that many
> requests?
>
>
>
> Thanks in advance
>
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/4UX6ZMCYUDZW2UBKKT67M6CFIBPC7HS6/
