Hi Thomas, so you are seeing high load on your storage and you are asking
"why"? With only the facts you give, an answer would be: you are using your
storage, so you have storage I/O.

So, if you want to dive deeper:
- Which storage are you using? Specs would be nice.
- Which host model are you using?
- Network specs? NIC model, switch model, etc.

How is your setup made? iSCSI? NFS? Gluster?

Based on the above we might get a better idea, and after that some tests
could be run, if needed, to find out whether there is a bottleneck or the
environment is working as expected.
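By the way, one rough check you can already do with the figures in your mail: compare the combined sda throughput of your busiest sample against the 10 Gbit link. A minimal sketch (the ~1250 MB/s ceiling is a back-of-the-envelope figure for 10 Gbit/s, ignoring protocol overhead):

```python
# Figures copied from the first iostat sample in the original mail
# (comma decimal separators converted to dots).
rkb_s = 103812.0   # rkB/s on sda
wkb_s = 61208.0    # wkB/s on sda

total_mb_s = (rkb_s + wkb_s) / 1024   # combined throughput in MB/s
link_mb_s = 10_000 / 8                # 10 Gbit/s ~= 1250 MB/s raw ceiling

print(f"throughput: {total_mb_s:.0f} MB/s")          # ~161 MB/s
print(f"link usage: {total_mb_s / link_mb_s:.0%}")   # ~13%
```

So the 10 Gig link is nowhere near saturated while %util on sda sits at ~100%, which hints the disks themselves, not the network, are the limit — but the hardware details above would confirm that.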

regards,


2018-06-04 14:29 GMT-03:00 Thomas Fecke <thomas.fe...@eset.de>:

> Hey Guys,
>
>
>
> Sorry, I need to ask again.
>
>
>
> We have 2 hypervisors with about 50 running VMs and a single storage with
> a 10 Gig connection.
>
>
>
>
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               3,00   694,00 1627,00  947,00 103812,00 61208,00   128,22     6,78    2,63    2,13    3,49   0,39  99,70
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,70   31,37    0,00   64,93
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               1,00   805,00  836,00  997,00  43916,00 57900,00   111,09     6,00    3,27    1,87    4,44   0,54  99,30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,54   29,96    0,00   66,50
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               2,00   822,00 1160,00 1170,00  46700,00 52176,00    84,87     5,68    2,44    1,57    3,30   0,43  99,50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    5,05   31,46    0,00   63,50
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               3,00  1248,00 2337,00 1502,00 134932,00 48536,00    95,58     6,59    1,72    1,53    2,01   0,26  99,30
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,95   31,79    0,00   64,26
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0,00   704,00  556,00 1292,00  19908,00 72600,00   100,12     5,50    2,99    1,83    3,48   0,54  99,50
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,03   28,90    0,00   68,07
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0,00   544,00  278,00 1095,00   7848,00 66124,00   107,75     5,31    3,87    1,49    4,47   0,72  99,10
>
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            0,00    0,00    3,03   29,32    0,00   67,65
>
> Device:         rrqm/s   wrqm/s     r/s     w/s     rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
> sda               0,00   464,00  229,00 1172,00   6588,00 72384,00   112,74     5,44    3,88    1,67    4,31   0,71  99,50
>
> And this is our problem. Does anyone know why our storage receives that
> many processes?
>
>
>
> Thanks in advance
>
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/P2YH5SLSFSHA6BNZHSIIJUUTZLUOOMGK/
