Looks ok to me:

top - 08:38:07 up 103 days, 22:05,  1 user,  load average: 0.68, 0.62, 0.57
Tasks: 565 total,   1 running, 564 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.0 us,  0.5 sy,  0.0 ni, 98.5 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem : 52807689+total, 22355988 free, 10132873+used, 40439219+buff/cache
KiB Swap:  4194300 total,  4193780 free,      520 used. 42492028+avail Mem

   PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 14187 qemu      20   0 9144668   8.1g  14072 S  12.6  1.6   6506:46 qemu-kvm
 11153 qemu      20   0 9244680   8.1g  13700 S   4.3  1.6  18881:11 qemu-kvm
 12436 qemu      20   0 9292936   8.1g  13712 S   3.3  1.6  21071:56 qemu-kvm
  5517 qemu      20   0 9128268   8.1g  14084 S   3.0  1.6   5801:03 qemu-kvm
 11764 qemu      20   0 9185364   8.1g  13720 S   3.0  1.6  10585:14 qemu-kvm
  7938 qemu      20   0 9252876   8.1g  13744 S   2.6  1.6  21912:46 qemu-kvm
 12791 qemu      20   0 9182292   8.1g  14140 S   2.6  1.6  17299:36 qemu-kvm
  4602 vdsm       0 -20 4803160 114132  13860 S   2.3  0.0   2123:45 vdsmd
  7621 qemu      20   0 9187424   4.8g  14264 S   2.3  1.0   3114:25 qemu-kvm
 12066 qemu      20   0 9188436   8.1g  13708 S   2.3  1.6  10629:53 qemu-kvm
135526 qemu      20   0 9298060   8.1g  13664 S   2.0  1.6   5792:05 qemu-kvm
  6587 qemu      20   0 4883036   4.1g  13744 S   1.3  0.8   2334:54 qemu-kvm
  3814 root      20   0 1450200  25096  14208 S   1.0  0.0 368:03.80 libvirtd
  6902 qemu      20   0 9110480   8.0g  13580 S   1.0  1.6   1787:57 qemu-kvm
  7249 qemu      20   0 4913084   1.6g  13712 S   0.7  0.3   1367:32 qemu-kvm
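Side note: the trailing "+" on the KiB Mem figures just means top has
truncated the values to fit the column width (here they've each lost a
digit). If you want the full numbers, free prints them untruncated:

  $ free -h        # human-readable totals
  $ free -k        # raw KiB, same units as top's summary fields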


So it looks like the issue is only in oVirt engine's reporting; the host
itself seems happy enough.
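If anyone wants to cross-check the dashboard figure against a host,
summing the guests' resident set sizes is a rough sanity check (a quick
one-liner, assuming all guest processes are named qemu-kvm as above; ps
reports RSS in KiB):

  $ ps -C qemu-kvm -o rss= | awk '{s+=$1} END {printf "%.1f GiB\n", s/1048576}'

That should land near the sum of the RES column in the top output.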

/tony



On Mon, 2018-12-10 at 20:14 -0600, Darrell Budic wrote:
> Grab a shell on your hosts and check top memory use quick. Could be
> VDSMD, in which case restarting the process will give you a temp fix.
> If you’re running hyperconverged, check your gluster version, there
> was a leak in versions 3.12.7 - 3.1.12 or so, updating ovirt/gluster
> is the best fix for that.
> 
> > On Dec 10, 2018, at 7:36 AM, Tony Brian Albers <t...@kb.dk> wrote:
> > 
> > Hi guys,
> > 
> > We have a small test installation here running around 30 vms on 2
> > hosts.
> > 
> > oVirt 4.2.5.3
> > 
> > The hosts each have 512 GB memory, and the vms are sized with 4-8 GB
> > each.
> > 
> > I have noticed that over the last months, the memory usage in the
> > dashboard has been increasing and is now showing 946.8 GB used of
> > 1007.2 GB.
> > 
> > What can be causing this?
> > 
> > TIA,
> > 
> > -- 
> > Tony Albers
> > Systems Architect
> > Systems Director, National Cultural Heritage Cluster
> > Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
> > Tel: +45 2566 2383 / +45 8946 2316
> 
> 
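For reference, the temporary fix Darrell mentions is restarting the VDSM
service on each affected host, and gluster --version shows the installed
Gluster release (the restart assumes a systemd-based host, which oVirt
4.2 nodes are):

  # systemctl restart vdsmd
  # gluster --version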
-- 
Tony Albers
Systems Architect
Systems Director, National Cultural Heritage Cluster
Royal Danish Library, Victor Albecks Vej 1, 8000 Aarhus C, Denmark.
Tel: +45 2566 2383 / +45 8946 2316
_______________________________________________
Users mailing list -- users@ovirt.org
To unsubscribe send an email to users-le...@ovirt.org
Privacy Statement: https://www.ovirt.org/site/privacy-policy/
oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
List Archives: https://lists.ovirt.org/archives/list/users@ovirt.org/message/NDL5QF4N332PPYCAORROVDOVRKWLLOHF/
