Hello,

sorry for the HTML mail; it is caused by Outlook's automatic configuration.

We have tested adding

device_model_version="qemu-xen"

to the config and rebooting the VMs. With this configuration, there was no change
in memory usage.
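
For reference: "qemu-xen" selects the upstream QEMU, which is already the default in 4.8, so no change would be expected from that line alone. A minimal sketch of forcing the traditional device model instead, as Paul suggested (assuming an xl-style config like the one quoted below):

# Select the traditional device model ("trad"), the default up to Xen 4.4.
# "qemu-xen" would select upstream QEMU, which is already the default in 4.8.
device_model_version = "qemu-xen-traditional"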



Kind regards

Michael Schinzel
- Managing Director -


IP-Projects GmbH & Co. KG
Am Vogelherd 14
D - 97295 Waldbrunn
Phone: 09306 - 76499-0
Fax: 09306 - 76499-15
E-mail: i...@ip-projects.de
Managing Director: Michael Schinzel
Register court Würzburg: HRA 6798
General partner: IP-Projects Verwaltungs GmbH



-----Original Message-----
From: Paul Durrant [mailto:paul.durr...@citrix.com]
Sent: Thursday, 31 August 2017 10:44
To: Michael Schinzel <schin...@ip-projects.de>; xen-devel@lists.xen.org
Cc: Thomas Toka <t...@ip-projects.de>
Subject: RE: Memory Issue HVM guest after Upgrade from 4.4 to 4.8

De-htmling... My response indented...

From: Xen-devel [mailto:xen-devel-boun...@lists.xen.org] On Behalf Of Michael Schinzel
Sent: 31 August 2017 08:43
To: xen-devel@lists.xen.org
Cc: Thomas Toka <t...@ip-projects.de>
Subject: [Xen-devel] Memory Issue HVM guest after Upgrade from 4.4 to 4.8

Hello,

because Xen 4.4 is no longer supported, we are currently upgrading all of our
Xen hosts from 4.4 on Debian 8 to 4.8 on Debian 9.

Before this upgrade, we ran a host, for example, with 16 GB of memory for dom0
and about 93 VMs. Until the upgrade, memory usage was fine: the host used about
1.4 - 3 GB of the allocated 16 GB.

On each host we mix HVM and PV guests. After the upgrade, the HVM VMs
constantly use more and more memory, about 100 MB more every 2-3 minutes, until
the host starts swapping. The problem affects only HVM VMs; the PV guests are
all fine.
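
A minimal sketch for tracking that growth over time, assuming the QEMU processes are named qemu-system-i386 (the COMMAND column in the top output below truncates the name) and that standard procps tools are available:

# Append a timestamped RSS snapshot of all QEMU processes once a minute.
while sleep 60; do
    date '+%F %T'
    ps -C qemu-system-i386 -o pid=,rss=,comm=
done >> /var/log/qemu-rss.log

Comparing successive snapshots should show whether each guest's resident set really grows by ~100 MB every few minutes.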



top - 09:40:43 up 1 day,  2:28,  1 user,  load average: 0,94, 1,03, 1,27
Tasks: 1313 total,   5 running, 1308 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0,2 us,  0,8 sy,  0,0 ni, 98,3 id,  0,0 wa,  0,0 hi,  0,3 si,  0,5 st 
KiB Mem : 30315388 total, 10791316 free, 18483368 used,  1040704 buff/cache
KiB Swap: 15998972 total, 15919092 free,    79880 used. 11492116 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
17962 root      20   0 1202580 680040  17356 R   5,2  2,2   1:20.06 qemu-system-i38
21679 root      20   0 1454316 922580  17760 R   5,2  3,0   1:26.99 qemu-system-i38
27772 root      20   0 1635108 1,082g  17524 S   5,2  3,7   1:27.50 qemu-system-i38
29731 root      20   0 1374844 896052  17016 S   5,2  3,0   1:17.82 qemu-system-i38
14209 root      20   0 1130476 597120  17724 S   4,9  2,0   1:17.24 qemu-system-i38
19846 root      20   0 1417076 921928  16952 S   4,6  3,0   1:31.96 qemu-system-i38
 4830 root      20   0 2092624 1,496g  17640 S   3,9  5,2   1:53.88 qemu-system-i38
18897 root      20   0 2013120 1,353g  17932 S   3,9  4,7   1:30.63 qemu-system-i38
 7832 root      20   0   46296   5160   3176 R   2,3  0,0   0:00.38 top
31832 root      20   0 1373044 835140  17688 S   2,3  2,8   0:48.82 qemu-system-i38
28744 root      20   0 1053868 530680  17944 S   2,0  1,8   0:34.11 qemu-system-i38
13307 root      20   0  913684 424984  17168 S   1,6  1,4   0:27.57 qemu-system-i38
15248 root      20   0 1411316 887892  17608 S   1,6  2,9   0:43.97 qemu-system-i38
16135 root      20   0 1204240 640644  17776 S   1,6  2,1   0:37.28 qemu-system-i38
20763 root      20   0 1036288 513848  17484 S   1,6  1,7   0:35.76 qemu-system-i38
22663 root      20   0  851712 301236  17588 S   1,6  1,0   0:28.24 qemu-system-i38
24849 root      20   0 1164908 644736  17824 S   1,6  2,1   0:39.93 qemu-system-i38
25871 root      20   0 1113684 571548  17616 S   1,6  1,9   0:37.14 qemu-system-i38
26840 root      20   0 1045604 515216  17888 S   1,6  1,7   0:35.42 qemu-system-i38
30693 root      20   0 2329944 1,734g  17644 S   1,6  6,0   1:32.11 qemu-system-i38
23743 root      20   0 1470544 929708  17500 S   1,3  3,1   0:47.23 qemu-system-i38


The config file of one HVM guest:

#kernel = "hvmloader"
builder='hvm'
memory = 512
maxmem = 512
shadow_memory = 8
name = "vmanager1157"
vif = [ 'vifname=vmanager1157, rate=100Mb/s, bridge=xenbr0.240, mac=xxx, ip=xxx 2001:1608:10:3:0:0:c:1' ]
vif_other_config = [ 'xxx, 'tbf', 'rate=100Mb/s', 'bps_read=150Mb/s', 'bps_write=150Mb/s', 'iops_read=150000IOPS', 'iops_write=150000IOPS' ]
disk = [ 'phy:/dev/vm/vmanager1157-root,xvda,w', 'file:/root/vmanager/iso/CentOS-7.0-1406-x86_64-NetInstall.iso,xvdc:cdrom,r' ]
boot="cd"
vcpus = 1
sdl=0
vnc=1
vnclisten="0.0.0.0"
vncdisplay=69
vncpasswd='3s2Xwv65'
vncunused=0
stdvga=0
serial='pty'
usbdevice='tablet'
on_poweroff = 'destroy'
on_reboot   = 'restart'
on_crash    = 'destroy'


Is this normal with the new Xen hypervisor? We currently use kernel version
4.12.10, i.e. the newest kernel with Xen support.

> The default choice for QEMU changed between 4.4 and 4.8. In 4.4 it was trad 
> and in 4.8 it is upstream. If you force use of trad in your config, do you 
> still see the apparent leak?
>
>    Paul

Yours sincerely

Michael Schinzel

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
