Comment #14 on issue 695 by [email protected]: support kvm run-time memory changes (ballooning)
https://code.google.com/p/ganeti/issues/detail?id=695

But what if I want an instance to have, for example, 2G by default (on startup), but the ability to increase it to a max of 10G on demand?

I wanted that too, but it turns out that for kvm at least, this is very difficult to do.

Short version: you can create a guest with an initial amount of memory, and whilst you can balloon *down* from that value (and back up again), you cannot balloon above it without hot-plugging virtual DIMMs. Even then, the change doesn't always take immediate effect in the guest - you may need to log in to the guest to bring the new DIMM online.

So if you want to run a VM with 4GB and have the ability to seamlessly increase to 8GB, by far the easiest way is to boot it with 8GB, wait until the OS is running, and then balloon it down to 4GB.

This can be demonstrated without ganeti, just running kvm at the command line, with a suitable Linux live CD ISO:

qemu-system-x86_64 -enable-kvm -m size=4096M,slots=16,maxmem=8192M -balloon virtio -cdrom /data/iso/CentOS-6.5-x86_64-LiveCD.iso -vnc 0.0.0.0:10 -monitor stdio
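To spell out what those -m sub-options mean: "size" is the initial RAM, "maxmem" is the ceiling the guest can ever reach via hot-plug, and "slots" is how many DIMMs can be plugged to bridge the gap. A quick sketch of the arithmetic (values taken from the command line above):

```shell
# Headroom available for hot-plugged DIMMs with
# -m size=4096M,slots=16,maxmem=8192M
size_mib=4096
maxmem_mib=8192
slots=16
echo "hotplug headroom: $(( maxmem_mib - size_mib )) MiB across $slots slots"
# prints: hotplug headroom: 4096 MiB across 16 slots
```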

The kvm process starts. Log in at the console (vncviewer x.x.x.x:10), and in a terminal type "free": you'll see you have 4GB.

Using the qemu monitor, you can balloon down and back up, but not above the initial 4GB.

(qemu) info balloon
balloon: actual=4096
(qemu) balloon 2048
(qemu) info balloon
balloon: actual=2048
(qemu) balloon 6144
(qemu) info balloon
balloon: actual=4096
(qemu)

And "free" inside the guest doesn't go above 4G.
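Any tooling that drove the monitor this way (as ganeti would have to) would need to parse the "info balloon" output to learn the current size. A trivial sketch of that, with the sample monitor line hard-coded:

```shell
# Hypothetical parsing of an "info balloon" line; the line itself is
# hard-coded here rather than read from a live monitor socket.
line='balloon: actual=2048'
actual_mib=${line#*actual=}   # strip everything up to "actual="
echo "$actual_mib"
# prints: 2048
```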

But notice I started qemu with 16 memory slots, and since the initial 4GB does not occupy a hot-plug slot, all 16 of them are still free. Thus I can plug in some more memory:

# See http://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/memory-hotplug.txt;hb=HEAD

(qemu) object_add memory-backend-ram,id=mem1,size=2G
(qemu) device_add pc-dimm,id=dimm1,memdev=mem1
(qemu) balloon 6144
(qemu) info balloon
balloon: actual=6144

... and now the guest has 6GB, confirmed by "free" in the guest.

Not all guests bring the new memory online immediately. With an Ubuntu 16.04 guest, I found I had to log in to the guest and do:

grep -l offline /sys/devices/system/memory/memory*/state | while read -r i; do echo online > "$i"; done

(although that test was done using libvirt which handles things slightly differently, requiring you to configure NUMA memory before you can do hot-plugging)
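You can exercise that online loop without a real guest by pointing it at a mock directory tree standing in for /sys/devices/system/memory (the paths here are made up for the demonstration):

```shell
# Mock sysfs tree: one memory block offline, one already online.
SYSFS=$(mktemp -d)
mkdir -p "$SYSFS/memory32" "$SYSFS/memory33"
echo offline > "$SYSFS/memory32/state"
echo online  > "$SYSFS/memory33/state"

# The same loop as above, aimed at the mock tree.
grep -l offline "$SYSFS"/memory*/state | while read -r i; do
  echo online > "$i"
done

cat "$SYSFS"/memory*/state   # both blocks now report "online"
```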

What this means is: if we wanted ganeti to be able to grow a kvm guest above its initial memory allocation, it would have to hot-plug a DIMM. If you kept increasing the memory in small increments, each increase would consume a DIMM slot. (The smallest increment supported by kvm is 128MiB.) And ganeti would probably have to keep track of all this, to decide whether any slots are spare and whether a particular request requires inserting a new DIMM. It's all pretty horrendous.
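The bookkeeping implied above can be sketched in a few lines: round a requested increase up to the 128MiB DIMM granularity and count the slots it would consume (the variable names and the 300MiB request are purely illustrative):

```shell
request_mib=300   # hypothetical requested memory increase
dimm_mib=128      # smallest DIMM increment kvm supports
# Round up to whole DIMMs; each one burns a slot.
dimms=$(( (request_mib + dimm_mib - 1) / dimm_mib ))
echo "need $dimms DIMM slot(s), $(( dimms * dimm_mib )) MiB plugged"
# prints: need 3 DIMM slot(s), 384 MiB plugged
```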

Whether this is easier with Xen, I don't know.

Aside: the gory details for doing this with libvirt - which after all is just another front-end to kvm flags - are at https://medium.com/@juergen_thomann/memory-hotplug-with-qemu-kvm-and-libvirt-558f1c635972

libvirt also insists that you configure NUMA if you configure maxMemory > memory, but running kvm at the command line, I didn't need to.
