[libvirt] incorrect memory size inside vm

Hi. I have an issue with incorrect memory size inside a VM. I'm trying to use the memory balloon (not memory hotplug, because my guest probably has no memory hotplug support). When the domain is started with static memory everything works fine, but when I specify in libvirt memory = 16384, maxMemory = 16384 and currentMemory = 1024, /proc/meminfo in the guest says it has only 603608 kB of memory. When I then set the memory via virsh setmem to 2 GB, the guest sees only 1652184 kB.

software versions: libvirt 1.2.10, qemu 2.3.0, guest OS: CentOS 6.

qemu.log:
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/bin/kvm -name 26543 -S -machine pc-i440fx-1.7,accel=kvm,usb=off -m 1024 -realtime mlock=off -smp 1,maxcpus=4,sockets=4,cores=1,threads=1 -uuid 4521fb01-c2ca-4269-d2d6-0000035fd910 -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/26543.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device virtio-scsi-pci,id=scsi0,num_queues=1,bus=pci.0,addr=0x4 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x6 -drive file=/dev/vg4/26543,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none,discard=unmap,aio=native,iops=5000 -device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 -drive if=none,id=drive-scsi0-0-1-0,readonly=on,format=raw -device scsi-cd,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 -netdev tap,fd=30,id=hostnet0,vhost=on,vhostfd=52 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:00:34:f7,bus=pci.0,addr=0x3,rombar=0 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev socket,id=charchannel0,path=/var/lib/libvirt/qemu/26543.agent,server,nowait -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-mouse,id=input0 -device usb-kbd,id=input1 -vnc [::]:8,password -device VGA,id=video0,bus=pci.0,addr=0x2 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -object rng-random,id=rng0,filename=/dev/random -device virtio-rng-pci,rng=rng0,max-bytes=1024,period=2000,bus=pci.0,addr=0x7 -msg timestamp=on

-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru
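In libvirt domain XML terms, the configuration described above reads roughly like this (a sketch of my reading of the report; in libvirt's schema, <memory> is the boot-time maximum and <currentMemory> is the ballooned target):

```xml
<!-- hypothetical fragment matching "memory = 16384, currentMemory = 1024" -->
<memory unit='MiB'>16384</memory>
<currentMemory unit='MiB'>1024</currentMemory>
```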

On Wed, Jun 17, 2015 at 4:35 PM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:
Hi. I have an issue with incorrect memory size inside a VM. I'm trying to use the memory balloon (not memory hotplug, because my guest probably has no memory hotplug support).
When the domain is started with static memory everything works fine, but when I specify in libvirt memory = 16384, maxMemory = 16384 and currentMemory = 1024, /proc/meminfo in the guest says it has only 603608 kB of memory. When I then set the memory via virsh setmem to 2 GB, the guest sees only 1652184 kB.
software versions: libvirt 1.2.10, qemu 2.3.0, guest OS: CentOS 6.
-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru
The rest of the visible memory is eaten by reserved kernel areas; for us this was the main reason to switch to hotplug a couple of years ago. You would not be able to scale a VM by an order of magnitude with the regular balloon mechanism without the mentioned impact, unfortunately. Igor Mammedov posted hotplug-related patches for 2.6.32 a while ago, though RHEL6 never adopted them for some reason.
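The scaling of that reserved area can be sketched with back-of-the-envelope arithmetic: the guest kernel allocates per-page metadata (struct page, roughly 64 bytes per 4 KiB page on x86_64 — my assumption for the estimate) for the whole maxMemory range, regardless of how far the balloon is inflated:

```shell
# Rough sketch of the fixed overhead implied by a large maxMemory.
# Assumption: ~64 bytes of struct page metadata per 4 KiB page.
max_kib=16777216                           # maxMemory = 16 GiB, as in the report
pages=$((max_kib * 1024 / 4096))           # number of 4 KiB page frames
overhead_mib=$((pages * 64 / 1024 / 1024)) # metadata for all of them
echo "estimated overhead for ${max_kib} KiB maxMemory: ${overhead_mib} MiB"
# prints: estimated overhead for 16777216 KiB maxMemory: 256 MiB
```

That 256 MiB estimate is in the same ballpark as the ~288 MB gap measured later in this thread; the remainder goes to other boot-time reservations.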

2015-06-17 17:09 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
The rest of the visible memory is eaten by reserved kernel areas; for us this was the main reason to switch to hotplug a couple of years ago. You would not be able to scale a VM by an order of magnitude with the regular balloon mechanism without the mentioned impact, unfortunately. Igor Mammedov posted hotplug-related patches for 2.6.32 a while ago, though RHEL6 never adopted them for some reason.
Hmm.. Thanks for the info. From what kernel version does memory hotplug work? -- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru

On Wed, Jun 17, 2015 at 6:33 PM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:
2015-06-17 17:09 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
The rest of the visible memory is eaten by reserved kernel areas; for us this was the main reason to switch to hotplug a couple of years ago. You would not be able to scale a VM by an order of magnitude with the regular balloon mechanism without the mentioned impact, unfortunately. Igor Mammedov posted hotplug-related patches for 2.6.32 a while ago, though RHEL6 never adopted them for some reason.
Hmm.. Thanks for the info. From what kernel version does memory hotplug work?
-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru
Currently QEMU memory hotplug should work with kernel 3.8 and onwards. The mentioned patches are an adaptation of the 3.8 functionality for an older frankenkernel.
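For checking whether a given guest qualifies, a quick comparison against that 3.8 cutoff can be sketched like this (the threshold is taken from the statement above; actual support also depends on CONFIG_MEMORY_HOTPLUG and ACPI support in the guest kernel, so this is only a first filter):

```shell
# Sketch: does the running guest kernel meet the >= 3.8 cutoff for
# QEMU ACPI memory hotplug mentioned in this thread?
kver_ok() {
  # $1 is a version string like "3.18.14-gentoo"; returns 0 if >= 3.8
  major=${1%%.*}
  rest=${1#*.}
  minor=${rest%%.*}
  [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 8 ]; }
}

if kver_ok "$(uname -r)"; then
  echo "memory hotplug: kernel version OK"
else
  echo "memory hotplug: kernel too old, balloon only"
fi
```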

2015-06-17 18:38 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Currently QEMU memory hotplug should work with kernel 3.8 and onwards. The mentioned patches are an adaptation of the 3.8 functionality for an older frankenkernel.
This is bad news =( I have Debian Wheezy, which has an old kernel... -- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru

2015-06-17 19:26 GMT+03:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
This is bad news =( I have Debian Wheezy, which has an old kernel...
Is it possible to get proper results with the balloon? For example by patching qemu or something like that? -- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru

On Thu, Jun 18, 2015 at 12:21 AM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:
2015-06-17 19:26 GMT+03:00 Vasiliy Tolstov <v.tolstov@selfip.ru>:
This is bad news =( I have Debian Wheezy, which has an old kernel...
Is it possible to get proper results with the balloon? For example by patching qemu or something like that?
Yes, but I'm afraid I don't fully understand why you need this when the pure hotplug mechanism is available, aside maybe from the nice memory stats and easy-to-use deflation the balloon gives you. Just populate a couple of static DIMMs with a small enough 'base' e820 memory and use the balloon on that setup; you'll get a reserved-memory footprint as small as it would be in a setup with the same overall amount of memory populated via the BIOS. For example, you may use a -m 128 ... {handful of memory placed in memory slots} setup to achieve what you want.
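The suggested layout can be sketched as a libvirt domain XML fragment. This is a minimal, hypothetical sketch of my own (the thread only gives the qemu-level idea): it assumes libvirt >= 1.2.14, where <maxMemory> and <memory model='dimm'> are available, and a guest NUMA topology, which libvirt requires for DIMM hotplug. Sizes are illustrative: a 128 MiB base plus two 1 GiB DIMMs.

```xml
<!-- hypothetical fragment: small e820 base + hot-pluggable DIMMs -->
<maxMemory slots='16' unit='KiB'>16777216</maxMemory>
<memory unit='KiB'>2228224</memory>            <!-- base + populated DIMMs -->
<currentMemory unit='KiB'>1048576</currentMemory>
<cpu>
  <numa>
    <cell id='0' cpus='0' memory='131072' unit='KiB'/> <!-- the small -m 128 base -->
  </numa>
</cpu>
<devices>
  <memory model='dimm'>
    <target><size unit='KiB'>1048576</size><node>0</node></target>
  </memory>
  <memory model='dimm'>
    <target><size unit='KiB'>1048576</size><node>0</node></target>
  </memory>
</devices>
```

The balloon then only ever moves within the populated amount, so the guest's reserved-metadata footprint follows what is actually plugged, not a large static maximum.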

2015-06-18 1:40 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Yes, but I'm afraid I don't fully understand why you need this when the pure hotplug mechanism is available, aside maybe from the nice memory stats and easy-to-use deflation the balloon gives you. Just populate a couple of static DIMMs with a small enough 'base' e820 memory and use the balloon on that setup; you'll get a reserved-memory footprint as small as it would be in a setup with the same overall amount of memory populated via the BIOS. For example, you may use a -m 128 ... {handful of memory placed in memory slots} setup to achieve what you want.
I have Debian Wheezy guests with 3.4 (or 3.2...) kernels and many others like 32-bit CentOS 6, openSUSE, Ubuntu, etc. Does memory hotplug work with these distros (kernels)? -- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru

On Thu, Jun 18, 2015 at 1:44 AM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:
2015-06-18 1:40 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Yes, but I'm afraid I don't fully understand why you need this when the pure hotplug mechanism is available, aside maybe from the nice memory stats and easy-to-use deflation the balloon gives you. Just populate a couple of static DIMMs with a small enough 'base' e820 memory and use the balloon on that setup; you'll get a reserved-memory footprint as small as it would be in a setup with the same overall amount of memory populated via the BIOS. For example, you may use a -m 128 ... {handful of memory placed in memory slots} setup to achieve what you want.
I have Debian Wheezy guests with 3.4 (or 3.2...) kernels and many others like 32-bit CentOS 6, openSUSE, Ubuntu, etc. Does memory hotplug work with these distros (kernels)?
Whoosh... technically it is possible, but it would be an incompatible fork of both the SeaBIOS and QEMU upstreams, because the generic way of plugging DIMMs in is available down to at least generic 2.6.32. Except maybe for CentOS, where the broken kABI would have serious consequences, it may be better to just provide a backport repository with newer kernels, but that doesn't sound very optimistic. For the historical record, the initial hotplug support proposal by Vasilis Liaskovitis a couple of years ago worked in exactly the way you are suggesting, but resurrecting it would mean altering the emulator and ROM code, as I said above.

2015-06-18 1:52 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Whoosh... technically it is possible, but it would be an incompatible fork of both the SeaBIOS and QEMU upstreams, because the generic way of plugging DIMMs in is available down to at least generic 2.6.32. Except maybe for CentOS, where the broken kABI would have serious consequences, it may be better to just provide a backport repository with newer kernels, but that doesn't sound very optimistic. For the historical record, the initial hotplug support proposal by Vasilis Liaskovitis a couple of years ago worked in exactly the way you are suggesting, but resurrecting it would mean altering the emulator and ROM code, as I said above.
Ok, I'll try to build the latest libvirt and check all OSes for memory hotplug support =). -- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru

On 2015-06-18 at 00:57, Vasiliy Tolstov wrote:
2015-06-18 1:52 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Whoosh... technically it is possible, but it would be an incompatible fork of both the SeaBIOS and QEMU upstreams, because the generic way of plugging DIMMs in is available down to at least generic 2.6.32. Except maybe for CentOS, where the broken kABI would have serious consequences, it may be better to just provide a backport repository with newer kernels, but that doesn't sound very optimistic. For the historical record, the initial hotplug support proposal by Vasilis Liaskovitis a couple of years ago worked in exactly the way you are suggesting, but resurrecting it would mean altering the emulator and ROM code, as I said above.
Ok, I'll try to build the latest libvirt and check all OSes for memory hotplug support =).
Hi guys. I'm actually investigating the memory waste issue in my lab. I'm using libvirt + qemu on Gentoo.

# libvirtd -v
2015-06-18 14:50:56.619+0000: 11720: info : libvirt version: 1.2.16
# qemu-x86_64 -version
qemu-x86_64 version 2.3.0, Copyright (c) 2003-2008 Fabrice Bellard
# uname -a
Linux vms06 3.18.14-gentoo #1 SMP Wed Jun 17 14:55:27 CEST 2015 x86_64 AMD Opteron(tm) Processor 6380 AuthenticAMD GNU/Linux

When not using DIMMs (static or hotplugged), only 'main' memory, the waste is huge, especially if one defines a big limit. My test with only 'plain' memory: define domain.xml with different memory settings and check the memory sizes (virsh domstats DOMAIN --balloon from the host and 'free' from the guest):

max:  2GB, curr:  2GB -> guest total:  2001 MB (balloon.current=2097152  balloon.maximum=2097152)
max:  4GB, curr:  4GB -> guest total:  3953 MB (balloon.current=4194304  balloon.maximum=4194304)
max:  4GB, curr:  2GB -> guest total:  1905 MB (balloon.current=2097152  balloon.maximum=4194304)
max:  8GB, curr:  8GB -> guest total:  7985 MB (balloon.current=8388608  balloon.maximum=8388608)
max:  8GB, curr:  4GB -> guest total:  3889 MB (balloon.current=4194304  balloon.maximum=8388608)
max:  8GB, curr:  2GB -> guest total:  1841 MB (balloon.current=2097152  balloon.maximum=8388608)
max: 16GB, curr: 16GB -> guest total: 16049 MB (balloon.current=16777216 balloon.maximum=16777216)
max: 16GB, curr:  8GB -> guest total:  7857 MB (balloon.current=8388608  balloon.maximum=16777216)
max: 16GB, curr:  4GB -> guest total:  3761 MB (balloon.current=4194304  balloon.maximum=16777216)
max: 16GB, curr:  2GB -> guest total:  1713 MB (balloon.current=2097152  balloon.maximum=16777216)

As you can see, when one sets 16GB max mem and defines current mem as 2GB, the guest only sees 1713 MB. When you set 2GB max mem and leave 2GB current mem, the guest sees 2001 MB. So 288 MB are wasted. Later I tried defining 15 static 1GB DIMMs plus 1GB of 'main' memory, then decreased the guest memory via the balloon to 1GB, and there was no waste in the guest.

But when I checked the RES size of the qemu process on the host, it was about 5.5GB! In contrast, when using only 'main' memory, the results were as expected (RES for qemu was the guest RAM plus qemu itself). Do you see similar results on your side? Best regards Piotr Rybicki
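Reading the fixed-2GB-current rows above, the gap grows roughly linearly with maxMemory; a quick sketch using Piotr's own figures:

```shell
# Sketch: per-GiB waste implied by the measurements above, comparing the
# 2GB/2GB case (2001 MB seen) against the 16GB-max/2GB case (1713 MB seen).
waste_mb=$((2001 - 1713))   # extra memory lost when max grows from 2 to 16 GiB
per_gib=$((waste_mb / 14))  # spread over the 14 additional GiB of maxMemory
echo "extra waste: ${waste_mb} MB, about ${per_gib} MB per GiB of maxMemory"
# prints: extra waste: 288 MB, about 20 MB per GiB of maxMemory
```

~20 MB per GiB of maxMemory is in the same ballpark as the ~16 MB/GiB that struct page metadata alone would cost, plus other per-page bookkeeping, which fits the "reserved kernel areas" explanation given earlier in the thread.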

Do You see similar results at Your side?
Best regards
Would you mind sharing your argument set to the emulator? As far as I understood, you are using plain ballooning for most of the results above, for which those numbers are expected. The case with 5+ GB memory consumption for a deflated 1 GB guest looks like a bug with a mixed dimm/balloon configuration if you tried it against the latest qemu, so please describe the setup a bit more verbosely too.

Hello. Actually it was my mistake. After using memory in the guest for a while (find /, cp bigfile, etc.), the RES size of the qemu process shrinks to the expected value. Sorry for the noise. Now I don't see any memory waste in the guest or on the host when using 'base' memory + 'dimm' memory. I do have one issue, though. When I start qemu via libvirt with 16GB max mem and 1GB current mem:

(...)
<memory unit='KiB'>16777216</memory>
<currentMemory unit='KiB'>1048576</currentMemory>
(...)

qemu starts, but the balloon can't free memory, so the guest doesn't boot (it hangs or loops on 'virtio_balloon virtio2: Out of puff! Can't get 1 pages'). I think this is because the dimm memory is not yet onlined while the balloon is already trying to shrink the memory in the guest. Best regards Piotr Rybicki

On 2015-06-18 at 23:23, Andrey Korolyov wrote:
Do You see similar results at Your side?
Best regards
Would you mind sharing your argument set to the emulator? As far as I understood, you are using plain ballooning for most of the results above, for which those numbers are expected. The case with 5+ GB memory consumption for a deflated 1 GB guest looks like a bug with a mixed dimm/balloon configuration if you tried it against the latest qemu, so please describe the setup a bit more verbosely too.
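A common guest-side workaround for the onlining race described in this exchange (my suggestion, not something proposed in the thread) is to auto-online hot-added memory blocks via udev, so the balloon always has onlined memory to work against; the file path and name are hypothetical:

```
# /etc/udev/rules.d/80-hotplug-memory.rules  (hypothetical path)
# Online every memory block as soon as it is hot-added.
SUBSYSTEM=="memory", ACTION=="add", ATTR{state}=="offline", ATTR{state}="online"
```

Newer kernels can achieve the same effect with CONFIG_MEMORY_HOTPLUG_DEFAULT_ONLINE or the memhp_default_state boot parameter, but a udev rule works on older guests too.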
participants (3)
- Andrey Korolyov
- Piotr Rybicki
- Vasiliy Tolstov