[libvirt] [RFC] support memory reserved feature and optimize mlock guest memory proposal

Hi all,

Currently, we use cgroup (memory) to support memory QoS on the KVM platform, and use "mlock" in qemu to support "memory reserved". The "mlock" approach seems inappropriate. qemu currently mlocks memory in the main thread while holding the iothread lock (qemu_mutex_lock_iothread); if the memory size is large, that consumes a lot of time. It means that whenever we want to set a new mlock, the VM is blocked for a while.

Here is my optimization:
1. Add a global variable (lock_ram_size) to hold the "memory reserved" value;
2. Add a QMP command "set_ram_minguarantee" to change lock_ram_size;
3. Create a new thread that calls mlock(lock_ram_size) and is woken up by the "set_ram_minguarantee" QMP command.

Flow chart:

    main function                       qmp command "set_ram_minguarantee"
          |                                           |
    create "mlock" thread               change value of lock_ram_size
          |                                           |
          |------> thread wait <----------- wake up "mlock" thread
                       |
                       |
                       +------ mlock(lock_ram_size)

We have been testing this demo for a few days, and it seems to work well. But we are not sure whether there are other problems, for example if the main thread and the mlock thread access the same memory zone at the same time without a mutex lock. Is it workable? Or is there any other idea to support "memory reserved"?

Thanks
zhanghailiang

Il 05/03/2014 09:01, Zhanghailiang ha scritto:
Hi all:
Currently, we use cgroup (memory) to support memory QoS on the KVM platform, and use "mlock" in qemu to support "memory reserved".
The "mlock" approach seems inappropriate.
qemu currently mlocks memory in the main thread while holding the iothread lock (qemu_mutex_lock_iothread); if the memory size is large, that consumes a lot of time.
It means that whenever we want to set a new mlock, the VM is blocked for a while.
I'm not sure I understand how the mlock-ed memory is used. Are you using a custom malloc, for example with g_mem_set_vtable?
Paolo

Il 05/03/2014 09:01, Zhanghailiang ha scritto:
Hi all:
Currently, we use cgroup (memory) to support memory QoS on the KVM platform, and use "mlock" in qemu to support "memory reserved".
The "mlock" approach seems inappropriate.
qemu currently mlocks memory in the main thread while holding the iothread lock (qemu_mutex_lock_iothread); if the memory size is large, that consumes a lot of time.
It means that whenever we want to set a new mlock, the VM is blocked for a while.
I'm not sure I understand how the mlock-ed memory is used. Are you using a custom malloc, for example with g_mem_set_vtable?
Paolo
Hi Paolo,

Thanks for your reply. As you know, qemu has the option "-realtime mlock=on"; I think it has some problems. If we set "-realtime mlock=on", qemu will mlockall() the whole of the VM's memory. That is a very time-consuming action, and it blocks the libvirt API until it finishes. So I think it is better to do the mlock asynchronously; the flow can be described as below. Is it OK?

Flow chart:

    main function                       qmp command "set_ram_minguarantee"
          |                                           |
    create "mlock" thread               change value of lock_ram_size
          |                                           |
          |------> thread wait <----------- wake up "mlock" thread
                       |
                       |
                       +------ mlock(lock_ram_size)

zhanghailiang

Il 06/03/2014 09:06, Zhanghailiang ha scritto:
Il 05/03/2014 09:01, Zhanghailiang ha scritto:
Hi all:
Currently, we use cgroup (memory) to support memory QoS on the KVM platform, and use "mlock" in qemu to support "memory reserved".
The "mlock" approach seems inappropriate.
qemu currently mlocks memory in the main thread while holding the iothread lock (qemu_mutex_lock_iothread); if the memory size is large, that consumes a lot of time.
It means that whenever we want to set a new mlock, the VM is blocked for a while.
I'm not sure I understand how the mlock-ed memory is used. Are you using a custom malloc, for example with g_mem_set_vtable?
Paolo
Hi Paolo:
Thanks for your reply. As you know, qemu has the option "-realtime mlock=on"; I think it has some problems. If we set "-realtime mlock=on", qemu will mlockall() the whole of the VM's memory. That is a very time-consuming action, and it blocks the libvirt API until it finishes. So I think it is better to do the mlock asynchronously; the flow can be described as below.
Is an asynchronous mlock valid for all workloads? Until the mlock finishes, there is no guarantee that the guest will have a real-time-friendly response to memory allocation.
Is it OK?

Flow chart:

    main function                       qmp command "set_ram_minguarantee"
          |                                           |
    create "mlock" thread               change value of lock_ram_size
          |                                           |
          |------> thread wait <----------- wake up "mlock" thread
                       |
                       |
                       +------ mlock(lock_ram_size)
Also, I'm not sure what the arguments to mlock are. How do you find the address range to pass to mlock?
Paolo
participants (2)
- Paolo Bonzini
- Zhanghailiang