On 04/16/2012 09:43 PM, Daniel P. Berrange wrote:
> On Mon, Apr 16, 2012 at 06:00:22PM +0200, Marc-André Lureau wrote:
>> Hi
>>
>> On Mon, Apr 16, 2012 at 2:32 PM, Srivatsa S. Bhat
>> <srivatsa.bhat@linux.vnet.ibm.com> wrote:
>>> On 04/16/2012 05:34 PM, Marc-André Lureau wrote:
>>> Did you happen to perform a suspend/resume or a hibernation/restore
>>> on your computer? (Or did you do CPU hotplug manually?)
>>>
>>> If yes, you might be seeing the problem reported at:
>>>
>>> https://bugzilla.redhat.com/show_bug.cgi?id=714271
>>
>> yep, thank you very much, that seems to be the reason
Thanks for the confirmation. The script below might work as a
temporary workaround.
> This problem has existed for ages now. I wonder if there's a way we can get
> a userspace workaround implemented.
>
> IIRC, pm-utils has the ability to run arbitrary shell scripts upon
> restore from suspend/hibernate. We could put a temp hack in libvirt
> which resets the CPU affinity in the top level libvirt cgroup. That
> would at least make new VMs start with good affinity. To deal with
> existing running VMs, we would need to record existing affinity
> before suspend & re-store it fully afterwards, which is more complicated.
I don't think that would be all that complicated. Below is a script
that should do the trick in all cases, including existing running VMs.
This script is not at all specific to libvirt by the way (as the problem
itself is not at all libvirt specific). It saves the cpuset configuration
before suspend and restores it after resume, for all cpusets (not only
the ones controlled by libvirt). Of course, it hooks into the pm-utils
mechanism, as you mentioned.
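
For anyone else hitting this: the symptom is that after a resume, every
cpuset's cpuset.cpus file is collapsed to just the boot CPU. For example,
on a 4-CPU machine (the mount point and the "libvirt" group name below are
only illustrative and may differ on your system):

$ cat /sys/fs/cgroup/cpuset/libvirt/cpuset.cpus
0

where it would have read 0-3 before the suspend.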
Regards,
Srivatsa S. Bhat
IBM Linux Technology Center
Note: Put this script in /usr/<lib or lib64>/pm-utils/sleep.d, prefixing its
name with an appropriate number (e.g. /usr/lib64/pm-utils/sleep.d/02cpusets.sh),
and then give the script execute permissions. It will save the cpu
configuration of all the cpusets before suspend and restore it after resume.
This is a workaround until the problem gets fixed in the kernel.
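
For example, on a 64-bit system (the directory and the "02" prefix here are
only illustrative):

$ sudo cp cpusets.sh /usr/lib64/pm-utils/sleep.d/02cpusets.sh
$ sudo chmod +x /usr/lib64/pm-utils/sleep.d/02cpusets.sh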
cpusets.sh
----
#! /bin/bash
# Script to save and restore the cpusets' cpu configuration after a
# suspend/resume cycle.
# This is a workaround for a Linux kernel bug which results in cpusets
# being reduced to a single cpu (boot cpu) after a suspend/resume
# cycle.
#
# Author: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
# 16 March 2012
#
save_cpusets()
{
    rm -f saved_cpusets.txt

    # Find where the cpuset controller is mounted (either as a standalone
    # cpuset filesystem or as part of a cgroup hierarchy). If it is not
    # mounted at all, cpusets are not being used and there is nothing to do.
    dir=`mount | grep cpuset | cut -d' ' -f3`
    if [ -z "$dir" ]; then
        exit 0
    fi

    # Omit the root cpuset (using the -mindepth parameter) since it is
    # read-only (and will be restored properly after resume by the kernel
    # itself).
    # Sorting is necessary because otherwise the hierarchy is not preserved
    # and restore_cpusets() will have trouble restoring the cpusets.
    # This is because of the rule that a child cpuset cannot have a cpu that
    # is not present in its parent cpuset.
    find $dir -mindepth 2 -name "cpuset.cpus" -type f -print | sort > saved_path.txt

    # Nothing to save if there are no cpusets below the root
    if [ ! -s saved_path.txt ]; then
        rm -f saved_path.txt
        exit 0
    fi

    # Record the current cpu configuration of each cpuset alongside its
    # path, one "<path-to-cpuset.cpus> <saved value>" pair per line.
    for cpuset in `cat saved_path.txt`
    do
        cat $cpuset >> saved_cpusets.txt
    done
    paste saved_path.txt saved_cpusets.txt > temp.txt
    mv temp.txt saved_cpusets.txt
    rm -f saved_path.txt
}
restore_cpusets()
{
    if [ ! -f saved_cpusets.txt ]; then
        # Nothing to be done
        exit 0
    fi

    # Each line of saved_cpusets.txt holds the path of a cpuset.cpus file
    # and the value it had before suspend; write that value back.
    while read cpuset_path value
    do
        echo "$value" > "$cpuset_path"
    done < saved_cpusets.txt

    rm -f saved_cpusets.txt
}
case "$1" in
suspend|hibernate)
save_cpusets
;;
resume|thaw)
restore_cpusets
;;
*)
;;
esac
exit $?
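
To sanity-check the hook logic without going through an actual suspend, it
can also be invoked by hand with the same arguments pm-utils passes to its
sleep.d hooks (run it from a scratch directory, since the script keeps its
state files in the current working directory; the install path is the
illustrative one from the note above):

$ sudo /usr/lib64/pm-utils/sleep.d/02cpusets.sh suspend   # writes saved_cpusets.txt
$ sudo /usr/lib64/pm-utils/sleep.d/02cpusets.sh resume    # restores the values and cleans up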