[libvirt-users] savevm and qemu 2.1.1
by Thomas Stein
hello.
I have an issue with libvirt-1.2.6 and qemu-2.1.1. As soon as I do:
"savevm $domain"
the domain is gone. The console says "unable to connect to monitor". Libvirt.log
says:
qemu-system-x86_64: /var/tmp/portage/app-emulation/qemu-2.1.1/work/qemu-2.1.1/hw/net/virtio-net.c:1348:
virtio_net_save: Assertion `!n->vhost_started' failed.
Googling for this error leads to a barely related qemu-dev thread. Has anyone
else experienced this behaviour?
Uh. Almost forgot to mention. Going back to qemu-2.1.0 solves the issue.
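For reference, a rough sketch of how the same save path can be exercised through virsh (the
names below are placeholders; this assumes the guest has a vhost-backed virtio-net interface,
which is what the assertion is checking):
# virsh snapshot-create-as $domain snap1     # internal snapshot, runs savevm on the QEMU monitor
# virsh save $domain /tmp/$domain.sav        # virsh save serialises the same virtio-net state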
best regards
t.
10 years, 2 months
[libvirt-users] 1.2.7 and 1.2.8 fail to start container: libvirt_lxc[4904]: segfault at 0 ip ...error 4 in libc-2.17.so[
by mxs kolo
Hi all
CentOS 7, 3.10.0-123.6.3.el7.x86_64
libvirt 1.2.7 and libvirt 1.2.8 built from source with
./configure --prefix=/usr
make && make install
LXC with a direct network fails to start:
Sep 16 19:19:38 node01 kernel: device br502 entered promiscuous mode
Sep 16 19:19:39 node01 kernel: device br502 left promiscuous mode
Sep 16 19:19:39 node01 avahi-daemon[1532]: Withdrawing workstation
service for macvlan0.
Sep 16 19:19:39 node01 kernel: XFS (dm-16): Mounting Filesystem
Sep 16 19:19:39 node01 kernel: XFS (dm-16): Ending clean mount
Sep 16 19:19:39 node01 kernel: libvirt_lxc[4904]: segfault at 0 ip
00007ffe3cbf0df6 sp 00007ffe3fa03c98 error 4 in
libc-2.17.so[7ffe3cabf000+1b6000]
Sep 16 19:19:39 node01 abrt-hook-ccpp: Saved core dump of pid 1
(/usr/lib/systemd/systemd) to /var/tmp/abrt/ccpp-2014-09-16-19:19:39-1
(716800 bytes)
Sep 16 19:19:39 node01 journal: Cannot recv data: Connection reset by peer
Sep 16 19:19:39 node01 journal: internal error: guest failed to start:
With libvirt-1.2.6 it works fine.
LXC config:
<domain type='lxc' id='5933'>
<name>ce7-t1</name>
<uuid>f80ad54d-6560-4bd0-aa6d-df3e29888914</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<memtune>
<hard_limit unit='KiB'>2097152</hard_limit>
<soft_limit unit='KiB'>2097152</soft_limit>
<swap_hard_limit unit='KiB'>3145728</swap_hard_limit>
</memtune>
<vcpu placement='static'>2</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64'>exe</type>
<init>/sbin/init</init>
</os>
<features>
<privnet/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/libexec/libvirt_lxc</emulator>
<filesystem type='block' accessmode='passthrough'>
<source dev='/dev/data/ce7-t1'/>
<target dir='/'/>
</filesystem>
<filesystem type='ram' accessmode='passthrough'>
<source usage='524288' units='KiB'/>
<target dir='/dev/shm'/>
</filesystem>
<interface type='direct'>
<mac address='02:00:00:58:d8:15'/>
<source dev='br502' mode='bridge'/>
</interface>
<console type='pty' tty='/dev/pts/2'>
<source path='/dev/pts/2'/>
<target type='lxc' port='0'/>
<alias name='console0'/>
</console>
</devices>
<seclabel type='none'/>
</domain>
br502 is attached to a VLAN interface:
# brctl show br502
bridge name     bridge id               STP enabled     interfaces
br502           8000.002590e2da34       no              eno1.502
With 1.2.7 and 1.2.8, LXC with a NAT network works fine.
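Since both versions are already built from source, a rebuild with debug symbols plus a backtrace
from a libvirt_lxc core (if abrt kept one, as in the log above) would probably show where it dies.
A sketch, with the core file path as a placeholder:
./configure --prefix=/usr CFLAGS="-g -O0"
make && make install
# reproduce the crash, then:
gdb /usr/libexec/libvirt_lxc /path/to/core
(gdb) bt full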
b.r.
Maxim Kozin
10 years, 2 months
[libvirt-users] cgroups inside LXC containers losts memory limits after some time
by mxs kolo
Hi all
I have CentOS Linux release 7.0.1406, libvirt 1.2.7 installed.
Just after creating and starting an LXC container, cgroups are present inside it.
Example for memory:
[root@ce7-t1 /]# ls -la /sys/fs/cgroup/memory/
total 0
drwxr-xr-x 2 root root 0 Sep 15 17:14 .
drwxr-xr-x 12 root root 280 Sep 15 17:14 ..
-rw-r--r-- 1 root root 0 Sep 15 17:14 cgroup.clone_children
--w--w--w- 1 root root 0 Sep 15 17:14 cgroup.event_control
-rw-r--r-- 1 root root 0 Sep 15 17:15 cgroup.procs
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.failcnt
--w------- 1 root root 0 Sep 15 17:14 memory.force_empty
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.failcnt
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.limit_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.max_usage_in_bytes
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.slabinfo
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.tcp.failcnt
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.tcp.limit_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.tcp.max_usage_in_bytes
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.tcp.usage_in_bytes
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.kmem.usage_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.limit_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.max_usage_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.memsw.failcnt
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.memsw.limit_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.memsw.max_usage_in_bytes
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.memsw.usage_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.move_charge_at_immigrate
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.numa_stat
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.oom_control
---------- 1 root root 0 Sep 15 17:14 memory.pressure_level
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.soft_limit_in_bytes
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.stat
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.swappiness
-r--r--r-- 1 root root 0 Sep 15 17:14 memory.usage_in_bytes
-rw-r--r-- 1 root root 0 Sep 15 17:14 memory.use_hierarchy
-rw-r--r-- 1 root root 0 Sep 15 17:14 notify_on_release
-rw-r--r-- 1 root root 0 Sep 15 17:14 tasks
Command "free" inside LXC showed almost normal values:
[root@ce7-t1 /]# free
                      total       used       free     shared    buffers     cached
Mem:                1048576      32972    1015604    4473848          0   -4445364
-/+ buffers/cache:              4478336   -3429760
Swap:               1048576          0    1048576
(some problem with negative values)
After an unpredictable amount of time (1-5 days?), the cgroups inside the LXC
container are magically removed. "free" in such containers shows 2^53-1 as the
maximum values:
[root@puppet01 /]# free
                             total        used              free   shared  buffers   cached
Mem:              9007199254740991      591180  9007199254149811        0        0   267924
-/+ buffers/cache:                       323256  9007199254417735
Swap:                            0           0                 0
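For what it's worth, that huge total is exactly 2^53 - 1, which is quick to confirm in the shell:
# echo $((2**53 - 1))
9007199254740991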
And no cgroups are present any more, at least in the memory category:
[root@puppet01 /]# ls -la /sys/fs/cgroup/memory/
total 0
b.r.
Maxim Kozin
10 years, 2 months
[libvirt-users] Using custom QEMU binaries with libvirt
by Joaquim Barrera
Hi all,
I compiled a custom version of QEMU 2.0.0 and I am having a hard time making
it available to libvirt. Just to clarify, if I execute
/usr/local/bin/qemu-system-x86_64
it performs fine. But when I put this very same path into the <emulator>
tag in a domain configuration and start the domain, I get
error: Failed to start domain vm1
error: internal error: process exited while connecting to monitor:
libvirt: error : cannot execute binary
/usr/local/bin/qemu-system-x86_64: Permission denied
I tried setting the +x permission on all the binaries in /usr/local/bin,
disabling the AppArmor profile for libvirtd, creating a profile for
/usr/local/bin/qemu-system-x86_64 and putting it into complain mode, and
creating a symlink from /usr/bin/kvm-spice to the custom binary while leaving
<emulator> at the default...
At the end of http://www.gossamer-threads.com/lists/openstack/dev/40033
I found something about AppArmor, and enabling bios.bin reading
somewhere, but I got a little confused here.
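One thing that might help isolate whether AppArmor is the culprit (only a temporary test, not a
fix; security_driver is the standard knob in /etc/libvirt/qemu.conf): disable the security driver,
restart the daemon and see whether the domain starts. If it does, the generated AppArmor profile
is the problem rather than plain file permissions.
# in /etc/libvirt/qemu.conf
security_driver = "none"
# then restart the daemon (on Ubuntu the service is typically libvirt-bin)
service libvirt-bin restart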
The most disappointing thing here is that with qemu 1.7 I could use my
custom build, but apparently something changed with 2.0 (or with the libvirt
integration).
Any ideas?
Thanks!
10 years, 2 months
[libvirt-users] VM status when guest poweroff
by Hong-Hua.Yin@freescale.com
Hi,
I tried QEMU on ARM/PPC platforms.
If I run the poweroff command in the guest, it should also shut off the VM.
But the VM status still shows 'running' instead of 'shut off'.
I guess this is a qemu issue, is that right?
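In case it is useful for the report, what libvirt itself thinks the state is can be checked with
(the guest name is just a placeholder):
# virsh domstate myguest
# virsh list --all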
Best Regards,
Olivia
10 years, 2 months
[libvirt-users] qemu:///system and qemu:///session
by navin p
Hi,
I try to connect with virsh -c qemu:///session and it shows the VMs
created by that particular user (testuser). But when I try from root with
virsh -c qemu:///system I don't see the VMs created by testuser. How do I
make them appear in root's virsh, i.e. the VMs created by testuser?
I need this because I need information for global VM statistics covering the VMs
created by all users.
Can someone help me regarding this?
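(Not a full answer, but for context: qemu:///session is a per-user daemon, so a qemu:///system
connection will never list those domains. One approach sometimes used for gathering per-user
statistics is to connect to each user's session URI explicitly, for example over ssh:)
# root-managed domains
virsh -c qemu:///system list --all
# testuser's session domains; the ssh transport runs the connection as that user
virsh -c qemu+ssh://testuser@localhost/session list --all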
Regards,
Navin
10 years, 2 months
[libvirt-users] grep ip address from KVM DHCP log
by Jianfeng Tang
Hi,
I plan to use the KVM internal network 'default' and grep the DHCP log to
figure out the IP address that was assigned to my VM.
I know I can configure a static IP, but I would like to assign IPs dynamically
to avoid management cost.
Does anyone know where the DHCP log is? My KVM host is running Ubuntu
Raring (13.04). It does not have the file /var/log/daemon.log that some online
docs mention.
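In case it helps, the dnsmasq instance libvirt starts for the 'default' network normally keeps
its lease file under /var/lib/libvirt/dnsmasq/ (the exact path can vary by distro and version),
and newer libvirt can query the leases directly; the grep pattern below is a placeholder:
# lease file written by libvirt's dnsmasq for the 'default' network
grep -i '52:54:00:' /var/lib/libvirt/dnsmasq/default.leases
# or, with libvirt >= 1.2.6:
virsh net-dhcp-leases default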
Thanks,
~Jianfeng
10 years, 2 months
[libvirt-users] Inconsistent behavior between x86_64 and ppc64 when creating guests with NUMA node placement
by Michael Turek
Hello all,
I was recently trying out NUMA placement for my guests on both x86_64
and ppc64 machines. When booting a guest on the x86_64 machine, the
following specs were valid (obviously, just notable excerpts from the XML):
<memory unit='KiB'>8388608</memory>
<currentMemory unit='KiB'>8388608</currentMemory>
<vcpu placement='static'>4</vcpu>
...
<cpu>
<topology sockets='4' cores='1' threads='1'/>
<numa>
<cell cpus='0-2' memory='6144'/>
<cell cpus='3' memory='2048'/>
</numa>
</cpu>
However, on ppc64 this causes the following error:
error: Failed to create domain from sample_guest.xml
error: internal error: early end of file from monitor: possible problem:
2014-09-11T18:44:25.502140Z qemu-system-ppc64: total memory for NUMA
nodes (8388608) should equal RAM size (200000000)
The 200000000 is actually 8192 MB expressed in bytes and hexadecimal. This is
apparently just an issue with the error message.
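A quick check of that with plain shell arithmetic:
# printf '%d\n' 0x200000000
8589934592
# echo $((8192 * 1024 * 1024))
8589934592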
The following specs work on ppc64:
<cpu>
<topology sockets='4' cores='1' threads='1'/>
<numa>
<cell cpus='0-2' memory='6291456'/>
<cell cpus='3' memory='2097152'/>
</numa>
</cpu>
Note that the memory for each cell is 6144*1024 and 2048*1024
respectively. The issue is that the memory size for each NUMA cell
should be specified in KiB, not MB
(http://libvirt.org/formatdomain.html#resPartition "|memory| specifies
the node memory in kibibyte").
In short, it seems that specifying NUMA cell memory in MB works on
x86_64 but not on ppc64. Does anyone have any insight into what's causing
this, or whether I'm misunderstanding something? Any help is appreciated,
thank you!
Regards,
Michael Turek
10 years, 2 months
[libvirt-users] ntpd in VM
by Adam King
----- Original Message -----
From: "Adam King" <kinga(a)sghs.org.uk>
To: "Gary Hook" <garyrhook(a)gmail.com>
Sent: Friday, September 12, 2014 1:39:38 PM
Subject: Re: [libvirt-users] ntpd in VM
----- Original Message -----
From: "Gary Hook" <garyrhook(a)gmail.com>
To: libvirt-users(a)redhat.com
Sent: Friday, September 12, 2014 1:25:47 PM
Subject: Re: [libvirt-users] ntpd in VM
While I agree that running a time server in a VM is, at best, problematic, most of those naysayers have experience with VMware, Xen and the like. Those aren't the only hypervisors out there, and the decision process should depend upon the hypervisor (to a significant degree), not just the idea of "can I do this in a VM?" Not all hypervisors are created equal.
On Fri, Sep 12, 2014 at 7:14 AM, Pierre Schweitzer < pierre(a)reactos.org > wrote:
Hi,
It is still a bad idea. I invite you to read here for the reasons why:
http://serverfault.com/a/106509/150152
Cheers,
On 09/12/2014 12:51 PM, Mauricio Tavares wrote:
> I was taught in kitty school that running a ntp server in a vm
> was a bad idea. Is that still the case?
>
My experience is somewhat different. VMware has always been problematic with this, especially with Linux guests.
I've run KVM-based NTP servers inside VMs (2 as active/standby) and dozens of NTP clients inside VMs for over a year now and seen no adverse results.
Admittedly I've not been checking the drift, but I can vouch for the results.
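For anyone who does want to check, the standard ntp tools are enough (nothing VM-specific is
assumed here; the drift file path is the usual Debian/Ubuntu default):
# current peer offsets and jitter, in milliseconds
ntpq -p
# accumulated frequency correction, in ppm
cat /var/lib/ntp/ntp.drift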
Regards
Adam King
10 years, 2 months