[libvirt-users] Ceph RBD locking for libvirt-managed LXC (someday) live migrations
by Joshua Dotson
Hi,
I'm trying to build an active/active virtualization cluster using a Ceph
RBD as backing for each libvirt-managed LXC. I know live migration for LXC
isn't yet possible, but I'd like to build my infrastructure as if it were.
That is, I would like to be sure proper locking is in place for live
migrations to someday take place. In other words, I'm building things as
if I were using KVM and live migration via libvirt.
I've been looking at corosync, pacemaker, virtlock, sanlock, gfs2, ocfs2,
glusterfs, cephfs, ceph RBD and other solutions. I admit that I'm quite
confused. If oVirt, with its embedded GlusterFS and its planned
self-hosted engine option, supported LXC, I'd use that. However the stars
have not yet aligned for that.
It seems that the most elegant and scalable approach may be to utilize
Ceph's RBD with its native locking mechanism plus corosync and pacemaker
for fencing, for a number of reasons out of scope for this email.
*My question now is in regards to proper locking. Please see the following
links. The libvirt hook looks good, but is there any expectation that this
arrangement will become a patch to libvirt itself, as is suggested by the
second link?*
http://www.wogri.at/en/linux/ceph-libvirt-locking/
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2013-August/003887.html
Can anyone guide me on how to theoretically build a very "lock" safe 5-node
active-active KVM cluster atop Ceph RBD? Must I use sanlock with its NFS
or GFS2 with its performance bottlenecks? Does your answer work for LXC
(sans the current state of live migration)?
Thanks,
Joshua
--
Joshua Dotson
Founder, Wrale Ltd
10 years, 8 months
[libvirt-users] Live migration finish threshold
by Joaquim Barrera
Hello everyone,
Can somebody point where I can find the code where libvirt makes the
decision to complete a live migration?
I mean, at some point syncronising the VM state, it has to decide that
the delta left to be migrate is low enough to achieve downtime 0, so
libvirt finishes the migration. This "low enough" must be defined
somewhere in the code, but I am unable to find it.
Thank you very much.
10 years, 8 months
[libvirt-users] vnc port/listen address ignored when setting machine?
by hubert depesz lubaczewski
Hi,
First of all, I hope it's not a big problem - I'm running on Debian, not
Redhat.
To my problem: I'm starting to learn virtualization, libvirt, and
decided to create some test machine. I did it with:
virt-install --name debian-test \
--os-type=linux \
--os-variant=debianwheezy \
--cdrom /media/media/software/iso/debian-testing-amd64-netinst-2014-01-16.iso \
--graphics vnc,listen=0.0.0.0,port=20001 \
--disk pool=default,format=raw,size=20 \
--ram 2048 \
--vcpus=2 \
--network bridge=virbr0 \
--hvm \
--virt-type=kvm
Machine starts, but domdisplay shows:
=# virsh domdisplay debian-test
vnc://localhost:14101
When I tried setting it with --grpahics=vnc,...port=40001, I got it to listen
on port 34101 (which is weird anyway, but at least I can change it).
But whatever I do - I can't make it to listen on 0.0.0.0.
I know I can use ssh port forwarding to get to this VNC, but I would very much
prefer to be able to set VNC listen to 0.0.0.0 for at least some of my test
domains.
What am I doing wrong?
Best regards,
depesz
--
The best thing about modern society is how easy it is to avoid contact with it.
http://depesz.com/
10 years, 8 months
[libvirt-users] Does libvirt lxc driver support "cpuset" attribute?
by WANG Cheng D
Dear all
I allocate only one vcpu for the container by the following statement, that is, I want to pin the vcpu to physical core "2".
<vcpu placement='static' cpuset="2" >1</vcpu>
My host has 4 physical cores. Before test, all the 4 cores are idle. After I run 4 processes in the container, I found all the 4 cores in the host are 100% used. That is, the container is pinned to all the available physical CPUs. This is not what I want.
My libvirt version is 1.0.3.
Although "vcpupin" element also can be used to pin vcpu, according to http://libvirt.org/formatdomain.html , "vcpupin" element is not supported by lxc driver.
I wonder if it is the older version of libvirt that causes the problem?
Thank you in advance.
Cheng Wang
10 years, 8 months
[libvirt-users] Fake Network Interface
by Andrew Martin
Hello,
Is there a supported method for creating a fake network interface in a VM's configuration file? I was using the below construct, however it is no longer working for me in recent versions of libvirt (libvirt 1.0.2 with qemu-kvm 1.4.0). Is there a different, preferred method for creating a fake virtual interface on the guest which does not exist on the host?
<interface type='network'>
<mac address='aa:aa:aa:aa:aa:aa'/>
<source network='fake'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Thanks,
Andrew
10 years, 8 months
[libvirt-users] virDBusCallMethod:1173 : Launch helper exited with unknown return code
by Karoline Haus
Good morning,
I'm using libvirtd on Gentoo.
This is libvirt version: 1.1.3.1
I have trouble starting a VM using virsh start $vm. I do this as root, because as non-root user it did not work at all (especially it failed attaching to the networks). So, when I run the command (with sudo), I get the following error in libvirtd.log:
2014-01-15 07:51:00.423+0000: 16158: warning : qemuDomainObjTaint:1573 : Domain id=5 name='vader' uuid=f5b8c05b-9c7a-3211-49b9-2bd635f7e2aa is tainted: high-privileges
2014-01-15 07:51:00.428+0000: 16158: error : virDBusCallMethod:1173 : Launch helper exited with unknown return code 1
At the same time I get an error in /var/log/messages which seems related:
Jan 15 07:51:00 dbus[15845]: [system] Activating service name='org.freedesktop.machine1' (using servicehelper)
Jan 15 07:51:00 dbus[15845]: [system] Activated service 'org.freedesktop.machine1' failed: Launch helper exited with unknown return code 1
Anyone ever seen this issue? I have no idea where to look for errors because the message don't really tell me much. I have tried to execute the qemu-kvm command on the command line directly and that worked immediately. So the problem must be in libvirt.
Thanks for any pointers.
10 years, 8 months
[libvirt-users] Best practice for custom iptables rules
by ZeroUno
Hi,
I'm using libvirt to manage some VMs on a CentOS host, and I need some
custom iptables rules to always be in place for some communications to
happen, e.g. between the VMs and the outside world in both directions.
Some of these rules need to be at the top of the iptables chain,
otherwise the default rules added by libvirt would block the
communications I need.
So I cannot just add the rules in /etc/sysconfig/iptables, because
libvirt adds its own rules _before_ the rules contained in this config file.
I was looking at filters, but maybe not every rule can be made into a
filter?
Specifically, I need a rule for the POSTROUTING chain in the "nat"
table. Can it be added through filters?
Also, regarding the "iptables restart problem" described in the last
paragraph at <http://libvirt.org/firewall.html>, is there really no
acceptable way to make libvirt add its rules back automatically upon
iptables/network restart?
Thanks for any info.
Marco
--
01
10 years, 8 months
[libvirt-users] how to detect if qemu supports live disk snapshot
by Francesco Romani
Hi everyone,
Using the QEMU hypervisor, when a live disk snapshot is requested through libvirt,
the request can fail if the underyling qemu binary lacks the snapshotting
support.
In python, we have something like
libvirtError: Operation not supported: live disk snapshot not supported with this QEMU binary
I'd like to detect ahead of time if the underlying QEMU can or cannot do snapshotting,
and disable right from the start the snapshotting support in my application, avoiding to throw
errors to the user.
After reading from the docs, it seems that libvirt doesn't reports yet this information about
the hypervisor in the capabilities XML: http://libvirt.org/formatcaps.html
a quick test against libvirt 1.1.3 (qemu 1.6.1) on Fedora 20 seems to confirm (see XML dump below[1]).
So, if I'm not mistaken, looks like the only way to test for the support in QEMU is to actually
request a snapshot and see what happens.
Is this right?
If this is correct, I would like to craft and propose a patch to libvirt to export this information,
maybe into a form of some kind of warning list/blacklist: disk snapshotting it is supposed to be
available, but this particular QEMU doesn't support it.
What would be the best form to export such information? the options I see are
* enhance the capabilities XML
* add a new API: something like (very rough first draft)
int virDomainIsSnapshotSupported(virConnectPtr conn, unsigned int flags);
Thoughts? Any suggestion is welcome.
Best regards,
+++
[1]
# virsh -c qemu:///system
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # capabilities
<capabilities>
<host>
<uuid>01c4dc83-b652-cb11-9e6d-f3bf03b35d26</uuid>
<cpu>
<arch>x86_64</arch>
<model>SandyBridge</model>
<vendor>Intel</vendor>
<topology sockets='1' cores='2' threads='2'/>
<feature name='erms'/>
<feature name='smep'/>
<feature name='fsgsbase'/>
<feature name='rdrand'/>
<feature name='f16c'/>
<feature name='osxsave'/>
<feature name='pcid'/>
<feature name='pdcm'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='smx'/>
<feature name='vmx'/>
<feature name='ds_cpl'/>
<feature name='monitor'/>
<feature name='dtes64'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
<feature name='vme'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
<suspend_hybrid/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>7871524</memory>
<cpus num='4'>
<cpu id='0' socket_id='0' core_id='0' siblings='0-1'/>
<cpu id='1' socket_id='0' core_id='0' siblings='0-1'/>
<cpu id='2' socket_id='0' core_id='1' siblings='2-3'/>
<cpu id='3' socket_id='0' core_id='1' siblings='2-3'/>
</cpus>
</cell>
</cells>
</topology>
<secmodel>
<model>selinux</model>
<doi>0</doi>
</secmodel>
<secmodel>
<model>dac</model>
<doi>0</doi>
</secmodel>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/bin/qemu-system-i386</emulator>
<machine canonical='pc-i440fx-1.6' maxCpus='255'>pc</machine>
<machine maxCpus='255'>pc-q35-1.4</machine>
<machine maxCpus='255'>pc-q35-1.5</machine>
<machine canonical='pc-q35-1.6' maxCpus='255'>q35</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255'>pc-0.10</machine>
<machine maxCpus='255'>pc-0.11</machine>
<machine maxCpus='255'>pc-0.12</machine>
<machine maxCpus='255'>pc-0.13</machine>
<machine maxCpus='255'>pc-0.14</machine>
<machine maxCpus='255'>pc-0.15</machine>
<machine maxCpus='255'>pc-1.0</machine>
<machine maxCpus='255'>pc-1.1</machine>
<machine maxCpus='255'>pc-1.2</machine>
<machine maxCpus='255'>pc-1.3</machine>
<machine maxCpus='255'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-i440fx-1.5</machine>
<machine maxCpus='1'>none</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/bin/qemu-kvm</emulator>
<machine canonical='pc-i440fx-1.6' maxCpus='255'>pc</machine>
<machine maxCpus='255'>pc-q35-1.4</machine>
<machine maxCpus='255'>pc-q35-1.5</machine>
<machine canonical='pc-q35-1.6' maxCpus='255'>q35</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255'>pc-0.10</machine>
<machine maxCpus='255'>pc-0.11</machine>
<machine maxCpus='255'>pc-0.12</machine>
<machine maxCpus='255'>pc-0.13</machine>
<machine maxCpus='255'>pc-0.14</machine>
<machine maxCpus='255'>pc-0.15</machine>
<machine maxCpus='255'>pc-1.0</machine>
<machine maxCpus='255'>pc-1.1</machine>
<machine maxCpus='255'>pc-1.2</machine>
<machine maxCpus='255'>pc-1.3</machine>
<machine maxCpus='255'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-i440fx-1.5</machine>
<machine maxCpus='1'>none</machine>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
<pae/>
<nonpae/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<machine canonical='pc-i440fx-1.6' maxCpus='255'>pc</machine>
<machine maxCpus='255'>pc-q35-1.4</machine>
<machine maxCpus='255'>pc-q35-1.5</machine>
<machine canonical='pc-q35-1.6' maxCpus='255'>q35</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255'>pc-0.10</machine>
<machine maxCpus='255'>pc-0.11</machine>
<machine maxCpus='255'>pc-0.12</machine>
<machine maxCpus='255'>pc-0.13</machine>
<machine maxCpus='255'>pc-0.14</machine>
<machine maxCpus='255'>pc-0.15</machine>
<machine maxCpus='255'>pc-1.0</machine>
<machine maxCpus='255'>pc-1.1</machine>
<machine maxCpus='255'>pc-1.2</machine>
<machine maxCpus='255'>pc-1.3</machine>
<machine maxCpus='255'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-i440fx-1.5</machine>
<machine maxCpus='1'>none</machine>
<domain type='qemu'>
</domain>
<domain type='kvm'>
<emulator>/usr/bin/qemu-kvm</emulator>
<machine canonical='pc-i440fx-1.6' maxCpus='255'>pc</machine>
<machine maxCpus='255'>pc-q35-1.4</machine>
<machine maxCpus='255'>pc-q35-1.5</machine>
<machine canonical='pc-q35-1.6' maxCpus='255'>q35</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255'>pc-0.10</machine>
<machine maxCpus='255'>pc-0.11</machine>
<machine maxCpus='255'>pc-0.12</machine>
<machine maxCpus='255'>pc-0.13</machine>
<machine maxCpus='255'>pc-0.14</machine>
<machine maxCpus='255'>pc-0.15</machine>
<machine maxCpus='255'>pc-1.0</machine>
<machine maxCpus='255'>pc-1.1</machine>
<machine maxCpus='255'>pc-1.2</machine>
<machine maxCpus='255'>pc-1.3</machine>
<machine maxCpus='255'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-i440fx-1.5</machine>
<machine maxCpus='1'>none</machine>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
</capabilities>
--
Francesco Romani
10 years, 8 months
[libvirt-users] --persistent/--config confusion
by Dennis Jacobfeuerborn
Hi,
I'm wondering how to attach a network device properly to a domain.
According to the man page without "--config" a device created with
"attach-device" will be added to the domain yet with "--config" it will
only be added to the config and only be available after a restart of the
guest.
Is there no way to attach the device immediately *and* also make that
change persistent?
Regards,
Dennis
10 years, 8 months
[libvirt-users] Dedicated GDM session for a virt-manager virtual machine?
by Alex GS
Dear list,
Something has been perplexing me lately. I have a bunch of Windows 7
virtual machines running on top of Fedora 20 in KVM using virt-manager.
One of the big problems is that users who need the Windows VM's don't know
how to user virt-manager and when they accidentally reboot a machine I have
to manually setup the virt-manager session for them and make it
full-screen. That means if I'm not physically there the users are unable
to use the Windows sessions and cannot do their work.
Is there a way to create a GDM desktop session that just launches a virtual
machine via virsh and then logs directly into a full-screen virt-viewer
instance of that virtual machine?
Best,
AGS
10 years, 8 months