[libvirt-users] Determine ongoing shutdown via libvirt or preventing migrations to host
by Ondřej Kunc
Hi list,
Is there any function in libvirt to determine the runlevel of a host, or
any way to prevent VM migrations to that host?
I'm developing a simple cluster of 4 servers running KVM VMs. For
management I decided to use python-libvirt. Every VM is diskless and
boots via PXE, so it can run on any host server. Every VM has one host
assigned as its monitor, which periodically checks that the VM is
running on one of the other hosts. If it is not running, it is started
on the first available server assigned to that VM. When one of the hosts
is shut down, it migrates all its VMs to the other hosts, but after the
reboot the monitors migrate them back. The problem is a race condition:
a monitor can see that a host is still up shortly before it shuts down,
and migrate VMs back to the very server that is about to reboot.
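One way to close that race, sketched below under assumptions (libvirt has no such feature built in; the flag path, hook mechanism, and host names are all hypothetical): have each host raise a "maintenance" flag file at the very start of its shutdown sequence, e.g. from a systemd ExecStop= hook, and have every monitor skip any candidate host whose flag is present:

```python
import os
import tempfile

def host_accepts_vms(flag_path):
    """A host that is shutting down creates the flag file first (e.g. from
    a shutdown hook); monitors must skip any host where it exists."""
    return not os.path.exists(flag_path)

def pick_target(hosts):
    """hosts maps host name -> its maintenance-flag path (in practice the
    flag would be read over shared storage or the monitor's own channel).
    Return the first host still accepting VMs, or None."""
    for name, flag in hosts.items():
        if host_accepts_vms(flag):
            return name
    return None

# demo: host "a" is mid-shutdown, so the monitor must choose "b"
with tempfile.TemporaryDirectory() as d:
    a_flag, b_flag = os.path.join(d, "a"), os.path.join(d, "b")
    open(a_flag, "w").close()  # "a" raised its maintenance flag
    print(pick_target({"a": a_flag, "b": b_flag}))  # → b
```

The key ordering property is that the flag is created before libvirtd starts migrating VMs away, so a monitor can never observe "host alive, flag absent" on a host that has begun shutting down.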
Thank you for any help
--
Ondřej Kunc, Senior Linux Administrator
IGNUM s.r.o., Vinohradská 190, Praha 3, 130 61, CZE
Mobil: +420 603 111 111 | Fax: +420 296 332 222
www.ignum.cz | www.domena.cz | www.webcloud.cz
Re: [libvirt-users] gentoo linux, problem starting VMs when cache=none
by Daniel P. Berrange
On Tue, Dec 10, 2013 at 03:21:59PM +0100, Marko Weber | ZBF wrote:
> Hello Daniel,
>
> Am 2013-12-10 11:23, schrieb Daniel P. Berrange:
> >On Tue, Dec 10, 2013 at 11:20:35AM +0100, Marko Weber | ZBF wrote:
> >>
> >>hello mailinglist,
> >>
> >>on gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7,
> >>virt-manager 0.10.0-r1
> >>
> >>when i set "cache=none" on the virtual machine in the disk menu, the
> >>machine fails to start with:
> >>
> >><<
> >>Error starting domain: internal error: process exited while
> >>connecting to monitor: qemu-system-x86_64:
> >>-drive file=/raid6/virtual/debian-kaspersky.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
> >>could not open disk image /raid6/virtual/debian-kaspersky.img:
> >>Invalid argument
> >>
> >>
> >>Traceback (most recent call last):
> >> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100,
> >>in cb_wrapper
> >> callback(asyncjob, *args, **kwargs)
> >> File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122,
> >>in tmpcb
> >> callback(*args, **kwargs)
> >> File "/usr/share/virt-manager/virtManager/domain.py", line 1210,
> >>in startup
> >> self._backend.create()
> >> File "/usr/lib64/python2.7/site-packages/libvirt.py", line 708, in
> >>create
> >> if ret == -1: raise libvirtError ('virDomainCreate() failed',
> >>dom=self)
> >>libvirtError: internal error: process exited while
> >>connecting to monitor: qemu-system-x86_64:
> >>-drive file=/raid6/virtual/debian-kaspersky.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
> >>could not open disk image /raid6/virtual/debian-kaspersky.img:
> >>Invalid argument
> >>>>
> >>
> >>when i switch back to cache=default, the machine starts like a charm.
> >>
> >>any ideas why? i can't find anything in the logs, only this "invalid
> >>argument".
> >
> >cache=none uses O_DIRECT flag. if you get "Invalid argument" it usually
> >means that your filesystem is bad and does not support O_DIRECT
>
> i use vanilla-kernel 3.10.23 and xfs on the partition where the vm
> .img is lying.
>
> the mount options i use: /raid6 type xfs
> (rw,noatime,nodiratime,largeio,nobarrier,logbsize=256k)
>
> any wrong here?
XFS should support O_DIRECT, but possibly one of the mount options is
causing trouble. I don't know enough about XFS to answer for sure.
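To narrow this down independently of qemu, one can probe whether the filesystem under the image accepts O_DIRECT opens at all. A minimal sketch (the directory argument is an assumption; point it at the mount holding the .img file, e.g. /raid6):

```python
import os

def supports_o_direct(directory):
    """Try to open a scratch file under `directory` with O_DIRECT; an
    open(2) failure here reproduces qemu's "Invalid argument"."""
    probe = os.path.join(directory, ".odirect_probe")
    try:
        fd = os.open(probe, os.O_CREAT | os.O_WRONLY | os.O_DIRECT, 0o600)
    except OSError:
        return False
    os.close(fd)
    os.unlink(probe)
    return True

print(supports_o_direct("."))
```

Note the probe only checks the open; actual O_DIRECT I/O additionally requires aligned buffers, which qemu handles itself. A False result on e.g. tmpfs is expected, since tmpfs does not support O_DIRECT.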
BTW, please keep your replies on the mailing list...
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
[libvirt-users] gentoo linux, problem starting VMs when cache=none
by Marko Weber | ZBF
hello mailinglist,
on gentoo system with qemu-1.6.1, libvirt 1.1.4, libvirt-glib-0.1.7,
virt-manager 0.10.0-r1
when i set "cache=none" on the virtual machine in the disk menu, the
machine fails to start with:
<<
Error starting domain: internal error: process exited while connecting
to monitor: qemu-system-x86_64: -drive
file=/raid6/virtual/debian-kaspersky.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
could not open disk image /raid6/virtual/debian-kaspersky.img: Invalid
argument

Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 100, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 122, in tmpcb
    callback(*args, **kwargs)
  File "/usr/share/virt-manager/virtManager/domain.py", line 1210, in startup
    self._backend.create()
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 708, in create
    if ret == -1: raise libvirtError ('virDomainCreate() failed', dom=self)
libvirtError: internal error: process exited while connecting to
monitor: qemu-system-x86_64: -drive
file=/raid6/virtual/debian-kaspersky.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,aio=native:
could not open disk image /raid6/virtual/debian-kaspersky.img: Invalid
argument
>>
when i switch back to cache=default, the machine starts like a charm.
any ideas why? i can't find anything in the logs, only this "invalid
argument".
the machine was created with the profile "linux / debian squeeze" since
it is a debian 6 guest.
marko
[libvirt-users] ANNOUNCE: ruby-libvirt 0.5.0
by Chris Lalancette
All,
I'm pleased to announce the release of ruby-libvirt 0.5.0. ruby-libvirt
is a ruby wrapper around the libvirt API. Version 0.5.0 brings new APIs, more
documentation, and bugfixes:
* Updated Network class, implementing almost all libvirt APIs
* Updated Domain class, implementing almost all libvirt APIs
* Updated Connection class, implementing almost all libvirt APIs
* Updated DomainSnapshot class, implementing almost all libvirt APIs
* Updated NodeDevice class, implementing almost all libvirt APIs
* Updated Storage class, implementing almost all libvirt APIs
* Add constants for almost all libvirt defines
* Improved performance in the library by using alloca
Version 0.5.0 is available from http://libvirt.org/ruby:
Tarball: http://libvirt.org/ruby/download/ruby-libvirt-0.5.0.tgz
Gem: http://libvirt.org/ruby/download/ruby-libvirt-0.5.0.gem
It is also available from rubygems.org; to get the latest version, run:
$ gem install ruby-libvirt
As usual, if you run into questions, problems, or bugs, please feel free to
mail me (clalancette(a)gmail.com) and/or the libvirt mailing list.
Thanks to everyone who contributed patches and submitted bugs.
Chris Lalancette
[libvirt-users] Question about setns recognising in libvirt autoconf
by hzguanqiang@corp.netease.com
Hi experts,
When I test an LXC container with the lxc-enter-namespace command, it
reported the following error:
root@debian:~/github/libvirt# virsh lxc-enter-namespace lxc --noseclabel /bin/df -hl
error: Cannot get namespaces for 3145: Function not implemented
It seems that setns is not supported by my kernel.
But from the following output, it seems the real reason is just that
libvirt's autoconf check does not recognise setns:
root@debian:~/github/libvirt# grep setns /proc/kallsyms
ffffffff8105b78b T SyS_setns
ffffffff8105b78b T sys_setns
root@debian:~/github/libvirt# ./configure | grep setns
checking for setns... no
What is the actual problem, and how can I fix it?
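For context, configure's `checking for setns` probes the C library, not the kernel: `/proc/kallsyms` only proves the syscall exists, while autoconf looks for a libc wrapper function, which older glibc releases (before 2.14) did not ship. A small probe of the wrapper, as a sketch assuming a glibc-style libc:

```python
import ctypes
import ctypes.util

# dlopen the C library; the attribute lookup below triggers a dlsym(),
# which fails with AttributeError if libc exports no setns() wrapper --
# exactly the case where configure prints "checking for setns... no"
# even though the kernel implements the syscall.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
try:
    libc.setns
    has_setns_wrapper = True
except AttributeError:
    has_setns_wrapper = False
print(has_setns_wrapper)
```

If this prints False while `/proc/kallsyms` shows `sys_setns`, upgrading glibc (or building libvirt with a direct `syscall(__NR_setns, ...)` fallback) is the fix, not a kernel change.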
Thanks~
------------------
Best regards!
GuanQiang
2013-12-09
[libvirt-users] Libvirt support for KVM/ARM
by 林佳緯
Sorry to ask a similar question again.
(I previously asked about "libvirt support for arm platform",
but what I really want to ask about is libvirt for KVM/ARM.)
My question:
Does the libvirt API support KVM/ARM now?
Thanks for any answer.
Gavin
[libvirt-users] assign static external IP to container
by scar
hello, i have a server colocated in a datacenter with several external IP
addresses available to use. the physical server is using one of these
IPs, and i want to assign another, unused IP to the virtual machine. i
thought i could do this just by editing the container's
/etc/network/interfaces and setting a static IP address for eth0, much
like i did for br0 on the host machine.... but it doesn't seem to be
working. ifconfig shows eth0 has the external address, but i can't
resolve any hostnames or telnet to a direct IP address (no route to
host). if i change back to dhcp and let eth0 get an internal address, i
can at least access the internet, but then cannot access the virtual
machine from the internet. what is the trick to giving a VM a routable,
external IP address? thanks
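For what it's worth, "no route to host" on a static stanza usually points at a missing gateway line (and losing name resolution points at resolv.conf no longer being populated once DHCP is off). A sketch of a complete static stanza for the bridged guest; every address below is a placeholder, substitute the datacenter's actual values:

```
auto eth0
iface eth0 inet static
    address 203.0.113.10       # one of the unused external IPs (placeholder)
    netmask 255.255.255.0      # as assigned by the datacenter
    gateway 203.0.113.1        # the same upstream gateway the host's br0 uses
    dns-nameservers 203.0.113.53   # needed since dhcp no longer supplies resolvers
```

Without the gateway line the guest can only reach its own subnet, which matches the symptoms described.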
[libvirt-users] correct way to hot-add cdrom ?
by Alexandr
Good day to all. I have a problem with my CD-ROM hot-add code. Currently
I use virDomainAttachDevice with type=file, device=cdrom, dev=hdc. This
works for a machine with one IDE HDD and one IDE CD-ROM, but it does not
work for a machine with only one IDE HDD. I am looking for a way to
hot-add a CD-ROM to a machine independent of its existing devices, or a
way to determine which target device can be attached to the VM. Thanks
in advance for any help.
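The "which target is free" half can be answered from the domain's own XML (virDomainGetXMLDesc) before calling virDomainAttachDevice: collect the target dev names already in use and pick the first unused IDE slot. A sketch in Python; the helper and the sample XML are illustrative, not part of the libvirt API:

```python
import xml.etree.ElementTree as ET

def free_ide_target(domain_xml):
    """Return the first IDE target name (hda..hdd) not already used by a
    <target dev='...'/> element in the domain's XML description."""
    used = {t.get("dev") for t in ET.fromstring(domain_xml).iter("target")}
    for dev in ("hda", "hdb", "hdc", "hdd"):
        if dev not in used:
            return dev
    raise RuntimeError("no free IDE target")

# a trimmed, hypothetical domain description with a single IDE disk
sample = """<domain><devices>
  <disk type='file' device='disk'>
    <target dev='hda' bus='ide'/>
  </disk>
</devices></domain>"""
print(free_ide_target(sample))  # → hdb
```

In a real monitor you would feed `dom.XMLDesc(0)` to the helper and put the returned name into the `<target dev='...' bus='ide'/>` element of the CD-ROM XML passed to attachDevice, instead of hard-coding hdc.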
[libvirt-users] libvirt, Open vSwitch and iptables
by Yoann Juet
Hi all,
We have been using libvirt for a long time with KVM guest machines and
Linux bridges. Firewall rules based on iptables, defined on the host
server, control inbound/outbound traffic to/from each VM. To improve
remote administration and gain extra services, it makes sense for us to
replace the Linux bridges with Open vSwitch. The side effect, however,
is that this setup cannot filter VM traffic, since iptables rules cannot
be applied to OVS bridges. OpenStack/Quantum circumvents this problem
(leaving performance aside) by inserting an extra Linux bridge and a
veth pair between the guest TAP device and OVS.
Is there a simple or alternative solution to achieve this without
installing the OpenStack/Quantum layer?
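For reference, the Quantum-style shim can be built by hand per VM without installing any OpenStack component: a Linux bridge (where iptables still applies) joined to the OVS bridge by a veth pair. A sketch only; all device names are placeholders and the commands need root:

```
# per-VM shim: guest tap -> qbr0 (linux bridge, filterable) -> veth -> OVS
ip link add qvb0 type veth peer name qvo0
brctl addbr qbr0
brctl addif qbr0 qvb0             # iptables (via br_netfilter) applies on qbr0
ovs-vsctl add-port ovsbr0 qvo0    # ovsbr0 is the existing OVS bridge
ip link set qbr0 up
ip link set qvb0 up
ip link set qvo0 up
# then point the guest's <interface type='bridge'> at qbr0 instead of ovsbr0
```

This is exactly the extra hop the question mentions, just managed by your own scripts rather than by Quantum; the per-VM veth pair is what makes the iptables rules attach to a plain Linux bridge port.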
Thanks,
Regards,
--
Université de Nantes - Direction des Systèmes d'Information
IM jabber: yoann.juet(a)univ-nantes.fr