[libvirt-users] Libvirt with multipath devices and/or FC on NPIV
by C.D.
Hello,
I am trying to find out the best practice for a specific scenario.
First of all, I would like to know the proper way to set up multipath: who should take care of it, the host or the guest? Right now I have a setup with one multipath device from which the host boots off the FC SAN. I have another multipathed LUN on the host, essentially a device-mapper device, which I attached to a guest, though via virtio. I added the devices through virsh with a pool of type "mpath" and a path under /dev/mapper/. So here is my first question: with such a setup, multipathing and eventual fail-over are handled by the host OS, right? The guest will not notice anything, I suppose.
But what about migration? What if I decide to migrate the guest to another host; how would that work out? With a shared directory it is easy, you just keep the images in it, but what about a setup like this? I can always add the aforementioned LUN, where the guest resides, to another Storage Group on the SAN that the new host can access, and I can make sure the mpath device name is persistent across all hosts, but is that the right approach?
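For reference, the mpath pool from the first question is essentially this (a sketch; the pool name is arbitrary, and /dev/mapper is the path the mpath backend scans for multipath maps):
<pool type='mpath'>
  <name>mpath</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>
After virsh pool-define and pool-start, the multipath maps show up as volumes that can be handed to a guest as virtio disks.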
Here is another question that is bugging me. I have FC HBAs on all hosts and I would like to make an HBA visible to my guests. I stumbled upon a bug, documented in Red Hat's Bugzilla, where the NPIV vport ended up being created on pci_0000_blah_blah instead of on the scsi_hostX device, but I fixed that easily in the XML. Although virsh doesn't seem to think my nodedev-create NPIV_for_my_FC.xml completed, the device is seen as a child in nodedev-list and, more importantly, the WWNs are visible on the SAN switch.
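The fixed XML I fed to nodedev-create was along these lines (a sketch; the parent scsi_host number and the WWNs are placeholders):
<device>
  <parent>scsi_host5</parent>
  <capability type='scsi_host'>
    <capability type='fc_host'>
      <wwnn>20000000c9848140</wwnn>
      <wwpn>10000000c9848140</wwpn>
    </capability>
  </capability>
</device>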
So this seems to be working OK; however, I don't know how to attach this shiny new device to a guest. Could someone give me a hint? What is the proper way to attach this new device to a guest OS, and to do so persistently? Do I do that with virsh's attach-device, and if so, what is the proper XML format? Should I dump the XML of the newly created NPIV nodedev and try to attach that?
And again with multipathing: what if I create NPIV vports on both FC cards in every host, do the zoning, and attach those newly created NPIV nodes to the guests? Will that give the guests the same multipathing that such a setup provides on the host? It doesn't really matter whether I would use it for a root drive or for shared GFS2 storage between guests; I just want to know whether this is possible and the right thing to do, or whether I should stick to the setup from my previous paragraph.
And a last question: what about migration with such NPIV FC devices? I can move a guest around, but will that move my NPIV FC with its WWNs, so that I can keep using my zoning? (I think I just realized this might be impossible, because I'm migrating the guest, not the entire libvirt setup, but maybe I'm simply not aware of the proper method to do it correctly.)
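For the attach part, the only shape I can picture is exposing a LUN that appears behind the new vHBA to the guest as a block disk, something like this (the source path is a placeholder; by-id or by-path naming would be more stable than sdX):
<disk type='block' device='disk'>
  <driver name='qemu'/>
  <source dev='/dev/disk/by-id/scsi-PLACEHOLDER'/>
  <target dev='vdb' bus='virtio'/>
</disk>
attached with something like "virsh attach-device myguest lun.xml" (guest and file names made up), plus editing the persistent definition so it survives a restart.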
These questions are probably easy to answer, but please bear with me: I have been playing with libvirt in an enterprise setup only for the last couple of days, and I would really like to get it right and kick some VMware ass.
By the way, all my hosts run Fedora 14 with kernel 2.6.37 (a rebuild of the Fedora 15 one from koji, because there was a small glitch with QLogic's driver in the stock 2.6.35). Most of my guests will be SL6 and CentOS 5/6 (when it arrives), plus a couple of Windows XP guests (but I'm not really concerned with those).
Thanks in advance for the support.
P.S. Please keep me in CC, as I'm not on the list. Thank you.
[libvirt-users] Issue creating vm channel
by SAURAV LAHIRI
Hello,
I am trying to create a Lucid guest VM with a vmchannel between the guest and the hypervisor. Details below:
KERNEL:
# uname -a
Linux saurav-desktop 2.6.38-2-generic #29~lucid1-Ubuntu SMP Mon Feb 7 13:35:14 UTC 2011 i686 GNU/Linux
QEMU:
# /usr/bin/qemu --version
QEMU emulator version 0.13.0, Copyright (c) 2003-2008 Fabrice Bellard
LIBVIRT:
# /usr/sbin/libvirtd --version
/usr/local/sbin/libvirtd (libvirt) 0.8.8
Below is a snippet of my libvirt XML file.
===============================================================
<domain type='kvm' id='16'>
  <name>FinalTest</name>
  <uuid>212bc1f1-b4f7-1ed3-7dc3-2b4b82ab68d1</uuid>
  <memory>1048576</memory>
  <currentMemory>1048576</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-0.12'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <controller type='virtio-serial' index='0' max_ports='15' vectors='4'/>
    <channel type='pty'>
      <target type='virtio' name='org.linux-kvm.port.1'/>
      <address type='virtio-serial' controller='0' bus='0'/>
    </channel>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='/home/user1/linux_installed/image.cow2'/>
      <target dev='hda' bus='ide'/>
      <alias name='ide-disk0'/>
      <address type='drive' controller='0' bus='0' unit='0'/>
    </disk>
================================================================
When I submit the XML file to virsh, I get the following error:
===============================================================
# virsh create vmTemplete-1.xml
error: Failed to create domain from vmTemplete-1.xml
error: internal error Process exited while reading console log output: char device redirected to /dev/pts/3
char device redirected to /dev/pts/4
open /dev/kvm: No such file or directory
Could not initialize KVM, will disable KVM support
kvm: -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.linux-kvm.port.1: virtio-serial-bus: Out-of-range port id specified, max. allowed: 0
kvm: -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.linux-kvm.port.1: Device 'virtserialport' could not be initialized
===========================================================
What could be the problem? Could anyone please point me in the right direction?
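(Two hedged observations, in case they help: the "Out-of-range port id ... max. allowed: 0" message suggests the virtio-serial controller ended up with zero ports, and the libvirt domain XML documentation spells the attribute ports=, not max_ports=, so a controller line like the one below may behave differently; and "open /dev/kvm: No such file or directory" usually just means the kvm kernel module is not loaded, e.g. modprobe kvm_intel or modprobe kvm_amd.)
<controller type='virtio-serial' index='0' ports='16' vectors='4'/>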
Regards
sl
[libvirt-users] dnsmasq not started when no dhcp enabled?
by jbd
Hello everybody,
I've defined a simple network with no DHCP. I'd like to use dnsmasq only as a DNS server.
$ virsh net-dumpxml basicswitch
<network>
  <name>basicswitch</name>
  <uuid>60f491d2-d6c4-6b57-8a50-081cace8dedc</uuid>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
  </ip>
</network>
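(For comparison, the only form I know for certain spawns dnsmasq is one with a <dhcp> element inside <ip>; a sketch, with a placeholder address range:)
<ip address='192.168.100.1' netmask='255.255.255.0'>
  <dhcp>
    <range start='192.168.100.100' end='192.168.100.200'/>
  </dhcp>
</ip>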
I start it:
$ virsh net-start basicswitch
Network basicswitch started
$ virsh net-list
Name                 State      Autostart
-----------------------------------------
basicswitch          active     no
But I see no dnsmasq process:
$ ps fax|grep dnsmas[q]
Here is the output of "virsh version":
$ virsh version
Compiled against library: libvir 0.8.8
Using library: libvir 0.8.8
Using API: QEMU 0.8.8
Running hypervisor: QEMU 0.12.5
What is funny is that the dnsmasq process is created on another box with a different version:
$ virsh version
Compiled against library: libvir 0.8.3
Using library: libvir 0.8.3
Using API: QEMU 0.8.3
Running hypervisor: QEMU 0.12.5
$ ps fax|grep dnsmas[q]
7104 ? S 0:00 dnsmasq --strict-order --bind-interfaces
--pid-file=/var/run/libvirt/network/basicswitch.pid --conf-file=
--listen-address 192.168.100.1 --except-interface lo
Any advice on this?
Regards,
Jean-Baptiste
[libvirt-users] libvirt configuration problem
by Onkar Mahajan
Hi,
I am getting errors while configuring libvirt to build it from source. The failure is in the libnl check:
checking for UDEV... no
checking whether to compile with macvtap support... yes
checking whether to compile with virtual port support... no
checking for LIBNL... no
configure: error: libnl-devel >= 1.1 is required for macvtap support
I have already installed the libnl library in /lib/:
[root@localhost lib]# ls libnl*
libnl.a        libnl.la        libnl.so        libnl.so.2        libnl.so.2.0.0
libnl-cli.a    libnl-cli.la    libnl-cli.so    libnl-cli.so.2    libnl-cli.so.2.0.0
libnl-genl.a   libnl-genl.la   libnl-genl.so   libnl-genl.so.2   libnl-genl.so.2.0.0
libnl-nf.a     libnl-nf.la     libnl-nf.so     libnl-nf.so.2     libnl-nf.so.2.0.0
libnl-route.a  libnl-route.la  libnl-route.so  libnl-route.so.2  libnl-route.so.2.0.0
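(For what it's worth, configure finds libnl through pkg-config rather than by scanning /lib, and the listing above looks like libnl 2.x while the check asks for libnl >= 1.1; the 1.x and 2.x APIs are incompatible. A sketch of what I'd check; package and module names are assumptions and vary by distro:)
# does pkg-config know about a 1.x libnl? (the configure check uses the libnl-1 module)
pkg-config --modversion libnl-1
# on Fedora/RHEL the headers and the .pc file ship in the devel package
yum install libnl-devel
# if the .pc file is in a non-standard prefix, tell pkg-config where it is
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH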
Please help me!
Regards,
Onkar
[libvirt-users] libvirt/kvm/qemu: pointopoint routed setup?
by Andreas Jellinghaus
Hi,
Can anyone give an example of a pointopoint routed setup? I.e., each virtual machine has one IP with a pointopoint config toward the host machine, on a private interface; thus the virtual machines can only talk to the host, which routes for them and can apply normal iptables filtering to all their traffic.
With Xen this was a simple script doing
ifconfig ${vif} ${main_ip} netmask 255.255.255.255 up
ip route ${ipcmd} ${addr} dev ${vif} src ${main_ip}
and an optional
echo 1 >/proc/sys/net/ipv4/conf/${vif}/proxy_arp
Has someone implemented something like this with a libvirt/kvm setup? Can you give some pointers on how to do it?
I don't need high-speed communication between the virtual machines, and the option to filter all traffic between them (without resorting to the bridge netfilter tables) would be nice.
Or is there a reason not to use such a setup, and a better way to implement this?
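(One shape this might take, assuming your libvirt supports <interface type='ethernet'> with an ifup script; the device name, script path and addresses are all placeholders:)
<interface type='ethernet'>
  <script path='/etc/libvirt/qemu-ifup-ptp'/>
  <target dev='ptp0'/>
</interface>
with /etc/libvirt/qemu-ifup-ptp mirroring the Xen script:
#!/bin/sh
# $1 is the tap device qemu hands to the script
ip link set "$1" up
ip addr add 192.168.200.1/32 dev "$1"       # host-side point-to-point address (placeholder)
ip route add 192.168.200.10/32 dev "$1"     # the guest's address (placeholder)
echo 1 > /proc/sys/net/ipv4/conf/"$1"/proxy_arp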
Thanks for your help and best regards,
Andreas Jellinghaus
[libvirt-users] Migrating existing debian system to KVM/libvirt
by Randall Webstuff
I have a dying machine running Debian etch that acts as a web/mail server, and I would like to virtualize it using Debian/KVM/libvirt. Is it as simple as moving the hard disk (multiple ext2/ext3 partitions) from the dying machine into my newer server and then writing a custom XML config file? Is there a tutorial on doing this somewhere?
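(If it does work that way, the disk part of the XML would presumably hand the whole physical disk to the guest as a block device; a sketch, with the device path a placeholder for wherever the old disk shows up on the new host:)
<disk type='block' device='disk'>
  <driver name='qemu'/>
  <source dev='/dev/sdb'/>
  <target dev='hda' bus='ide'/>
</disk>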
Thanks.
Randy
[libvirt-users] client certificate path hard coded?
by Anthony Goddard
Hi all,
I'm trying to figure out how to get my desktop talking to two libvirt hosts using qemu+tls, and I've read that virsh relies on hard-coded paths to the certificates, which seems to be true.
Is there a way to tell virsh to use a different path to a certificate, or is there another way people currently solve this?
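(The one knob I'm aware of, worth verifying against the remote driver docs for your version, is the pkipath URI parameter, which points the client at a directory containing cacert.pem, clientcert.pem and clientkey.pem; the host and path below are placeholders:)
virsh -c 'qemu+tls://host1.example.com/system?pkipath=/home/ant/pki/host1' list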
Cheers,
Ant
[libvirt-users] unable to connect to libvirtd at '*': No route to host
by Christoph Raible
Hi all,
I have the following system:
AMD processor
Scientific Linux 6.0
Kernel version: 2.6.32-71.18.2.el6.x86_64
KVM/libvirt from the repository
I've set up my TLS certificates following this howto:
http://wiki.libvirt.org/page/TLSSetup
Now I want to connect to the servers with
virsh -c qemu+tls://nebula3/system
(nebula3 is the hostname), but the following error message is always displayed:
error: unable to connect to libvirtd at 'nebula3': No route to host
error: failed to connect to the hypervisor
SELinux is disabled and the servers are in the same IP range (ping works fine).
Passwordless SSH login also works fine!
If I connect to a server with
qemu+ssh://nebula3/system
everything works fine...
Installation with virt-manager and a bridged network interface also works fine...
Does anyone have an idea where my problem is?
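(A sketch of what I would rule out first, assuming "No route to host" here really means a firewall reject rather than missing routing; 16514 is libvirt's default TLS port, and on SL6 libvirtd only listens if started with --listen, e.g. LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd, plus listen_tls enabled in libvirtd.conf:)
# on the server: is anything listening on the TLS port?
netstat -tlnp | grep 16514
# from the client: can we reach that port?
telnet nebula3 16514
# if not, open the port in the server's firewall (iptables example)
iptables -I INPUT -p tcp --dport 16514 -j ACCEPT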
Thanks for Help :)
Regards,
Christoph