[libvirt-users] How to prevent IP-spoofing/traffic sniffing by the guest machines?
by guido
Hi,
In a libvirt/KVM setup, what is the best way to prevent untrusted guest
machines from (a) sending packets with a sender address different from their
own and (b) reading packets not intended for them?
Note that guests may have any number of IP addresses they are allowed to use
legitimately.
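(A sketch of one approach for (a): newer libvirt releases, 0.8.1 and later, ship an nwfilter driver whose built-in clean-traffic filter drops traffic with spoofed source addresses. The filter name is real; the bridge name, MAC, and IP addresses below are placeholders, and depending on the version you may need one variable per legitimate address:)

```xml
<!-- In the guest's domain XML: reference the built-in clean-traffic
     filter, which drops packets whose source MAC/IP differ from the
     guest's own declared addresses. -->
<interface type='bridge'>
  <source bridge='br0'/>
  <mac address='52:54:00:aa:bb:cc'/>
  <filterref filter='clean-traffic'>
    <parameter name='IP' value='192.0.2.10'/>
    <parameter name='IP' value='192.0.2.11'/>
  </filterref>
</interface>
```

For (b), a Linux bridge only floods frames with unknown destinations, but putting untrusted guests on separate bridges or VLANs is the safer isolation boundary.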
Guido
--
Too much multitasking isn't good for you.
14 years, 7 months
[libvirt-users] Full documentation for virsh?
by David Ehle
Hello,
Does anyone know where full documentation for virsh can be found? Or whether
there are any plans to update the man page?
The man page lists far fewer commands than virsh itself lists in interactive
mode when you type help.
I noticed this when I was trying to figure out the difference between
pool-destroy and pool-delete. The help entries in the virsh shell are not
very informative:
virsh # help pool-destroy
NAME
pool-destroy - destroy a pool
SYNOPSIS
pool-destroy <pool>
DESCRIPTION
Destroy a given pool.
OPTIONS
<pool> pool name or uuid
virsh # help pool-delete
NAME
pool-delete - delete a pool
SYNOPSIS
pool-delete <pool>
DESCRIPTION
Delete a given pool.
OPTIONS
<pool> pool name or uuid
... unless you already know how virsh defines delete vs destroy.
(anyone care to explain it to me?)
The word pool does not occur in the virsh man page, at least in the Ubuntu
Karmic release.
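(For what it's worth, a sketch of the distinction as I understand it; the commands are real virsh commands, the pool name is made up:)

```shell
# "destroy" = deactivate: stops the pool but leaves its data intact;
# the pool can be activated again later with pool-start.
virsh pool-destroy mypool

# "delete" = remove the underlying storage resources themselves
# (e.g. the directory contents for a dir pool). Destructive!
virsh pool-delete mypool

# "undefine" = forget the (inactive) pool's libvirt configuration.
virsh pool-undefine mypool
```

The same destroy/undefine split applies to domains: "destroy" hard-stops a running domain, "undefine" removes its configuration.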
Thanks for the great list!
[libvirt-users] libvirtd loses all of its data after restart
by guido
Hi,
I'm having a problem with libvirtd (the backend being kvm) losing all of its
data when I restart it with /etc/init.d/libvirtd restart.
What I did was:
Start libvirtd
Connect to it using virsh
Create a new storage pool with pool-create-as
Create some volumes with vol-create-as
Create some virtual machines with create
Restart libvirtd using /etc/init.d/libvirtd restart
Reconnect with virsh
After this, all the previously defined pools, volumes and virtual machines
would no longer be listed with pool-list or list, respectively. (list --all
doesn't show anything either)
The virtual machines I had started earlier are still running and I can still
connect to them using vnc, but obviously, I can no longer manage them using
libvirt.
Why is this happening? Is libvirtd's data not supposed to be persistent? Is
this maybe just an old bug? How can I get the still running virtual machines
back under control?
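(In case it helps anyone hitting the same thing: in libvirt, the create verbs build transient objects, while the define verbs create persistent configurations that survive a daemon restart. A sketch of the persistent variants of the steps above, with made-up names and paths:)

```shell
# Persistent pool: define (config is written under /etc/libvirt/storage/),
# then start it; optionally autostart it on boot.
virsh pool-define-as mypool dir - - - - /var/lib/libvirt/images
virsh pool-start mypool
virsh pool-autostart mypool

# Persistent domain: define from XML, then start it by name.
virsh define /path/to/guest.xml
virsh start guest
```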
The system I'm using for this is a recently installed CentOS 5.4 with
libvirt-0.6.3-20.1.el5_4 installed directly from its package repository. The
only changes I made from the vanilla install was to install and configure some
SSL certs in /etc/libvirt/libvirtd.conf, as described on the libvirt website
and to enable LIBVIRTD_ARGS="--listen" in /etc/sysconfig/libvirtd.
Guido
--
Too much multitasking isn't good for you.
[libvirt-users] weird "no supported architecture for os type 'hvm'"
by Claude Noshpitz
Hi,
I have two identical machines with exactly the same CentOS 5.4 kickstarted
from the same sources, and the same packages installed. Their cpuinfos are
identical, and they are running libvirt 0.6.3 with the very same
libvirtd.conf.
Yet...
One of them works fine, the other says "no supported architecture for os
type 'hvm'" for the same template (attached) and image.
Any thoughts on what kind of defect or misconfiguration could cause this?
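(Not an answer, but a sketch of the checks I'd run on both boxes to find where they diverge; these are standard commands, nothing assumed beyond `virsh capabilities`:)

```shell
# Are CPU virtualization extensions visible to the kernel?
egrep -c '(vmx|svm)' /proc/cpuinfo

# Are the KVM kernel modules loaded, and is the device node present?
lsmod | grep kvm
ls -l /dev/kvm

# Does libvirt itself report an hvm guest type?
virsh capabilities | grep -A 3 'os_type>hvm'
```

A classic cause of exactly this symptom on "identical" hardware is the VT/AMD-V option being disabled in the BIOS on one machine, which no amount of identical software will show.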
Thanks much!
--Claude
[libvirt-users] Building & Installing on OpenSolaris
by gary mazzaferro
Hi,
I'm revisiting my libvirt install for OpenSolaris. I downloaded libvirt
0.8.0.
This is a bit embarrassing, but I forgot how to build libvirt on OpenSolaris
(it's been a while). I'm currently stuck with the configure script failing
to find Linux kernel headers.
Error:
"configure: error: You must install kernel-headers in order to compile
libvirt"
Can anyone post the instructions to build under opensolaris?
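(From memory, so treat this as a guess to verify rather than a recipe: the kernel-headers check comes from the Linux-only drivers, which you'd disable at configure time on OpenSolaris. The flag names below existed in libvirt's configure script around that era, but check `./configure --help` in your tree:)

```shell
# Disable the drivers that need Linux kernel headers.
./configure --without-qemu --without-lxc --without-network
```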
cheers,
gary
[libvirt-users] libvirt with qemu-kvm, not recognizing NIC model
by Jonathan Hoover
Is this the right list for this question, or should I be elsewhere?
I am trying to specify a network card "model type" of "pcnet" (to
emulate VMware ESXi's network card). No matter what I put for model type
in my XML config file, it comes up as an Intel e1000. I ran "qemu-kvm
-net nic,model=? /dev/null" and got back a list of supported models
including pcnet, e1000, virtio, and others, as expected. No matter which one I
put in my XML file /etc/libvirt/qemu/Symantec-bg.xml (a Symantec
Brightmail Gateway virtual machine), I just get back that it's an Intel
card on boot (which doesn't work with Symantec BG).
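(One thing worth ruling out, since it bites a lot of people: libvirt does not re-read the files under /etc/libvirt/qemu/ until you feed them back with `virsh define` (or edit them via `virsh edit`), so hand edits to those files are silently ignored. A sketch of the interface element; the bridge name and MAC are placeholders:)

```xml
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
  <!-- the model element selects the emulated NIC -->
  <model type='pcnet'/>
</interface>
```

After editing, apply the file with `virsh define /etc/libvirt/qemu/Symantec-bg.xml` and restart the guest.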
I am running Fedora 12, kernel is 2.6.31.6-166.fc12.x86_64. The version
of libvirt is 0.7.1. I am running it on an Intel Core 2 Quad CPU Q8400 @
2.66 GHz. I have the same problem under Fedora 13 Beta.
Thoughts?
Jonathan Hoover
[libvirt-users] using shared storage with libvirt/KVM?
by David Ehle
I apologize if this is a duplicate. I submitted it before I was a list
member and it looks like it went into limbo.
Dear Moderator, please discard the original if it is still in a moderation
queue.
Hello,
I've spent a few days googling and reading documentation, but I'm looking for
clarification and advice on setting up KVM/libvirt with shared storage.
I have 2 (for now) Ubuntu Karmic systems with KVM/virsh/virt-manager set up and
running.
I have a storage server that can do NFS/iSCSI/samba/ etc.
I am trying to figure out the best way to set things up so that I can run
several (eventually) production linux guests (mostly debian stable) on the 2
KVM host systems and be able to migrate them when maintenance is required on
their hosts.
Performance is of medium importance; stability/availability is a pretty high
priority.
A common opinion seems to be that using LVs to hold disk images gives possibly
the best IO performance, followed by raw, then qcow2.
Would you agree or disagree with this? What evidence can you provide?
I am also rather confused about using shared storage, and what options can be
combined with this.
I have successfully made an iSCSI device available to libvirt/virsh/virt-manager
via an XML file + pool-define + pool-start. However, the documentation states
that while you can create pools through libvirt, volumes have to be
pre-allocated:
http://libvirt.org/storage.html
"Volumes must be pre-allocated on the iSCSI server, and cannot be created via
the libvirt APIs."
I'm very unclear on what this means in general, and specifically on how you
preallocate the volumes.
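(My reading of "pre-allocated": the LUNs have to be created on the target side with whatever your iSCSI target software provides; libvirt then just enumerates them as volumes. A hedged sketch using the Linux tgt userspace target; the target id, LUN number, and backing device are made up:)

```shell
# On the storage server: back a new LUN with an LV and attach it to an
# existing target (tid 1); initiators then see it as a new SCSI disk.
tgtadm --lld iscsi --op new --mode logicalunit \
       --tid 1 --lun 2 --backing-store /dev/vg0/guest01
```

On the libvirt side, `virsh pool-refresh <pool>` should then pick up the new LUN as a volume.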
When I make an iSCSI pool available via something like this:
<pool type="iscsi">
  <name>virtimages</name>
  <source>
    <host name="iscsi.example.com"/>
    <device path="demo-target"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
In virt-manager I can see it as a pool, but when I try to select it as the place
to create an image, it uses the pool AS A volume (I think).
That brings me to my next question/misunderstanding...
If you are using shared storage (via NFS or iSCSI), does that also mean you MUST
use a file-based image rather than an LVM LV? NFS makes directories available,
not devices. You can make an unformatted "raw" LV available via iSCSI, but it's
not seen on the far side as an LV but instead as a SCSI disk which you need to
partition. You can make a PV > VG > LV out of that and then make your new LV
available to libvirt/KVM, but then you're stacking a heck of a lot of
LVM/LVs/partition tables, which seems pretty dubious. I'm also not sure that
the stack would be visible/available to the second system using that iSCSI
device. (I'm pretty new to using iSCSI as well.)
Redhat provides pretty good documentation on doing shared storage/live
migration for NFS:
http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtua...
But unfortunately the section on Shared Storage with iSCSI is a bit lacking:
"9.1. Using iSCSI for storing guests
This section covers using iSCSI-based devices to store virtualized guests." and
that's it.
For maybe 6-10 guests, should I simply just be using NFS? Is its performance
that much worse than iSCSI for this task?
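(For the NFS route, the pool definition is straightforward; a sketch with made-up host and paths. For a netfs pool, libvirt mounts the export itself, and volumes are then just files it can create for you through the normal APIs:)

```xml
<pool type='netfs'>
  <name>guest-images</name>
  <source>
    <host name='storage.example.com'/>
    <dir path='/export/guest-images'/>
  </source>
  <target>
    <path>/var/lib/libvirt/images/guest-images</path>
  </target>
</pool>
```

Defining the same pool on both KVM hosts gives each of them the same view of the images, which is the precondition for migration.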
So, since KVM/libvirt is getting used quite often in production now, but some
simple things are not making any sense to me, I'm guessing that I have some
really basic misunderstandings of how to do shared storage / pools / volumes for
migration.
Would one of the kind readers of this list be willing to correct me?
Please CC to my original email address as well as I am not yet a regular
subscriber.
Thanks again!
David.
[libvirt-users] Issues after migrating from Xen to KVM
by Ralf Hornik Mailings
Dear list,
I have moved my HVMs from Xen to KVM and it worked well, except for some
problems using virsh.
First, one OpenSolaris HVM boots with warnings:
WARNING: /pci@0,0/pci1af4,1100@1,2 (uhci1): No SOF interrupts have been
received, this USB UHCI host controller is unusable
The corresponding process:
/usr/bin/qemu-system-x86_64 -S -M pc-0.12 -enable-kvm -m 1024 \
  -smp 1,sockets=1,cores=1,threads=1 \
  -name server01 -uuid e382c360-23bd-b400-0b89-9a1e69613ec4 -nographic \
  -nodefaults \
  -chardev socket,id=monitor,path=/usr/local/libvirt/var/lib/libvirt/qemu/server01.monitor,server,nowait \
  -mon chardev=monitor,mode=readline -rtc base=utc -boot c \
  -drive file=/dev/xen_vol/xen_opensol,if=none,id=drive-ide0-0-0,boot=on \
  -device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 \
  -drive file=/dev/hde,if=none,id=drive-ide0-0-1 \
  -device ide-drive,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 \
  -device rtl8139,vlan=0,id=net0,mac=00:16:3e:28:85:7a,bus=pci.0,addr=0x4 \
  -net tap,ifname=tap0,vlan=0,name=hostnet0 \
  -chardev pty,id=serial0 -device isa-serial,chardev=serial0 \
  -usb -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3
This does not happen when I boot with the plain qemu binary:
/usr/local/kvm/bin/qemu-system-x86_64 -hda /dev/xen_vol/xen_opensol \
  -hdb /dev/hde -net nic,macaddr=00:16:3e:28:85:7a \
  -net tap,ifname=tap0 -nographic \
  -m 1024 -daemonize
Second, one CentOS 5.2 HVM does not boot at all.
However, when I boot it manually and omit "-S" and "-nodefaults", it
comes up.
I looked at the -S option and I wonder why it is being used by default.
Can anybody help me? I cannot find any further documentation about the
"-nodefaults" option or how to disable "-S".
Best regards
Ralf
Re: [libvirt-users] Windows Vista Client Fails to Connect to Ubuntu libvirtd
by Matthias Bolte
2010/4/30 Tim McLeod <tim.mcleod(a)simulamen.eu>:
> Requesting urgent assistance if I may?
>
> Attempting to connect to an Ubuntu machine using a MinGW-compiled virsh on
> a Windows Vista machine. Using insecure TCP simply to prove a concept to a
> client. However, cannot connect; situation as follows:
>
> Edited /etc/libvirt/libvirtd.conf as follows:
> listen_tcp = 1
> auth_tcp = "none"
>
> Edited /etc/default/libvirt-bin as follows:
> libvirt-opts="-d -l"
Did you restart libvirtd after the changes and check that it started
successfully?
listen_tls defaults to 1 in /etc/libvirt/libvirtd.conf; in combination
with the -l flag (short for --listen), libvirtd then requires properly
configured SSL certificates. If libvirtd can't find the certificates
or fails to validate them, it won't start up.
You can disable TLS in /etc/libvirt/libvirtd.conf by setting listen_tls
= 0 and restarting libvirtd.
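A minimal configuration for the insecure-TCP experiment would then be (the same settings you already have, plus disabling TLS):

```
listen_tls = 0
listen_tcp = 1
auth_tcp = "none"
```

followed by a restart of the daemon (/etc/init.d/libvirt-bin restart on Ubuntu).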
> Running virsh on Vista client fails as follows:
> $ virsh -c qemu+tcp://192.168.1.101/system
> error: unable to connect to libvirtd at '192.168.1.101': errno=10061
> error: failed to connect to the hypervisor
>
> Running virsh on Ubuntu server fails as follows:
> $ virsh -c qemu+tcp:///system
> Connecting to uri: qemu+tcp:///system
> error: unable to connect to libvirtd at 'localhost': Connection refused
> error: failed to connect to the hypervisor
Once libvirtd is running and configured properly for TCP transport
then remote and local TCP access to libvirtd works for me.
> Indeed, with libvirt-opts="-d -l" in /etc/default/libvirt-bin the 'default'
> command also fails:
> $ virsh -c qemu:///system
> Connecting to uri: qemu:///system
> error: unable to connect to 'var/run/libvirt/libvirt-sock': Connection
> refused
> error: failed to connect to the hypervisor
>
Local access to libvirtd using qemu:///system requires properly
configured polkit, or running virsh as root. This is independent of
the -d and -l flags.
Matthias