[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually.
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
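For clarity, this is roughly what I would like to be able to do (a hypothetical
invocation; I have not found a syntax that works with the versions above, so I
am not sure the rbd pool type is even supported by them):
# virsh vol-create-as myrbdpool kvm01-storage 10G
# virt-install --name ceph-test.powercraft.nl --import \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio \
    ... remaining options as usual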
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate MAC
addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
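To make the failure mode concrete, here is a toy sketch (Python, purely
illustrative - libvirt's implementation is C and uses its own generator): two
daemons started in the same second with the same PID end up with the same seed
and therefore the same "random" MAC suffixes.
import random
import time

def fake_mac(seed):
    # stand-in for MAC generation, seeded the same way as described above
    rng = random.Random(seed)
    return "52:54:00:%02x:%02x:%02x" % tuple(rng.randrange(256) for _ in range(3))

now = int(time.time())
print(fake_mac(now ^ 4242))  # host A: libvirtd PID 4242
print(fake_mac(now ^ 4242))  # host B: same second, same PID -> identical MAC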
[libvirt-users] stream finish throws exception via python API
by Shahar Havivi
Hi,
The following snippet works fine for receiving the data, but when calling
stream.finish() we get the following error:
stream = con.newStream()
vol.download(stream, 0, 0, 0)
buf = stream.recv(1024)
stream.finish()
libvirt: I/O Stream Utils error : internal error: I/O helper exited abnormally
Traceback (most recent call last):
File "./helpers/kvm2ovirt", line 149, in <module>
download_volume(vol, item[1], diskno, disksitems, pksize)
File "./helpers/kvm2ovirt", line 102, in download_volume
stream.finish()
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 5501, in finish
if ret == -1: raise libvirtError ('virStreamFinish() failed')
libvirt.libvirtError: internal error: I/O helper exited abnormally
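In case the problem is that finish() must not be called while data is still
pending, this is the kind of loop I would try instead (an untested sketch; it
assumes recv() returns an empty buffer once the end of the stream is reached,
and the target path is just a placeholder):
stream = con.newStream()
vol.download(stream, 0, 0, 0)
with open("/tmp/volume.img", "wb") as f:
    while True:
        buf = stream.recv(262144)
        if not buf:        # empty buffer: the stream has been fully drained
            break
        f.write(buf)
stream.finish()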
Am I doing something wrong?
Thank you,
Shahar.
[libvirt-users] Lifecycle of a connection to libvirtd
by Vincent Bernat
Hey!
I am trying to figure out how to reliably maintain a connection to
libvirtd. From the documentation, I would expect something like this:
- virConnectOpen()
- virConnectRegisterCloseCallback()
- virConnectSetKeepAlive()
- Application logic
And in the registered callback, I would:
- virConnectUnregisterCloseCallback()
- virConnectClose()
- virConnectOpen()
- virConnectRegisterCloseCallback()
- virConnectSetKeepAlive()
However, looking at the source code of virsh, I see that it does additional
stuff, notably:
- virConnectIsAlive()
- checking error codes of all calls to check if they are the result of
a disconnect
Are those steps needed? Randomly checking virConnectIsAlive() doesn't
seem reliable. Neither does checking individual error codes (maybe I will
miss one of them or misinterpret another one).
virsh code uses those error codes to check if there is a disconnection:
(((last_error->code == VIR_ERR_SYSTEM_ERROR) &&
(last_error->domain == VIR_FROM_REMOTE)) ||
(last_error->code == VIR_ERR_RPC) ||
(last_error->code == VIR_ERR_NO_CONNECT) ||
(last_error->code == VIR_ERR_INVALID_CONN))))
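To make the question concrete, this is roughly the pattern I have in mind (a
short Python sketch only - the real application is not necessarily Python, and
I may well be misusing the API):
import threading
import libvirt

def on_close(conn, reason, opaque):
    # reason is one of the VIR_CONNECT_CLOSE_REASON_* constants;
    # just flag the connection as dead and reconnect from the main loop
    opaque["dead"] = True

def connect(uri, state):
    conn = libvirt.open(uri)
    conn.registerCloseCallback(on_close, state)
    conn.setKeepAlive(5, 3)   # declare the link dead after ~15s of silence
    return conn

def event_loop():
    # keepalive pings and close callbacks only fire with a running event loop
    while True:
        libvirt.virEventRunDefaultImpl()

libvirt.virEventRegisterDefaultImpl()
t = threading.Thread(target=event_loop)
t.daemon = True
t.start()

state = {"dead": False}
conn = connect("qemu:///system", state)
# ... application logic; when state["dead"] becomes True, call connect() again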
Any hint?
--
Program defensively.
- The Elements of Programming Style (Kernighan & Plauger)
[libvirt-users] QEMU IMG vs Libvirt block commit
by Erlon Cruz
Hi folks,
I'm having an issue with the standard NFS driver in OpenStack, which uses
qemu-img and libvirt to create snapshots of volumes. It uses qemu-img on
the Controller to manage the snapshots when the volume is not attached
(offline), or calls the Compute node (which calls libvirt) to manage snapshots
when the volume is attached (online). When I try to create/delete snapshots
in a snapshot chain, there are three situations:
1 - I create/delete the snapshots in Cinder (it uses qemu-img). It goes OK:
I can delete and I can create snapshots.
2 - I create/delete the snapshots in online mode (it uses libvirt). It
goes OK as well.
3 - I create the snapshots in Cinder (offline) and delete them in online
mode (using libvirt); then it fails with this message[1]:
libvirtError: invalid argument: could not find image
'volume-13cb83a2-880f-40e8-b60e-7e805eed76f9.d024731c-bdc3-4943-91c0-215a93ee2cf4'
in chain for
'/opt/stack/data/nova/mnt/a3b4c6ddd9bf82edd4f726872be58d05/volume-13cb83a2-880f-40e8-b60e-7e805eed76f9'
But the backing files are there in that folder[2], and they are
chained the way I think they are supposed to be[3].
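In case it helps to reproduce, the chain on each node can be inspected with
something like this (--backing-chain needs a reasonably recent qemu-img;
<instance> is the Nova instance name):
# qemu-img info --backing-chain /opt/stack/data/nova/mnt/a3b4c6ddd9bf82edd4f726872be58d05/volume-13cb83a2-880f-40e8-b60e-7e805eed76f9
# virsh domblklist <instance> --details
# virsh dumpxml <instance> | grep -A 3 '<backingStore'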
The version for the 2 hosts in tests are:
Controller/Cinder node: qemu-img version 2.0.0 && libvirt 1.2.2-0ubuntu13
Compute node: qemu-img version 2.5.0 && libvirt-1.3.1-1ubuntu10
Is there any compatibility problem between libvirt and qemu-img snapshots?
Have you guys found any problem like that?
Erlon
----------------------------------
[1] http://paste.openstack.org/show/543270/
[2] http://paste.openstack.org/show/543273/
[3] http://paste.openstack.org/show/543274/
[libvirt-users] NPIV storage pools do not map to same LUN units across hosts.
by Nitesh Konkar
Link: http://wiki.libvirt.org/page/NPIV_in_libvirt
Topic: Virtual machine configuration change to use vHBA LUN
There is an NPIV storage pool defined on two hosts, and the pool contains a
total of 8 volumes allocated from a storage device.
Source:
# virsh vol-list poolvhba0
Name Path
------------------------------------------------------------------------------
unit:0:0:0 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000366
unit:0:0:1 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000367
unit:0:0:2 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000368
unit:0:0:3 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000369
unit:0:0:4 /dev/disk/by-id/wwn-0x6005076802818bda300000000000036a
unit:0:0:5 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000380
unit:0:0:6 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000381
unit:0:0:7 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000382
--------------------------------------------------------------------
Destination:
--------------------------------------------------------------------
# virsh vol-list poolvhba0
Name Path
------------------------------------------------------------------------------
unit:0:0:0 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000380
unit:0:0:1 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000381
unit:0:0:2 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000382
unit:0:0:3 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000367
unit:0:0:4 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000368
unit:0:0:5 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000366
unit:0:0:6 /dev/disk/by-id/wwn-0x6005076802818bda300000000000036a
unit:0:0:7 /dev/disk/by-id/wwn-0x6005076802818bda3000000000000369
--------------------------------------------------------------------
As you can see in the above output, the same set of eight LUNs from the
storage server has been mapped on both hosts, but the order in which the LUNs
are probed on each host is different, resulting in different unit names on
the two hosts.
If the guest XML references its storage by "unit" number, is it safe to
migrate such guests? The "unit" number is assigned by the driver according to
the specific way it probes the storage, so when you migrate these guests it
can result in different unit names on the destination host. The migrated
guest then gets mapped to the wrong LUNs and is given the wrong disks.
The problem is that the LUN numbers on the destination host and the source
host do not agree. For example, LUN 0 on source_host may be LUN 5 on
destination_host. When the guest is given the wrong disk, it suffers a fatal
I/O error (manifested as fatal I/O errors, since the guest has no idea that
its disks were just changed out from under it). The migration does not take
into account that the unit numbers do not match on the source and
destination sides.
So, should libvirt make sure that guest domains reference NPIV pool volumes
by their globally unique WWN instead of by "unit" numbers? Currently the
guest XML references its storage by "unit" number, e.g.:
<disk type='volume' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source pool='poolvhba0' volume='unit:0:0:0'/>
  <backingStore/>
  <target dev='vdb' bus='virtio'/>
  <alias name='virtio-disk1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
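For comparison, referencing the LUN by its stable by-id path instead would
look roughly like this (a sketch of the idea only, not a tested configuration;
the WWN is taken from the vol-list output above):
<disk type='block' device='lun'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/disk/by-id/wwn-0x6005076802818bda3000000000000366'/>
  <target dev='vdb' bus='virtio'/>
</disk>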
I am planning to write a patch for it. Any comments on the above
observation/approach would be appreciated.
Thanks,
Nitesh.
[libvirt-users] Live Disk Backup
by Prof. Dr. Michael Schefczyk
Dear All,
Using CentOS 7.2.1511 and libvirt from the oVirt repositories (currently 1.2.17-13.el7_2.5, but without otherwise using oVirt), I am regularly backing up my VMs, which are on qcow2 files. In general, I am trying to follow http://wiki.libvirt.org/page/Live-disk-backup-with-active-blockcommit
A typical backup script would be:
#!/bin/bash
dt=`date +%y%m%d`
if virsh dominfo dockers10a | grep -q -E '^Status: *laufend|^State: *running'
then
virsh snapshot-create-as --domain dockers10a dockers10a --diskspec vda,file=/home/dockers10asnap.qcow2 --disk-only --no-metadata --atomic
cp /kvm01/dockers10a.qcow2 /backup/dockers10a$dt.qcow2
virsh blockcommit dockers10a vda --active --verbose --pivot
virsh snapshot-delete dockers10a dockers10a
rm /home/dockers10asnap.qcow2
fi
I am fully aware that the third line from the end, "virsh snapshot-delete ...", will fail under regular circumstances. It is just there as a precaution, to delete unnecessary snapshots should a previous backup have failed.
For some time I have been noticing that the backup occasionally fails in such a way that the XML definition of the backed-up VM keeps the temporary file (in the example above /home/dockers10asnap.qcow2) as the disk source. Then, at least upon rebooting the host, it is unable to restart the VM. In addition, lots of other trouble can arise (subsequent backups failing, storage issues).
I am using a similar setup on four hosts. It seems that the better the host's resources, the lower the likelihood of the problem occurring - but that cannot be an acceptable state.
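One idea I have been considering is to only remove the overlay once the pivot has verifiably happened (an untested sketch; it assumes virsh domblklist reports the active source file):
if virsh blockcommit dockers10a vda --active --verbose --pivot; then
    # only clean up if vda really points back at the base image
    if virsh domblklist dockers10a | grep -q '/kvm01/dockers10a.qcow2'; then
        rm /home/dockers10asnap.qcow2
    fi
else
    echo "blockcommit/pivot failed, keeping /home/dockers10asnap.qcow2" >&2
fi
But I would prefer to understand why the pivot fails in the first place.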
Can someone please point me to how to avoid this?
Regards,
Michael Schefczyk
[libvirt-users] Network without forward mode
by Vincent Bernat
Hey!
Another question. The documentation about networks says:
╭─────┤ http://libvirt.org/formatnetwork.html#elementsConnect ├─────
│Inclusion of the forward element indicates that the virtual network is
│to be connected to the physical LAN. Since 0.3.0. The mode attribute
│determines the method of forwarding. If there is no forward element, the
│network will be isolated from any other network (unless a guest
│connected to that network is acting as a router, of course).
╰─────
That's exactly what I want: just a vnet interface, no bridge, no
routing, no forwarding. However, if I create a network with just that:
#v+
<network>
<name>public</name>
<uuid>4629ba54-9e33-4a1f-9e45-78a1c8faaddc</uuid>
</network>
#v-
libvirt (2.0.0) adds a bridge stanza:
#v+
<network>
<name>public</name>
<uuid>4629ba54-9e33-4a1f-9e45-78a1c8faaddc</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:15:45:da'/>
</network>
#v-
The bridge is created. If I spawn a VM attached to this network, it gets
added to the bridge. Any way to have a network where absolutely no setup
is done?
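The closest thing I can think of is to skip the network definition entirely
and give each guest a bare tap device with <interface type='ethernet'>,
roughly as below - but that is a per-guest workaround rather than a network
with no setup, and I may be misreading that element:
<interface type='ethernet'>
  <script path='/bin/true'/>   <!-- no-op script: leave all host-side setup undone -->
  <model type='virtio'/>
</interface>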
Thanks!
--
Use the "telephone test" for readability.
- The Elements of Programming Style (Kernighan & Plauger)
[libvirt-users] Routing isolated network
by Erwin Straver
I want to create a network like this:
Internet -- physical router -- host (network 192.168.178.x)
                                 -- virtual machine dmz -- eth0 (connected to physical router)
                                                        -- eth1 (connected to isolated network 10.0.0.x)
                                 -- virtual machine www -- eth0 (connected to isolated network 10.0.0.x)
[image: network design] <http://i.stack.imgur.com/QoCz9.png>
I have a virtual host which is connected to my physical router with eth0 and
IPv4 address 192.168.178.100. I created a virtual machine dmz which connects
'direct' to my router via the physical device eth0 on the virtual host:
<network connections='1'>
  <name>direct</name>
  <uuid>379d4687-445e-4bc6-8354-b555c7f18b15</uuid>
  <forward dev='eth0' mode='bridge'>
    <interface dev='eth0' connections='1'/>
  </forward>
</network>
On my virtual machine I created a second NIC, eth1, which is connected to a
virtual network virbr-local:
<network>
  <name>local</name>
  <uuid>d31b2e0d-810b-4ba0-8ac4-02bc53746142</uuid>
  <bridge name='virbr-local' stp='on' delay='0'/>
  <mac address='52:54:00:92:06:5c'/>
  <domain name='local.box'/>
  <dns>
    <forwarder addr='192.168.178.1'/>
  </dns>
  <ip address='10.0.0.1' netmask='255.0.0.0'>
    <dhcp>
      <range start='10.0.0.100' end='10.0.0.255'/>
      <host mac='52:54:00:51:31:86' ip='10.0.0.30'/>
    </dhcp>
  </ip>
  <route address='10.0.0.0' prefix='8' gateway='10.0.0.30'/>
</network>
Now I want to create a second virtual machine which connects to the
internet through the virtual machine dmz on the virbr-local subnet. Is
there a way to accomplish this kind of setup?
My routing table on the virtual host looks like this:
Destination     Gateway     Genmask         Flags Metric Ref Use Iface
default         fritz.box   0.0.0.0         UG    0      0   0   eth0
10.0.0.0        *           255.0.0.0       U     0      0   0   virbr-local
10.0.0.0        10.0.0.30   255.0.0.0       UG    1      0   0   virbr-local
192.168.178.0   *           255.255.255.0   U     0      0   0   eth0
But when I try to ping an address from the www virtual machine, I get an
unreachable network message. I set up a DNAT on the virtual machine dmz, but
looking with tcpdump on eth1 there is no traffic. I would appreciate some
help setting up the network; I am clearly missing something.
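For completeness, this is the kind of configuration I assume is needed on the
two guests (a sketch only, using the interface names and addresses above; I
may be missing exactly the step I am asking about):
# on the dmz guest (10.0.0.30): enable forwarding and NAT towards eth0
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth0 -j MASQUERADE
# on the www guest: use the dmz guest as default gateway
ip route add default via 10.0.0.30 dev eth0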
Re: [libvirt-users] How can I run command in containers on the host?
by Daniel P. Berrange
On Tue, Jul 26, 2016 at 05:19:22PM +0800, John Y. wrote:
> Hi Daniel,
>
> I forgot to tell you that I am using mips64 Fedora. Does that have any effect
> on this case?
> 2016-07-26 09:05:59.634+0000: 16406: debug : virDomainLxcEnterNamespace:131
> : dom=0xaaad4067c0, (VM: name=fedora2,
> uuid=42b97e4d-54dc-41b4-b009-2321a1477a9a), nfdlist=0, fdlist=0xaaad4007c0,
> noldfdlist=(nil), oldfdlist=(nil), flags=0
> libvirt: error : Expected at least one file descriptor
> error: internal error: Child process (16406) unexpected exit status 125
This shows nfdlist=0, which means that virDomainLxcOpenNamespace
didn't provide any file descriptors. This in turn seems to suggest
that /proc/$PID/ns didn't contain any files.
Is this perhaps a misconfiguration of the mips kernel?
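A quick way to confirm would be something like this (substitute the
container's init PID; /proc/config.gz may not exist on your kernel):
# ls -l /proc/<container-init-pid>/ns
# zcat /proc/config.gz | grep -E 'CONFIG_(NAMESPACES|UTS_NS|IPC_NS|PID_NS|NET_NS)='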
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|