[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate MAC
addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
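The seed construction in (3) is easy to reproduce; a minimal sketch, using made-up timestamps and PIDs chosen to mirror the scenario above:

```shell
# libvirt's seed formula, per item 3 above: time(NULL) ^ getpid()
# All values below are hypothetical: two hosts restarted in the same
# second whose libvirtd processes happened to draw the same PID.
t=1430000000          # same boot second on host A and host B (assumed)
pid_a=1234            # libvirtd PID on host A (hypothetical)
pid_b=1234            # host B happens to draw the same PID
seed_a=$(( t ^ pid_a ))
seed_b=$(( t ^ pid_b ))
echo "host A seed: $seed_a"
echo "host B seed: $seed_b"
```

Two hosts that boot in the same second and whose libvirtd instances draw the same PID compute identical seeds, and therefore generate identical "random" MAC-address streams.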
6 years, 5 months
[libvirt-users] Freeze Windows Guests For Consistent Storage Snapshots
by Payes Anand
Hi,
Is it possible to freeze Windows guests for a consistent storage-level
snapshot?
I am using OpenStack Icehouse on CentOS 6.6
Hypervisor: KVM
Libvirt: 0.10.2
Qemu: 0.10.2
Guest OS: Windows 7 and Windows Server 2008
I was able to freeze Centos guests by issuing the command:
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-freeze"}'
For CentOS guests, I enabled access between compute nodes and guests
through a socket by setting the metadata parameter hw_qemu_guest_agent=yes
on the guest image, and then installing qemu-guest-agent inside the guest.
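For reference, the full agent-driven cycle on a CentOS guest looks roughly like this (guest ID is a placeholder; the snapshot step in the middle is whatever storage-level mechanism is in use):

```shell
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-freeze"}'
# ... take the storage-level snapshot here ...
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-thaw"}'
# optional sanity check: reports whether guest filesystems are frozen
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-status"}'
```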
What steps do I have to follow for Windows?
Regards,
Payes
9 years, 5 months
[libvirt-users] unable to edit existing snapshot
by NoxDaFox
Greetings,
due to a hardware failure I had to replace my workstation with one that
has a different CPU. I have a VM with several snapshots and I need to
revert to a specific one.
While reverting to it, I get an error due to unsupported CPU features.
Therefore, I try to edit the snapshot XML through the command:
virsh snapshot-edit <domain_name> <snapshot_name>
When I save the changes I get the error message:
error: internal error: unexpected domain snapshot <snapshot_name> already
exists
I tried looking around for a solution but didn't find any information
related to my problem.
Am I doing something wrong? It's quite critical for me to be able to revert
to that state of the machine. Is there any chance to do so via libvirt?
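One possible workaround, assuming the problem is only in the CPU definition stored in the snapshot metadata: dump the snapshot XML, fix it offline, drop the stored metadata, and redefine it. A sketch (domain and snapshot names are placeholders):

```shell
virsh snapshot-dumpxml <domain_name> <snapshot_name> > snap.xml
# edit snap.xml by hand to remove or relax the unsupported CPU features
virsh snapshot-delete <domain_name> <snapshot_name> --metadata
virsh snapshot-create <domain_name> snap.xml --redefine
```

This is only a sketch; whether the revert then succeeds depends on the guest state recorded in the snapshot.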
Thank you.
9 years, 6 months
[libvirt-users] [Libvirt Users]how to provide password authentication for qemu driver
by Dhaval_Shah1@dell.com
Hi All,
I am using
Compiled against library: libvirt 1.2.9
Using library: libvirt 1.2.9
Using API: QEMU 1.2.9
Running hypervisor: QEMU 2.1.2
I want the user to provide username and password authentication through
virConnectPtr virConnectOpenAuth(const char *name,
virConnectAuthPtr auth,
unsigned int flags) to log in remotely to the QEMU hypervisor.
But it is not taking the user-provided credentials, because internally it
calls the qemu driver function below:
static virDrvOpenStatus qemuConnectOpen(virConnectPtr conn,
virConnectAuthPtr auth ATTRIBUTE_UNUSED,
unsigned int flags)
so it does not take any user-provided auth parameters and internally calls
a separate function that prompts for the password. I don't want to provide
the password there.
In the case of Windows Hyper-V this works fine, since that driver takes the
auth parameters, but for the QEMU hypervisor I am facing this issue because
the auth parameter is marked ATTRIBUTE_UNUSED.
Can anyone help me with how I can achieve this?
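For what it's worth, with the qemu driver the credential callback is normally exercised by the remote transport before qemuConnectOpen ever runs, not by the driver itself. A minimal server-side sketch, assuming a qemu+tcp:// URI with SASL (the SASL user database is set up separately, e.g. with saslpasswd2):

```ini
# /etc/libvirt/libvirtd.conf (sketch)
listen_tcp = 1
auth_tcp = "sasl"
```

With this in place, virConnectOpenAuth("qemu+tcp://host/system", auth, 0) should invoke the supplied auth callback for a username and password, because authentication is handled at the transport layer.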
Thanks & Regards,
Dhaval Shah
9 years, 6 months
[libvirt-users] Limitations of macvtap devices?
by Lars Kellogg-Stedman
I am running OpenStack inside a libvirt guest that is connected to the
local network via a macvtap interface. My experience so far suggests
that a macvtap interface will not pass traffic with a source MAC
address other than the MAC address of the interface itself...for
example, if inside the guest eth0 is attached to a bridge.
Is that correct, or is there some setting that will make that work?
Outbound traffic doesn't seem to be a problem (I can see, for example,
dhcp requests on the local network), but replies get dropped before
they reach the guest.
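For context, the attachment described is a type='direct' interface; newer libvirt (1.2.10+) adds a trustGuestRxFilters attribute that lets the host honor guest-requested MAC filter changes, though whether it covers a full bridge-in-guest setup is unclear. A sketch of the domain XML in question:

```xml
<interface type='direct' trustGuestRxFilters='yes'>
  <source dev='eth0' mode='bridge'/>
  <model type='virtio'/>
</interface>
```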
Thanks,
--
Lars Kellogg-Stedman <lars(a)redhat.com> | larsks @ {freenode,twitter,github}
Cloud Engineering / OpenStack | http://blog.oddbit.com/
9 years, 6 months
[libvirt-users] Sometimes libvirt fails to update domain block file after blockcommit.
by Matthew Schumacher
Posted to https://bugzilla.redhat.com/show_bug.cgi?id=1217185
I just stumbled on another bug while snapshotting and think it's related
to 1210903 and 1197592 as it seems like some sort of race condition
because it depends on what logging is in place and doesn't happen every
time.
Here are the details:
I wrote this test script to snapshot and commit over and over:
#!/bin/sh
while true; do
    echo "Starting snapshot test `date`"
    virsh snapshot-create-as test 20150429 20150429-backup \
        --disk-only --atomic
    virsh domblklist test
    virsh blockcommit test vda --active --pivot --verbose
    virsh snapshot-delete test 20150429 --metadata
    virsh domblklist test
    rm /glustervol1/vm/test/test.20150429
    echo "Ending snapshot test `date`"
    echo
    echo
    sleep 2
done
If I run libvirtd in the foreground with debug set to 1, I can't get it
to fail; it does what it's supposed to do, snapshotting and committing
over and over.
If I run libvirtd in the foreground with debug set to 3, I will always
eventually get this:
Starting snapshot test Wed Apr 29 09:34:34 AKDT 2015
Domain snapshot 20150429 created
Target Source
------------------------------------------------
vda /glustervol1/vm/test/test.20150429
hdc /dev/sr0
Block Commit: [100 %]
Successfully pivoted
Domain snapshot 20150429 deleted
Target Source
------------------------------------------------
vda /glustervol1/vm/test/test.20150429
hdc /dev/sr0
Ending snapshot test Wed Apr 29 09:34:35 AKDT 2015
Starting snapshot test Wed Apr 29 09:34:37 AKDT 2015
error: unsupported configuration: source for disk 'vda' is not a regular
file; refusing to generate external snapshot name
Target Source
------------------------------------------------
vda /glustervol1/vm/test/test.20150429
hdc /dev/sr0
error: internal error: qemu block name '/glustervol1/vm/test/test.qcow2'
doesn't match expected '/glustervol1/vm/test/test.20150429'
error: Domain snapshot not found: no domain snapshot with matching name
'20150429'
Target Source
------------------------------------------------
vda /glustervol1/vm/test/test.20150429
hdc /dev/sr0
rm: can't remove '/glustervol1/vm/test/test.20150429': No such file or
directory
Ending snapshot test Wed Apr 29 09:34:37 AKDT 2015
At this point libvirt is confused about which file is the backing store
because the first run did pivot after blockcommit, but didn't update the
block file. From the logs:
2015-04-29 17:33:41.052+0000: 25192: warning : qemuDomainObjTaint:1972 :
Domain id=2 name='test' uuid=4b9cc25b-68d1-4ce8-8a65-2a378e255e36 is
tainted: high-privileges
2015-04-29 17:34:37.322+0000: 25191: error :
virDomainSnapshotAlignDisks:609 : unsupported configuration: source for
disk 'vda' is not a regular file; refusing to generate external snapshot
name
2015-04-29 17:34:37.352+0000: 25194: error :
qemuMonitorJSONDiskNameLookup:3977 : internal error: unable to find
backing name for device drive-virtio-disk0
2015-04-29 17:34:37.354+0000: 25194: error :
qemuMonitorJSONDiskNameLookupOne:3914 : internal error: qemu block name
'/glustervol1/vm/test/test.qcow2' doesn't match expected
'/glustervol1/vm/test/test.20150429'
So libvirt insists that the block file is:
root@wasvirt2:/glustervol1/vm/waspbx# virsh domblklist test
Target Source
------------------------------------------------
vda /glustervol1/vm/test/test.20150429
hdc /dev/sr0
But that file isn't in use and isn't what qemu is using:
root@wasvirt2:/glustervol1/vm/waspbx# lsof | grep test
25300 /usr/bin/qemu-system-x86_64 /var/log/libvirt/qemu/test.log
25300 /usr/bin/qemu-system-x86_64 /var/log/libvirt/qemu/test.log
25300 /usr/bin/qemu-system-x86_64 /glustervol1/vm/test/test.qcow2
The only way to straighten this out is to destroy and start the domain.
9 years, 6 months
[libvirt-users] non failover equivalent to "virsh migrate --copy-storage-all"
by Andreas Buschmann
Hello,
I have two servers where I can push VMs from one to the other by issuing
the command
virsh migrate --live --persistent --copy-storage-all --verbose \
test6 qemu+ssh://kvmhost2/system
on kvmhost1. I can get the VM back by issuing the equivalent command on
kvmhost2:
virsh migrate --live --persistent --copy-storage-all --verbose \
test6 qemu+ssh://kvmhost1/system
virsh copies the local data file /data/vm/test6.qcow2 with the
filesystem over to the other server.
Is there a way to just copy the data file over to the second server,
without moving the VM?
I want the equivalent of doing these two migrations in sequence, but
without moving the VM.
The goal is to get a backup copy of a running system onto a second
system (kvmhost2).
The copy (snapshot) from kvmhost2 can then be copied to a backup server,
and can be used for a fast recovery point if kvmhost1 dies.
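A hedged alternative sketch that never moves the guest: virsh blockcopy can mirror a disk into a local file while the VM keeps running, and the copy can then be shipped to kvmhost2 (paths illustrative; note that older libvirt versions only allowed blockcopy on transient domains):

```shell
virsh blockcopy test6 vda /data/vm/test6-copy.qcow2 --wait --finish
scp /data/vm/test6-copy.qcow2 kvmhost2:/data/vm/
```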
Mit freundlichen Gruessen
Andreas Buschmann
--
Andreas Buschmann
[Senior Systems Engineer]
net.DE AG
9 years, 6 months
[libvirt-users] Semantics of "virsh migrate --copy-storage-all" vs. --copy-storage-inc
by Andreas Buschmann
Hello,
what is really the difference between
virsh migrate --copy-storage-all
and
virsh migrate --copy-storage-inc
?
There are some documents talking about NBD snapshots, but the user-visible
semantics are incomplete.
Where does the incremental stuff happen?
Does it require qcow2 files?
Does it work with raw files?
Does "--copy-storage-inc" require existing snapshots?
Does "--copy-storage-inc" require existing snapshots in an external file,
so that there is a visible chain [base] <- [sn1] <- [sn2]?
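As far as I can tell, --copy-storage-all streams the complete disk contents, while --copy-storage-inc assumes the destination already holds part of the image chain (typically a shared qcow2 base image) and streams only the increments on top of it. A sketch under that assumption (paths illustrative):

```shell
# one-time pre-seed on kvmhost2: the shared base of the qcow2 chain
scp kvmhost1:/data/vm/test6-base.qcow2 /data/vm/test6-base.qcow2
# on kvmhost1: only the data missing on kvmhost2 is streamed
virsh migrate --live --persistent --copy-storage-inc --verbose \
    test6 qemu+ssh://kvmhost2/system
```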
Background:
I am trying to understand what really happens, so I can use the tools
for a reasonable fast failover between two servers with local storage.
More background:
I have VMs and bare-metal servers that use expensive shared storage
like NetApp, EMC² VNX, HP and others, but the shared storage boxes all
require customer-visible maintenance downtimes every 12 to 20 months.
These downtimes need a long lead time and fall between 01:00 and 05:00.
Mit freundlichen Gruessen
Andreas Buschmann
--
Andreas Buschmann
[Senior Systems Engineer]
net.DE AG
9 years, 6 months
[libvirt-users] Virtual Smartcard GPG
by roky@openmailbox.org
Hi. Is it possible to use GPG on the host instead of NSS with virtual
smartcards? Please document how, or add support for it.
Can a virtual smartcard make the host less secure? If there are bugs in
GPG/NSS backend on the host can they be abused by untrusted code in the
vm?
9 years, 6 months