[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate MAC
addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd PIDs (within ~100
of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of randomness. I just
ran a test across 60 of our hosts: 43 of them shared their PID with at
least one other machine.
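The collision is easy to reproduce on paper: with a seed of 'time(NULL) ^ getpid()', any two hosts that restart in the same second and hand libvirtd the same PID compute the identical seed. A minimal sketch (the timestamp and PID values are made up for illustration):

```shell
# Reproducing the reported seed formula, time(NULL) ^ getpid(), in shell.
# The timestamp and PID below are hypothetical, but the point stands:
# hosts rebooted in the same second whose libvirtd gets the same PID
# compute identical seeds -- and thus identical "random" MAC addresses.
host_a_time=1433116800; host_a_pid=1657
host_b_time=1433116800; host_b_pid=1657

seed_a=$(( host_a_time ^ host_a_pid ))
seed_b=$(( host_b_time ^ host_b_pid ))

echo "seed A: $seed_a"
echo "seed B: $seed_b"
if [ "$seed_a" -eq "$seed_b" ]; then
    echo "collision: both hosts draw the same MAC sequence"
fi
```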
6 years, 6 months
[libvirt-users] virDomainCoreDumpWithFormat files created as root
by NoxDaFox
Greetings,
I am dumping a guest VM's memory for inspection using
"virDomainCoreDumpWithFormat", and the created files appear to belong to
root (both user and group).
I have searched around but didn't find an answer. Is there a way to
instruct QEMU to create those files under a different user?
Thank you.
9 years, 1 month
[libvirt-users] Snapshots vs filesystem shares
by anonym
[Please keep me Cc:ed since I'm not subscribed to the list!]
Hi list,
The tl;dr is: with Libvirt/QEMU, should it be possible to mount a
filesystem share (of type='mount') after restoring from a snapshot that
was taken when the filesystem share wasn't mounted?
It's pretty clear that Libvirt/QEMU (currently, at least) doesn't
support taking snapshots of a live guest which has active filesystem
shares (type='mount'). I get this error:
Call to virDomainSaveFlags failed: internal error: unable to execute
QEMU command 'migrate': Migration is disabled when VirtFS export
path '${TARGET_PATH}' is mounted in the guest using mount_tag
'$TAG' (Libvirt::Error)
I have a use case where I very much would like this combination. It
wouldn't be a problem if the filesystem shares had to be temporarily
unmounted while taking the snapshot and mounted again after restoring
it. However, while trying that, `mount` hangs when mounting a filesystem
share again *after* restoring the snapshot (unmounting and remounting
works perfectly before that, of course).
Nothing is reported in syslog.
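For reference, the sequence I attempted looks roughly like this (guest name, state file, tag and mount point are placeholders):

```shell
# Placeholder names throughout; this is the attempted sequence.
# Inside the guest, before saving:
umount /mnt/share                  # unmounting works fine here

# On the host (virsh save is the virDomainSaveFlags equivalent):
virsh save myguest /tmp/myguest.state
virsh restore /tmp/myguest.state

# Inside the guest, after restoring -- this is the step that hangs:
mount -t 9p -o trans=virtio share0 /mnt/share
```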
I've also tried unloading combinations of the various 9p and virtio
related modules (like 9p, 9pnet_virtio, 9pnet, virtio, etc.) before
taking the snapshot, and then reloading them after restoring it, in the
hope of getting them back into a sane state (or whatever the issue is).
But then I've seen errors like this in syslog:
9pnet: Installing 9P2000 support
virtio-pci 0000:00:08.0: irq 42 for MSI/MSI-X
virtio-pci 0000:00:08.0: irq 43 for MSI/MSI-X
virtio-pci 0000:00:08.0: irq 42 for MSI/MSI-X
virtio-pci 0000:00:08.0: irq 43 for MSI/MSI-X
9pnet_virtio: probe of virtio3 failed with error -2
FS-Cache: Loaded
9p: Installing v9fs 9p2000 file system support
FS-Cache: Netfs '9p' registered for caching
9pnet_virtio: no channels available
and `mount` complains that the source (tag) doesn't exist when trying to
mount the filesystem share again. For the record, the mount command I
always use is simply:
mount -t 9p -o trans=virtio $TAG $TARGET_DIR
I've tried setting `-o debug=0xfff` but I get no debug info at all.
Is it expected behaviour that filesystem shares get into a broken state
after restoring a snapshot?
If it's of any relevance, here's some more context:
* The host is running Debian Jessie with Linux 3.16.7-ckt9-3~deb8u1,
Libvirt 1.2.9, QEMU 2.1.
* The guest is Tails (https://tails.boum.org) which is Debian Wheezy
with Linux 3.16.7-ckt9-3.
I doubt the exact versions matter, since I tested this ~two years ago
and got (IIRC) the exact same results.
Cheers!
9 years, 6 months
Re: [libvirt-users] [ovirt-users] Bug in Snapshot Removing
by Soeren Malchow
Small addition again:
This error shows up in the log while removing snapshots WITHOUT rendering the VMs unresponsive:
—
Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net libvirtd[1657]: Timed out during operation: cannot acquire state change lock
Jun 01 01:33:45 mc-dc3ham-compute-02-live.mc.mcon.net vdsm[6839]: vdsm vm.Vm ERROR vmId=`56848f4a-cd73-4eda-bf79-7eb80ae569a9`::Error getting block job info
Traceback (most recent call last):
File "/usr/share/vdsm/virt/vm.py", line 5759, in queryBlockJobs…
—
From: Soeren Malchow <soeren.malchow(a)mcon.net>
Date: Monday 1 June 2015 00:56
To: "libvirt-users(a)redhat.com" <libvirt-users(a)redhat.com>, users <users(a)ovirt.org>
Subject: [ovirt-users] Bug in Snapshot Removing
Dear all
I am not sure if my earlier mail simply did not get any attention among all the other mails, so this time it is also going to the libvirt mailing list.
I am experiencing a problem with VMs becoming unresponsive when removing snapshots (Live Merge), and I think there is a serious problem.
Here are the previous mails,
http://lists.ovirt.org/pipermail/users/2015-May/033083.html
The problem is on a system with everything on the latest version: CentOS 7.1 and oVirt 3.5.2.1 with all upgrades applied.
This problem did NOT exist before upgrading to CentOS 7.1, in an environment running oVirt 3.5.0 and 3.5.1 on Fedora 20 with the libvirt-preview repo activated.
I think this is a bug in libvirt, not oVirt itself, but I am not sure. The actual file throwing the exception is in VDSM (/usr/share/vdsm/virt/vm.py, line 697).
We are very willing to help, test and supply log files in any way we can.
Regards
Soeren
9 years, 6 months
[libvirt-users] Freeze Windows Guests For Consistent Storage Snapshots
by Payes Anand
Hi,
Is it possible to freeze Windows guests for a consistent storage-level
snapshot?
I am using openstack icehouse on centos 6.6
Hypervisor: KVM
Libvirt: 0.10.2
Qemu: 0.10.2
Guest OS: Windows 7 and Windows Server 2008
I was able to freeze Centos guests by issuing the command:
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-freeze"}'
For CentOS guests, I enabled access between compute nodes and guests
through a socket by setting the image metadata parameter
hw_qemu_guest_agent=yes, and then installing qemu-guest-agent inside the
guest.
What steps do I have to follow for Windows?
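My current understanding, which I'd like confirmed, is that the same agent channel works once the Windows guest agent (shipped with virtio-win) is installed inside the guest; a sketch, with <guest_ID> as a placeholder:

```shell
# Assumed workflow -- same agent channel as on CentOS, with
# qemu-guest-agent from virtio-win installed inside the Windows guest.
# Freeze all guest filesystems (VSS-backed on Windows):
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-freeze"}'
# ... take the storage-level snapshot here ...
# Thaw again:
virsh qemu-agent-command <guest_ID> '{"execute":"guest-fsfreeze-thaw"}'
```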
Regards,
Payes
9 years, 6 months
[libvirt-users] Bug in Snapshot Removing
by Soeren Malchow
Dear all
I am not sure if my earlier mail simply did not get any attention among all the other mails, so this time it is also going to the libvirt mailing list.
I am experiencing a problem with VMs becoming unresponsive when removing snapshots (Live Merge), and I think there is a serious problem.
Here are the previous mails,
http://lists.ovirt.org/pipermail/users/2015-May/033083.html
The problem is on a system with everything on the latest version: CentOS 7.1 and oVirt 3.5.2.1 with all upgrades applied.
This problem did NOT exist before upgrading to CentOS 7.1, in an environment running oVirt 3.5.0 and 3.5.1 on Fedora 20 with the libvirt-preview repo activated.
I think this is a bug in libvirt, not oVirt itself, but I am not sure. The actual file throwing the exception is in VDSM (/usr/share/vdsm/virt/vm.py, line 697).
We are very willing to help, test and supply log files in any way we can.
Regards
Soeren
9 years, 6 months
[libvirt-users] snapshots and vmdk
by Boylan, Ross
Does libvirt support snapshotting when the virtual disk comes from a vmdk file?
http://wiki.libvirt.org/page/Snapshots#Desired_functionality seems to say no, since it says "'virsh snapshot', which requires all disk images to be qcow2".
OTOH, man virsh, http://libvirt.org/formatsnapshot.html, and http://libvirt.org/formatdomain.html#elementsDisks seem to indicate more flexibility, though I see nothing about vmdk.
If there is support for vmdk, is it just for external snapshots, or do internal snapshots work?
Currently running libvirt 0.9.12, qemu-kvm 1.1.2 (though I could use VMware Workstation), Linux kernel 3.2.0.
Thanks.
Ross Boylan
9 years, 6 months
[libvirt-users] libvirt and VMWare Workstation Shared Server mode (of GSX history)
by vincent@cojot.name
Hi everyone,
I searched previous postings and couldn't find a definitive answer on
this.
I run a small lab of RHEL/CentOS-based servers on which VMware
Workstation is running on a non-standard port, but still manageable by
tools like vmrun (and the Fusion or Workstation GUI, of course).
I'm trying to use virsh with this setup and getting the following error
from both RHEL6 and RHEL7:
$ virsh -c esx://user1@server1:943/?no_verify=1
Enter user1's password for server1:
error: internal error Expecting product 'gsx' or 'esx' or 'embeddedEsx' or 'vpx' but found 'ws'
error: failed to connect to the hypervisor
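For comparison, the connection URIs that the esx driver does accept, per that error message (the URI forms below are assumed, with placeholder hostnames), look like:

```shell
# URI schemes matching the products the error message lists as accepted;
# hostnames and inventory paths here are placeholders.
virsh -c 'esx://user1@server1/?no_verify=1'                # standalone ESX(i)
virsh -c 'gsx://user1@server1/?no_verify=1'                # legacy GSX / VMware Server
virsh -c 'vpx://user1@vcenter/datacenter/cluster/server1'  # via vCenter
# Workstation's 'ws-shared' mode reports product 'ws', which is not in
# that list -- hence the "but found 'ws'" failure.
```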
I tried 'gsx' as a type too, but to no avail.
The mode that I'm using is what VMware calls 'ws-shared' mode:
$ /usr/bin/vmrun -T ws-shared -h https://localhost:943/sdk -u user1
listRegisteredVM
Host password:
Total registered VMs: 35
[ha-datacenter/standard] VCS41_node1/VCS4.vmx
[ha-datacenter/standard] VCS41_node2/VCS4.vmx
[ha-datacenter/standard] VCS41_node3/VCS4.vmx
[ha-datacenter/standard] VCS41_node4/VCS4.vmx
[ha-datacenter/standard] VCS51_node1/VCS5.vmx
[ha-datacenter/standard] VCS51_node2/VCS5.vmx
[ha-datacenter/standard] VCS51_node3/VCS5.vmx
[ha-datacenter/standard] VCS51_node4/VCS5.vmx
[ha-datacenter/standard] DOS622/dos.vmx
[ha-datacenter/standard] Solaris8/solaris8.vmx
[ha-datacenter/standard] Solaris10/solaris10.vmx
[ha-datacenter/standard] VCS60_node1/VCS6.vmx
[ha-datacenter/standard] VCS60_node2/VCS6.vmx
[ha-datacenter/standard] MacOSX_107/Mac_OS_X_Lion.vmx
[...]
Would that be something easy to hack/add for the libvirt versions I'm
using (RHEL6/7), even if this means recompiling some src.rpms?
Any suggestions?
Thanks,
,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'`^`'~*-,
Vincent S. Cojot, Computer Engineering. STEP project. _.,-*~'`^`'~*-,._.,-*~
Ecole Polytechnique de Montreal, Comite Micro-Informatique. _.,-*~'`^`'~*-,.
Linux Xview/OpenLook resources page _.,-*~'`^`'~*-,._.,-*~'`^`'~*-,._.,-*~'
http://step.polymtl.ca/~coyote _.,-*~'`^`'~*-,._ coyote(a)NOSPAM4cojot.name
They cannot scare me with their empty spaces
Between stars - on stars where no human race is
I have it in me so much nearer home
To scare myself with my own desert places. - Robert Frost
9 years, 6 months
[libvirt-users] What's the holdup on WHQL-signed virtio drivers?
by pixelfairy
Why is it so hard to get signed virtio drivers? Is it politics? A license
issue? Can someone involved with, or who knows of, these decisions comment?
I don't want to start a thread of speculation; we all know where that
leads, and I don't want it drowning out any answers.
9 years, 6 months