[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it always fails: virsh reports success but nothing actually changes. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds.
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
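Instead of relying on a fixed sleep, a more robust approach is to retry the detach and poll the live XML until the interface is actually gone. A rough sketch (domain name and MAC taken from the example above; the 30x2s timeout is arbitrary):
# retry-detach.sh -- keep trying until the interface disappears from the live XML
dom=rhel7.2
mac=52:54:00:98:c4:a0
for i in $(seq 1 30); do
    # the detach request may be ignored while the guest is still booting
    virsh detach-interface "$dom" network "$mac" || true
    sleep 2
    if ! virsh dumpxml "$dom" | grep -q "$mac"; then
        echo "interface gone after attempt $i"
        break
    fi
done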
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity unit='bytes'>6997998301184</capacity>
<allocation unit='bytes'>10309227031</allocation>
<available unit='bytes'>6977204658176</available>
<source>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
<name>libvirt-pool</name>
<auth type='ceph' username='libvirt'>
<secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
</source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating the disk manually.
<disk type='network' device='disk'>
<auth username='libvirt'>
<secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
<source protocol='rbd' name='libvirt-pool/kvm01-storage'>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how can I use virt-install to manage my rbd disks?
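For reference, this is the kind of thing I expected to work; only a sketch of the usual vol-create-as / --disk vol= syntax (the volume name and size here are made up for illustration, and it may simply not be supported by virt-install 1.0.1):
# create a raw volume inside the defined rbd pool, then list it
virsh vol-create-as myrbdpool kvm01-storage 10G --format raw
virsh vol-list myrbdpool
# reference the pool-backed volume from virt-install instead of hand-editing the XML
virt-install \
  --name ceph-test.powercraft.nl \
  --ram 2048 --vcpus 2 \
  --disk vol=myrbdpool/kvm01-storage,bus=virtio \
  --import --noautoconsole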
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate mac
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each other
2) All the host machines had fairly similar libvirtd pids (within ~100 PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
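As a quick illustration of how little entropy there is in that seed: two hosts that come back up in the same second and whose libvirtd happens to get the same PID end up with an identical seed (the PID value below is hypothetical):
# seed = time(NULL) ^ getpid()  -- same second + same pid => same seed on both hosts
pid=1234            # hypothetical libvirtd pid, identical on both hosts
now=$(date +%s)     # same restart second on both hosts
echo $(( now ^ pid ))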
[libvirt-users] kvm/libvirt on CentOS7 w/Windows 10 Pro guest
by Benjammin2068
Hey all,
New to list, so I apologize if this has been asked a bunch already...
Is there something I'm missing with Windows 10 as a guest that keeps Windows Updates from nuking the boot process?
I just did an orderly shutdown and Windows updated itself <I forgot to disable updates in time>, only to reboot to the diagnostics screen, which couldn't repair anything. Going to the command prompt and doing the usual "bootrec /fixmbr, /fixboot and /RebuildBcd" didn't help.
This has happened a few times. I can't believe how fragile Win10 Pro is while running in a VM. (It has happened on a couple of machines I've been experimenting with -- both running the same OS, but on different hardware.)
I just saw the FAQ about the libvirt repo with the virtio drivers for Windows... I need to go read more on it, but in the meantime, is there any other smoking gun I'm not aware of (after lots of Google searching)?
Thanks,
-Ben
[libvirt-users] vga passthrough fails
by Lying
Hello, I have run into a problem: my display does not light up while the guest is running, although checking the guest with the "ping" command succeeds. Below are my XML and the section I followed from https://libvirt.org/formatdomain.html#elementsHostDev:
<hostdev mode='subsystem' type='pci' managed='yes'>
<driver name='vfio' />
<source>
<address domain='0x0000' bus='0x06' slot='0x02' function='0x0'/>
</source>
</hostdev>
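A few host-side checks that are commonly suggested for this kind of passthrough problem; only a sketch, using the PCI address 0000:06:02.0 from the XML above:
# is the IOMMU enabled at all?
dmesg | grep -i -e DMAR -e IOMMU
# which IOMMU group does the GPU sit in (other devices in the same group must be assigned too)?
find /sys/kernel/iommu_groups/ -type l | grep '0000:06:02.0'
# which kernel driver is bound to the device (should show vfio-pci while assigned)?
lspci -nnk -s 06:02.0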
[libvirt-users] virtlock - a VM goes read-only
by Branimir Pejakovic
Dear colleagues,
I am facing a problem that has been troubling me for the last week and a half. Please help or offer some guidance if you are able.
I have a non-prod POC environment with 2 fully updated CentOS7 hypervisors and an NFS filer that serves as VM image storage. The overall environment works exceptionally well. However, starting a few weeks ago, I have been trying to implement virtlock in order to prevent a VM from running on 2 hypervisors at the same time.
Here is a description of how the environment looks in terms of virtlock configuration on both hypervisors:
-- Content of /etc/libvirt/qemu.conf --
lock_manager = "lockd"
Only the above line is uncommented for direct locking.
# libvirtd --version; python -c "import platform; print(platform.platform())"; virtlockd -V
libvirtd (libvirt) 3.2.0
Linux-3.10.0-693.2.2.el7.x86_64-x86_64-with-centos-7.4.1708-Core
virtlockd (libvirt) 3.2.0
# getenforce
Permissive
Here is the issue:
h1 # virsh list
Id Name State
----------------------------------------------------
1 test09 running
h1 # virsh domblklist test09
Target Source
------------------------------------------------
vda /storage_nfs/images_001/test09.qcow2
h1 #
h2 # virsh list
Id Name State
----------------------------------------------------
h2 # virsh list --all | grep test09
- test09 shut off
h2 # virsh start test09
error: Failed to start domain test09
error: resource busy: Lockspace resource
'/storage_nfs/images_001/test09.qcow2' is locked
h2 # virsh list
Id Name State
----------------------------------------------------
h2 #
Before I start test09, I open a console to the guest and observe what is going on in it. Once I try to start test09 on the h2 hypervisor (and get the message about the locked resource), I can see the following messages in the console, and the VM goes into read-only mode:
on test09's console:
[ 567.394148] blk_update_request: I/O error, dev vda, sector 13296056
[ 567.395883] blk_update_request: I/O error, dev vda, sector 13296056
[ 572.871905] blk_update_request: I/O error, dev vda, sector 8654040
[ 572.872627] Aborting journal on device vda1-8.
[ 572.873978] blk_update_request: I/O error, dev vda, sector 8652800
[ 572.874707] Buffer I/O error on dev vda1, logical block 1081344, lost sync page write
[ 572.875472] blk_update_request: I/O error, dev vda, sector 2048
[ 572.876009] Buffer I/O error on dev vda1, logical block 0, lost sync page write
[ 572.876727] EXT4-fs error (device vda1): ext4_journal_check_start:56: Detected aborted journal
[ 572.878061] JBD2: Error -5 detected when updating journal superblock for vda1-8.
[ 572.878807] EXT4-fs (vda1): Remounting filesystem read-only
[ 572.879311] EXT4-fs (vda1): previous I/O error to superblock detected
[ 572.880937] blk_update_request: I/O error, dev vda, sector 2048
[ 572.881538] Buffer I/O error on dev vda1, logical block 0, lost sync page write
I also observe the guest's log:
-- /var/log/libvirt/qemu/test09.log --
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
block I/O error in device 'drive-virtio-disk0': Permission denied (13)
If it helps, here is the disk portion of an XML file:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/storage_nfs/images_001/test09.qcow2'/>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
I usually do implement SELinux on a hypervisor to isolate guests even further, but this time I set it to permissive mode just to rule out the SELinux factor. The same thing happens when SELinux is in enforcing mode (virt_use_nfs is set to on in that case), and audit2why doesn't report any anomalies when parsing the audit logs.
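Given the "Permission denied (13)" errors in the guest log, one more thing worth ruling out is the image file's ownership or mode being changed on the NFS export at the moment the start attempt on h2 fails. A rough check from h1 (path from above):
# watch ownership/permissions of the running guest's image while "virsh start test09" is tried on h2
watch -n 1 stat -c '%U:%G %a %n' /storage_nfs/images_001/test09.qcow2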
I have tried to use indirect locking via the same filer, with a separate export for the hashes, by uncommenting the following line in /etc/libvirt/qemu-lockd.conf:
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
In this case the hashes are normally created on the NFS export mounted
under /var/lib/libvirt/lockd/files. I have also tried playing with both
QCOW2 and raw disk images for VMs (and even with XFS/ext4 based guests) but
the outcome is always the same. I have a couple of KVM books and consulted them on this topic, as well as the Red Hat and SUSE docs, but the configuration instructions are, naturally, pretty much the same. I saw that some colleagues posted a few emails (e.g. https://www.redhat.com/archives/libvirt-users/2015-September/msg00004.html) to the list related to virtlock, but it does not seem to be the same issue.
I have also, as a last resort, completely disabled SELinux, rebooted both
hypervisors, created a new vm, repeated all the steps listed above but with
the same results.
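For completeness, a quick way to confirm that the indirect lockspace is actually being used while a guest runs; a sketch, assuming the separate export is mounted at the path from qemu-lockd.conf above:
# hash files should appear here while the guest is running
ls -l /var/lib/libvirt/lockd/files
# fcntl locks currently held on the hypervisor (virtlockd should show up here)
lslocks | grep -i -e virtlockd -e lockd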
Now, I am pretty sure that I am missing something simple here since this is
a standard feature and should work out of the box if set correctly but so
far I cannot see what I am missing.
I would really appreciate any tip/help.
Thank you very much!!
Regards,
Branimir
[libvirt-users] Does libvirt-sanlock support network disk?
by Han Han
Hello,
As we know, libvirt sanlock supports file-type storage. I wonder if it also supports network storage.
I tried iSCSI, but found it didn't generate any resource file:
Version: qemu-2.10 libvirt-3.9 sanlock-3.5
1. Set configuration:
qemu.conf:
lock_manager = "sanlock"
qemu-sanlock.conf:
auto_disk_leases = 1
disk_lease_dir = "/var/lib/libvirt/sanlock"
host_id = 1
user = "sanlock"
group = "sanlock"
# systemctl restart sanlock
# systemctl restart libvirtd
2. Start a VM with an iSCSI disk and check for a resource file
VM disk xml:
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<source protocol='iscsi' name='iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.cb4bfb00df2f/0'>
<host name='xx.xx.xx.xx' port='3260'/>
</source>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
Start VM:
# virsh start iscsi
Domain iscsi started
Check resource file:
# ls /var/lib/libvirt/sanlock
__LIBVIRT__DISKS__
No resource file generated.
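Besides looking at the lease directory, the sanlock daemon can be asked directly which lockspaces and resources it currently holds (a sketch, assuming the sanlock CLI tools are installed):
# lockspaces (s ...) and resources (r ...) registered with the daemon
sanlock client status
# dump the on-disk lockspace that auto_disk_leases created
sanlock direct dump /var/lib/libvirt/sanlock/__LIBVIRT__DISKS__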
So, does libvirt sanlock only support file or block type storage?
--
Han Han
Quality Engineer
Redhat.
Email: hhan(a)redhat.com
Phone: +861065339333
[libvirt-users] Urgent: virsh change-media runs into an abrupt shutdown of libvirtd
by Holger Schranz
Hello,
I use the virsh change-media command, and since moving to the combination of:
etcsvms5:/kvm/CS8400/M1 # virsh version
Compiled against library: libvirt 3.9.0
Using library: libvirt 3.9.0
Using API: QEMU 3.9.0
Running hypervisor: QEMU 2.10.1
etcsvms5:/kvm/CS8400/M1 #
I have been getting the following problem:
----------------------
etcsvms5:/kvm/CS8400/M1 # virsh change-media S5VCS84M1-VLP0 hdc --eject
Successfully ejected media.
etcsvms5:/kvm/CS8400/M1 # virsh change-media S5VCS84M1-VLP0 hdc
/home/kvm/etcsdvmb/Medien/V7.0A/CS_licenses/key-cd_VCS85FTSVTL.iso --insert
error: Disconnected from qemu:///system due to end of file
error: Failed to complete action insert on media
error: End of file while reading data: Input/output error
----------------------
journalctl -b | less shows the following:
Nov 14 10:11:32 etcsvms5 kernel: libvirtd[10178]: segfault at 10 ip 00007f49ff969170 sp 00007f4a069cb940 error 4 in libvirt_driver_qemu.so[7f49ff8d9000+172000]
Nov 14 10:11:37 etcsvms5 systemd-coredump[42152]: Process 10177 (libvirtd) of user 0 dumped core.
----------------------
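Since the journal shows systemd-coredump catching the crash, a backtrace from that core would probably help narrow this down. A sketch, assuming coredumpctl and the libvirt debuginfo packages are available:
# list captured cores and open the most recent libvirtd core in gdb
coredumpctl list libvirtd
coredumpctl gdb libvirtd
# inside gdb: "bt" (or "thread apply all bt") gives the backtrace for a bug report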
In the end, all open consoles of all the virtual machines closed, and virt-manager did as well.
A restart of libvirtd is possible but does not help.
The virsh change-media commands are part of our procedures and have been used for a long time. I need the change-media function for the installation of our test systems, so I would appreciate urgent help, please.
Best regards
Holger
[libvirt-users] Issues on hibernated domains
by Francesc Guasch
Hello.
I am using libvirt and KVM for a VDI project. It works fine,
great job ! I have only a small issue that bothers some
users from time to time.
This is libvirt-bin 2.5.0-3ubuntu5.5.
When restoring a saved domain that has been down for many hours or days, its network address may have been taken by some other virtual machine, and then both are in conflict.
I found this patch:
https://www.redhat.com/archives/libvir-list/2016-October/msg00561.html
I tried setting it on my host, but it silently discards my settings after virsh net-edit. So it may not be applied in the 2.5.0 release?
That's what I tried:
<dhcp>
<leasetime>24h</leasetime>
<range start='192.168.122.2' end='192.168.122.254'/>
</dhcp>
I also tried leasetime -1 with the same results.
I also found out about adding 'infinite' to the dhcp-range in /var/lib/libvirt/dnsmasq/default.conf, but this file has a comment saying it shouldn't be edited and that changes should be made with net-edit. However, I don't know how to add this tag to the network.
dhcp-range=192.168.122.2,192.168.122.254,infinite
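To see where the setting gets lost, it may help to compare what the network XML keeps with what libvirt actually writes for dnsmasq (a sketch):
# does the edited network XML still contain the leasetime element?
virsh net-dumpxml default | grep -i leasetime
# what dhcp-range (and lease time) did libvirt generate for dnsmasq?
grep dhcp-range /var/lib/libvirt/dnsmasq/default.conf
# arguments of the dnsmasq instance serving the default network
ps -ef | grep 'dnsmasq.*default'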
Any hints? Thank you for your time.
--
Francesc Guasch
ETSETB - UPC