[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity unit='bytes'>6997998301184</capacity>
<allocation unit='bytes'>10309227031</allocation>
<available unit='bytes'>6977204658176</available>
<source>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
<name>libvirt-pool</name>
<auth type='ceph' username='libvirt'>
<secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
</source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
<auth username='libvirt'>
<secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
<source protocol='rbd' name='libvirt-pool/kvm01-storage'>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
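For what it's worth, the volume itself should be creatable through the
storage pool API; a minimal, untested sketch with the libvirt Python
bindings (the pool name comes from the XML above; the volume name and
size are made up):
import libvirt

conn = libvirt.open('qemu:///system')
pool = conn.storagePoolLookupByName('myrbdpool')

# Hypothetical volume name and size; an rbd volume needs only a name
# and a capacity (there is no <format> element for rbd volumes).
volXML = """
<volume>
  <name>kvm01-storage</name>
  <capacity unit='G'>10</capacity>
</volume>"""
vol = pool.createXML(volXML, 0)
print(vol.path())
A volume created this way can then be referenced from a domain's <disk>
element like the one above.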
Kind regards,
Jelle de Jong
6 years, 2 months
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate
MAC addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the pid?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
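A standalone sketch (plain Python, not libvirt code; the time and PID
ranges are invented to match the scenario above) of how badly
time(NULL) ^ getpid() collides across a fleet:
base_time = 1433900000                 # all hosts restarted in a ~3 s window
times = range(base_time, base_time + 3)
pids = range(650, 750)                 # PIDs within ~100 of each other

seeds = {t ^ p for t in times for p in pids}
print('%d (time, pid) pairs -> %d distinct seeds'
      % (len(times) * len(pids), len(seeds)))
XOR of two values that differ only in their low bits maps many
(time, pid) pairs onto the same seed, and identical seeds produce
identical MAC addresses.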
6 years, 5 months
[libvirt-users] unhelpful error message on failed "virsh migrate"
by Andreas Buschmann
Hello,
I have two servers where I can push VMs from one to the other by issuing
the command
virsh migrate --live --persistent --copy-storage-all --verbose \
test6 qemu+ssh://kvmhost2/system
on kvmhost1. I can get the VM back by issuing the equivalent command on
kvmhost2:
virsh migrate --live --persistent --copy-storage-all --verbose \
test6 qemu+ssh://kvmhost1/system
After an update (because of the glibc bug) it only works for some VMs
and fails for others. The error message is unhelpful:
virsh --debug 4 migrate --live --persistent --copy-storage-all --verbose \
vmware-mgmt qemu+ssh://kvmhost1/system
error: internal error: info migration reply was missing return status
Is there a way to get a more helpful error message?
migrating VMs only works for a very limited subset of the VMs on the
system in the direction kvmhost2 --> kvmhost1.
Stopping the VMs, copying them over to kvmhost1 and restarting still
works, and I can even migrate them back.
How do I debug this problem?
One possible way would be to stop all VMs on kvmhost2, manually copy
them to kvmhost1 and start them there, and afterwards reinstall all
packages (or the whole server if that doesn't help).
But that would be a workaround, and I still would not know what the
problem is.
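One way that might yield a more detailed error is to turn up libvirtd
logging on both hosts and retry the migration, e.g. in
/etc/libvirt/libvirtd.conf (the filter list below is only an example,
adjust as needed), then restart libvirtd:
log_filters="1:qemu 1:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
The daemon log may then show the underlying QEMU monitor exchange that
produced the "missing return status" reply.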
The system is CentOS 7.2 now and was CentOS 7.1 before the upgrade, with
the qemu-kvm-rhev repo added for qemu.
Kind regards
Andreas Buschmann
--
Andreas Buschmann
[Senior Systems Engineer]
net.DE AG
8 years, 5 months
[libvirt-users] Zombie processes being created when console buffer is full
by Peter Steele
We have been researching stuck zombie processes in our libvirt lxc
containers. What we found was:
1) Each zombie's parent was pid 1 (init, which symlinks to systemd).
2) In some cases the zombies were launched by systemd; in others the
zombie was inherited.
3) While the child is in the zombie state, the parent process's
(systemd's) /proc/1/status shows no pending signals.
4) Attaching gdb to systemd showed a single thread, blocked in
write() on /dev/console.
This write() to the console never returns. We operated under the
assumption that systemd's SIGCHLD handler sets a bit and a foreground
thread (the only thread) would see that child processes needed reaping.
While the single thread is stuck in write(), the reaping never takes
place.
So why is write() blocking? The answer seems to be that there is
nothing draining the console and eventually it blocks write() when its
buffers become full. When we attached to the container's console, the
buffer is cleared allowing systemd’s write() to return. The zombies are
then reaped and everything goes back to normal.
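As a crude host-side mitigation, something could periodically drain the
container's console so a full buffer can never block writers inside the
guest. A sketch with the libvirt Python bindings (the URI and the
container name are placeholders):
import libvirt

conn = libvirt.open('lxc:///')
dom = conn.lookupByName('mycontainer')   # placeholder container name

st = conn.newStream(0)
dom.openConsole(None, st, 0)             # attach to the default console
try:
    while True:
        data = st.recv(4096)             # read and discard guest output
        if not data:
            break
finally:
    st.abort()
    conn.close()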
Our “solution” was more of a workaround: systemd was altered to log
errors/warnings/etc. to /dev/null instead of /dev/console. This
prevented the problem, but only because the console buffer is unlikely
to fill up when systemd is generally the only thing that writes to it.
This is definitely a hack, though.
This may be a bug in the libvirt container library (you can't expect
something to periodically connect to a container's console to empty it
out). We suspect there may also be a configuration issue in our
containers with regard to the console.
Has anyone else observed this problem?
Peter
8 years, 7 months
[libvirt-users] Network speed between two guests on same host.
by Dominique Ramaekers
Hi,
I've got two hosts. Most of my guests are Windows systems. I'm using LANBench to test network performance.
1) From a physical PC to a guest (it doesn't matter on which host), I get almost 1Gb/s. They are connected through a 1Gb/s switch => very good!
2) From a guest on one host to a guest on the other host => about 1Gb/s => okay!
3) Between two guests on the same host => about 230Mb/s ???
The guests have their network on a bridged interface, so I tried the same test over a NAT interface => the same 230Mb/s...
Is there a way to tweak the connection speed between two guests running on the same host?
Thanks in advance...
A piece of the XML dump of one of the guests:
<domain type='kvm' id='17'>
<name>PCVIRTdra</name>
<uuid>925e4f9b-2c27-406d-bdd9-f3e0b44f28bb</uuid>
<title>PCVIRTdra - PC voor dra</title>
<description></description>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<memoryBacking>
<hugepages/>
</memoryBacking>
<vcpu placement='static'>8</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-utopic'>hvm</type>
<bootmenu enable='yes'/>
</os>
<features>
<acpi/>
<apic/>
<hyperv>
<relaxed state='on'/>
<vapic state='on'/>
<spinlocks state='on' retries='8191'/>
</hyperv>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>cpu64-rhel6</model>
<topology sockets='2' cores='2' threads='2'/>
</cpu>
<clock offset='localtime'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
<timer name='hypervclock' present='yes'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
....
<interface type='bridge'>
<mac address='52:54:00:b1:41:b3'/>
<source bridge='br0'/>
<target dev='vnet4'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
.....
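For reference, one knob that sometimes helps traffic between guests on
the same bridge is virtio-net multiqueue; a hedged example (the queue
count of 4 is arbitrary, and this assumes a reasonably recent libvirt
and QEMU) would extend the interface definition like this:
<interface type='bridge'>
  <mac address='52:54:00:b1:41:b3'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <driver name='vhost' queues='4'/>
</interface>
Inside a Linux guest the extra queues still have to be enabled with
ethtool -L, and the Windows virtio drivers handle this differently, so
treat it as a starting point rather than a fix.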
8 years, 8 months
[libvirt-users] ANNOUNCE: Oz 0.15.0 release
by Chris Lalancette
All,
I'm pleased to announce release 0.15.0 of Oz. Oz is a program for
doing automated installation of guest operating systems with limited input
from the user. Release 0.15.0 is a bugfix and feature release for Oz.
Some of the highlights between Oz 0.14.0 and 0.15.0 are:
* Make sure openssh-clients is included in CentOS builds
* Add support for Fedora-23
* Add customization support for Debian
* Use the ssh -i option to avoid "Too many authentication failures" with
many ssh keys
* Support "rawhide" as a Fedora version
* Add in Ubuntu 15.10 support
* Add in OpenSUSE 13.2 support
* Improve Oz's free port selection
A tarball and zipfile of this release are available on the Github releases
page: https://github.com/clalancette/oz/releases . Packages for Fedora-22,
Fedora-23, and EPEL-7 have been built in Koji and will eventually make
their way to stable. Instructions on how to get and use Oz are available
at http://github.com/clalancette/oz/wiki .
If you have questions or comments about Oz, please feel free to contact me
at clalancette at gmail.com, or open up an issue on the github page:
http://github.com/clalancette/oz/issues .
Thanks to everyone who contributed to this release through bug reports,
patches, and suggestions for improvement.
Chris Lalancette
8 years, 8 months
[libvirt-users] Libvirt-lxc packages deprecation
by syed muhammad
Folks,
I found this thread
<https://www.redhat.com/archives/libvirt-users/2015-August/msg00026.html>
that talks about deprecating the libvirt-lxc packages [1] [2] [3] in RHEL 7.1.
I am using [1] as a dependency for my application deployed on RHEL7.2
servers. For now it works fine but I am not sure about the future RHEL
releases.
Was the deprecation plan delayed or cancelled? Can someone please share
the latest status of this announcement?
[1] libvirt-daemon-driver-lxc
[2] libvirt-daemon-lxc
[3] libvirt-login-shell
Regards,
Qasim Sarfraz
8 years, 8 months
[libvirt-users] High i/o-wait in guest but no i/o-wait on host
by Dennis Jacobfeuerborn
Hi,
I have an issue with one of our CentOS 7 hypervisors. The system is a
Dell server equipped with Samsung 840 Pro SSDs.
What happens is that the guest does about 40MB/s of writes (mostly MySQL
inserts) and ends up becoming almost unusable because of high i/o-wait
numbers yet when I check on the host vmstat consistently shows i/o-wait
being 0 the whole time.
This is weird for two reasons:
a) I would not expect 40MB/s to create such an extreme congestion on the
SSDs.
b) If that congestion is in fact real I would expect to see non-zero
i/o-wait numbers on the host created by the corresponding qemu process.
Right now it looks like the i/o requests get stuck in the guest even
though there is no congestion on the host. Could this be a virtio bug?
Does anyone have an explanation for this strange behavior?
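One way to check whether the writes are actually reaching the host is
to watch the domain's block statistics from the host side; a sketch
with the libvirt Python bindings (the domain name and the 'vda' target
device are assumptions, substitute your own):
import time
import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest-name')    # placeholder domain name

prev = dom.blockStats('vda')             # assumed disk target device
while True:
    time.sleep(1)
    cur = dom.blockStats('vda')
    # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs)
    print('%.1f MB/s written through qemu' % ((cur[3] - prev[3]) / 1e6))
    prev = cur
If this shows the expected ~40MB/s, the requests are leaving the guest
and any queueing happens below qemu; if not, they are stuck in the
guest's virtio layer.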
Regards,
Dennis
8 years, 8 months
[libvirt-users] problem cloning storage pool volume
by Andrei Perietanu
I'm trying to clone a volume in a storage pool and I'm following the steps
described here:
http://libvirt.org/docs/libvirt-appdev-guide-python/en-US/html/libvirt_ap...
My code looks like:
destXML = """
<volume>
  <name>{name}.qcow2</name>
  <target>
    <path>/jffs2/disk0/{pool}/{name}.qcow2</path>
    <format type='qcow2'/>
    <permissions>
      <owner>-1</owner>
      <group>-1</group>
      <mode>0644</mode>
      <label>virt_image_t</label>
    </permissions>
  </target>
</volume>""".format(name=newDiskName, pool=poolName)

# look up the source volume in its pool, then clone it into pool 'sp'
srcDisk = tmpPool.storageVolLookupByName(vDisk)
newVol = sp.createXMLFrom(destXML, srcDisk, 0)
According to the steps described in the link, this should be it; but I
can't start any VMs that use this volume.
Also, comparing the volume XML files for source and destination, it
looks like the format is not copied over:
source:
<volume type='file'>
<name>k.qcow2</name>
<key>/jffs2/disk0/sp/vDisk.qcow2</key>
<source>
</source>
<capacity unit='bytes'>3221225472</capacity>
<allocation unit='bytes'>332075008</allocation>
<target>
<path>/jffs2/disk0/sp/vDisk.qcow2</path>
*<format type='qcow2'/>*
<permissions>
<mode>0644</mode>
<owner>0</owner>
<group>0</group>
</permissions>
<timestamps>
<atime>1455879195</atime>
<mtime>1455879171</mtime>
<ctime>1455879171</ctime>
</timestamps>
</target>
dest:
<volume type='file'>
<name>newDiskName.qcow2</name>
<key>/jffs2/disk0/sp/newDiskName.qcow2</key>
<source>
</source>
<capacity unit='bytes'>3221225472</capacity>
<allocation unit='bytes'>326868992</allocation>
<target>
<path>/jffs2/disk0/sp/newDiskName.qcow2</path>
*<format type='raw'/>*
<permissions>
<mode>0644</mode>
<owner>0</owner>
<group>0</group>
</permissions>
<timestamps>
<atime>1455878686</atime>
<mtime>1455878681</mtime>
<ctime>1455878681</ctime>
</timestamps>
</target>
</volume>
Is this a bug in libvirt's createXMLFrom? Or am I missing something?
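A possible workaround (an untested sketch, reusing srcDisk and sp from
the code above) is to start from the source volume's own XML so the
<format> element definitely carries over, changing only the name and
the path:
import xml.etree.ElementTree as ET

root = ET.fromstring(srcDisk.XMLDesc(0))
root.find('name').text = newDiskName + '.qcow2'
root.find('target/path').text = '/jffs2/disk0/%s/%s.qcow2' % (poolName, newDiskName)
key = root.find('key')
if key is not None:
    root.remove(key)                     # let libvirt assign a new key
newVol = sp.createXMLFrom(ET.tostring(root).decode(), srcDisk, 0)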
Thanks,
Andrei
8 years, 8 months