[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install?
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity unit='bytes'>6997998301184</capacity>
<allocation unit='bytes'>10309227031</allocation>
<available unit='bytes'>6977204658176</available>
<source>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
<name>libvirt-pool</name>
<auth type='ceph' username='libvirt'>
<secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
</source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually.
<disk type='network' device='disk'>
<auth username='libvirt'>
<secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
<source protocol='rbd' name='libvirt-pool/kvm01-storage'>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
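For reference, what I was hoping to be able to do is something along
these lines (volume name and size are placeholders, the remaining
virt-install options are elided):
# virsh vol-create-as myrbdpool kvm01-storage 20G
# virt-install ... --disk vol=myrbdpool/kvm01-storage,bus=virtio ...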
Kind regards,
Jelle de Jong
6 years, 3 months
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate mac
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
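To put a number on it, here is a standalone sketch (not libvirt code;
the timestamp and PID ranges below are made up) that counts how often
'time(NULL) ^ getpid()' collides across hosts that reboot within a few
seconds of each other and land on nearby PIDs:
/* count seed collisions for time(NULL) ^ getpid() over a narrow window */
#include <stdio.h>

int main(void)
{
    long base = 1433500000;            /* arbitrary boot timestamp */
    int secs = 10;                     /* hosts restarted within ~10 seconds */
    int pid_lo = 1200, pid_hi = 1300;  /* libvirtd PIDs within ~100 of each other */
    long pairs = 0, collisions = 0;

    for (int t1 = 0; t1 < secs; t1++)
        for (int p1 = pid_lo; p1 <= pid_hi; p1++)
            for (int t2 = t1; t2 < secs; t2++)
                for (int p2 = (t2 == t1 ? p1 + 1 : pid_lo); p2 <= pid_hi; p2++) {
                    pairs++;
                    if (((base + t1) ^ p1) == ((base + t2) ^ p2))
                        collisions++;
                }
    printf("%ld of %ld host pairs would share a seed\n", collisions, pairs);
    return 0;
}
Even with these narrow, made-up ranges the count comes out nonzero, so
duplicate seeds across a fleet like ours are not surprising.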
6 years, 6 months
[libvirt-users] No way to stop virStream error after guest stop
by José Luis Valencia Gutierrez
Hello everyone,
I am opening a virChannel (unix) to a domain and receiving data with a
non-blocking virStream using events. When the connected domain gets
stopped (which deletes the channel's unix socket) by calling destroy,
shutdown, pause or migrate on that domain while the stream is open, the
read event is triggered repeatedly and virStreamRecv returns 0 bytes
indicating EOF, but neither virStreamFinish nor
virStreamEventRemoveCallback manages to stop the stream from triggering
the event. Each time the event fires I get these errors:
libvirt: I/O Stream Utils error : this function is not supported by
the connection driver: virStreamRecv
libvirt: I/O Stream Utils error : this function is not supported by
the connection driver: virStreamFinish
libvirt: I/O Stream Utils error : this function is not supported by
the connection driver: virStreamEventRemoveCallback
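For reference, the receive pattern in question looks roughly like this
(a trimmed sketch; the buffer size and names are placeholders and error
handling is omitted):
#include <libvirt/libvirt.h>

static void stream_read_cb(virStreamPtr st, int events, void *opaque)
{
    char buf[4096];
    (void)opaque;

    if (events & VIR_STREAM_EVENT_READABLE) {
        int got = virStreamRecv(st, buf, sizeof(buf));
        if (got > 0) {
            /* consume the received data */
        } else if (got == 0) {
            /* EOF: try to tear the stream down */
            virStreamEventRemoveCallback(st);
            virStreamFinish(st);
            virStreamFree(st);
        }
    }
}

/* registered earlier with:
 * virStreamEventAddCallback(st, VIR_STREAM_EVENT_READABLE,
 *                           stream_read_cb, NULL, NULL); */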
Is there another way to stop getting these errors, or is this perhaps a bug?
Thanks in advance.
Jose Valencia
7 years, 6 months
[libvirt-users] Libvirtd freezes
by Stefano Ricci
Hello everyone
I am coming back to ask for a hand with a problem with libvirt that has
affected me since October 2016 and that I have not yet managed to solve.
I thought I would solve it by moving to a 4.9.x kernel with qemu
2.8.1.1 and libvirt 3.2.0.
I compile everything in a stable LFS 7.9 environment and all checks
pass without errors.
The strange thing is that the libvirtd process starts without errors,
but when it launches the qemu process to probe the system's
capabilities it freezes until the following process is killed:
/usr/bin/qemu-system-x86_64 -S -no-user-config -nodefaults
-nographic -machine none,accel=kvm:tcg -qmp
unix:/var/lib/libvirt/qemu/capabilities.monitor.sock,server,nowait
-pidfile /var/lib/libvirt/qemu/capabilities.pidfile -daemonize
Once that process is killed, libvirtd resumes running and can be used
with virsh.
Running qemu on its own, independently of libvirt, works fine: it
creates and runs virtual machines smoothly.
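To narrow it down, the same capabilities probe can be reproduced by
hand and spoken to over QMP directly (the socket path below is just a
scratch one, and socat is assumed to be available):
/usr/bin/qemu-system-x86_64 -S -no-user-config -nodefaults -nographic \
-machine none,accel=kvm:tcg \
-qmp unix:/tmp/caps-test.sock,server,nowait -daemonize
socat - UNIX-CONNECT:/tmp/caps-test.sock
{"execute": "qmp_capabilities"}
If qemu never prints its QMP greeting or never answers qmp_capabilities
here, the hang is below libvirtd rather than in libvirtd itself.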
Thanks in advance
Stefano Ricci
7 years, 7 months
[libvirt-users] building virtual desktops with libvirt, KVM, SPICE and GNOME
by Daniel Pocock
Can anybody comment on how to host virtual desktops on a headless server
using libvirt and KVM, with a SPICE client to access the virtual
desktop? Is there a standard way of doing this?
I've seen many fragments of information about how to do this but I
didn't come across a single guide describing the entire solution.
Search engines also return a lot of information about gaining remote
access to a real physical desktop but that is not what I'm looking for.
I've also come across many real-world scenarios where people are
manually starting VNC server processes for each user on different ports
but I was hoping to find out if there is a more standard way of doing
this now.
When I say "virtual desktop", the type of user experience I'm thinking
about is that named users can run a SPICE client anywhere and always
connect to the same host/desktop. E.g. if they leave some windows open,
disconnect, go to another physical machine and reconnect with the same
username they will see the same desktop with the same windows open.
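For illustration, the kind of setup I mean would look something like
this (host and guest names are placeholders): each named user gets a
guest with SPICE enabled,
<graphics type='spice' autoport='yes'>
<listen type='address' address='127.0.0.1'/>
</graphics>
and connects from wherever they are with something like
virt-viewer --connect qemu+ssh://kvmhost.example.org/system alice-desktop
but that still looks like the per-user, hand-managed approach rather
than a standard solution.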
Regards,
Daniel
7 years, 7 months
[libvirt-users] Does lxc support cputune/vcpusched option
by Peter Steele
I need a container that supports real-time threads. According to the
documentation, I can do something like this:
<domain type='lxc'>
<name>test1</name>
<uuid>e7446f55-3d59-4af0-90b2-d1498ac4370d</uuid>
...
<vcpu placement='auto'>2</vcpu>
<cputune>
<vcpusched vcpus='0-1' scheduler='fifo' priority='1'/>
</cputune>
...
</domain>
The document describes the vcpusched element as follows:
The optional vcpusched elements specifies the scheduler type (values
batch, idle, fifo, rr) for particular vCPU/IOThread threads
(based on vcpus and iothreads, leaving out vcpus/iothreads sets
the default). Valid vcpus values start at 0 through one less than the
number of vCPUs defined for the domain. Valid iothreads values are
described in the iothreadids description
<https://libvirt.org/formatdomain.html#elementsIOThreadsAllocation>. If
no iothreadids are defined, then libvirt numbers IOThreads from 1 to
the number of iothreads available for the domain. For real-time
schedulers (fifo, rr), priority must be specified as well (and is
ignored for non-real-time ones). The value range for the priority
depends on the host kernel (usually 1-99).
So I *think* my XML is correct for this, but it doesn't seem to work--I
still can't create real-time threads in my container. Am I missing
another configuration step somewhere?
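(For reference, the kind of check I mean is simply running, inside the
container:
chrt -f 1 sleep 1
i.e. asking for SCHED_FIFO at priority 1; the priority value is
arbitrary.)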
Peter
7 years, 7 months
[libvirt-users] virt-manager / remote desktop latency issues
by Daniel Pocock
Hi,
I'm using virt-viewer to launch remote desktop connections. The
connection to the KVM host uses ssh, and the virtual machines/guests
are running a GNOME desktop.
I've observed some latency issues; for example, changing focus from one
terminal to another is a bit sluggish. Typing isn't too bad.
The WAN connection is gigabit fibre from my home into a local data
center. Ping times are about 0.8ms to the physical server.
Smokeping shows there is no packet loss and the latency is constant.
Can I do anything to tweak the .ssh/config to make it better? It
occurred to me that adding "IPQoS lowdelay" might be useful as it
defaults to "throughput" for non-interactive connections. Maybe
virt-manager should add that on the ssh command line?
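For example, I was thinking of something like this in ~/.ssh/config
(the host name is a placeholder; the ControlMaster lines are only there
to avoid repeated handshakes):
Host kvmhost.example.org
    IPQoS lowdelay
    ControlMaster auto
    ControlPath ~/.ssh/cm-%r@%h-%p
    ControlPersist 10m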
Is there any way to see SPICE latency statistics in the virt-manager GUI?
Would any other SPICE viewer make a difference?
Are there any changes I should make to the GNOME desktop configuration
to make it work better through SPICE?
Can anybody make any other suggestions?
Regards,
Daniel
7 years, 8 months
[libvirt-users] Live migration with non-shared ZFS volume
by Daniel Kučera
Hi all,
I'm using ZFS on Linux block volumes as my VM storage and want to do live
migrations between hypervisors.
If I create a ZFS snapshot of the volume in use on the source host,
send it to the destination host (zfs send/recv) and then run live
migration with the VIR_MIGRATE_NON_SHARED_DISK flag, the migration
works OK.
But this procedure copies the whole disk twice, which is a huge downside.
The best solution would be if libvirt itself could send the incremental
data since the last snapshot, but this feature is not there (AFAIK).
So I am thinking about a workaround:
1. Create a snapshot using "virsh snapshot-create --xmlfile snap.xml
--disk-only --no-metadata test-domain", which will start writing snapshot
data into a temporary qcow2 file:
<domainsnapshot>
<disks>
<disk name='/dev/zstore/test-volume'>
<source file='/tmp/test-volume.qcow2'/>
</disk>
</disks>
</domainsnapshot>
2. Create snapshot of backing ZFS volume and send it to destination host.
3. Migrate the domain
Currently, in step 3 I need to create an empty qcow2 snapshot file on the
destination host, otherwise the migration fails with: "Operation not
supported: pre-creation of storage targets for incremental storage
migration is not supported"
My question is: is it possible to do live migration with a blockcommit
operation? If not, would it be hard to implement?
I imagine starting the migration with some special parameter (e.g.
VIR_MIGRATE_NON_SHARED_INC_COMMIT) which would only migrate the data from
the qcow2 snapshot to the destination storage.
This would ensure the disk consistency and avoid useless whole disk copy.
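To make the flow concrete, with today's tools it would look roughly like
this on the source host (host, snapshot and size values are placeholders;
backing-file options for the pre-created overlay are omitted):
virsh snapshot-create --xmlfile snap.xml --disk-only --no-metadata test-domain
zfs snapshot zstore/test-volume@migrate
zfs send -i zstore/test-volume@base zstore/test-volume@migrate | ssh dest-host zfs recv zstore/test-volume
ssh dest-host qemu-img create -f qcow2 /tmp/test-volume.qcow2 20G
virsh migrate --live --copy-storage-inc test-domain qemu+ssh://dest-host/system
The missing piece is a final step that commits /tmp/test-volume.qcow2
back into the ZFS volume on the destination, which is what the
blockcommit-style flag above would cover.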
Or do you have any other idea how to solve this?
BR.
Daniel Kucera.
7 years, 8 months
[libvirt-users] Data format
by Anastasiya Ruzhanskaya
What exactly is the format of the data being sent across a remote connection
(from client to server with the RPC protocol)? I see there is XML, but
converted to a string.
7 years, 8 months
[libvirt-users] Tunnelled migrate Windows7 VMs halted
by Eric Blake
[moderator note: I'm forwarding a stripped down version of the original
mail which was rejected in the moderator queue. I stripped the 3.3
megabyte .tar.bz2 of the log file attachment, which is inappropriate for
a technical list. Either trim the log to the relevant portion, or host
the log externally and have your list email merely give a URL of the
externally-hosted file]
> ForwardedMessage.eml
>
> Subject:
> Tunnelled migrate Windows7 VMs halted
> From:
> 邓林文 <dlworld0(a)163.com>
> Date:
> 04/25/2017 11:12 PM
>
> To:
> libvirt-users(a)redhat.com
>
>
>
> I migrated a Windows 7 VM with libvirt tunnelled migration; the VM halted on the target although its status shows running.
>
>
> [root@test15 ~]# virsh migrate --live --p2p --tunnelled i-000000ac qemu+tcp://192.168.65.13/system
>
>
> But when migrated with qemu native mode, the VM runs well.
>
>
> [root@test15 ~]# virsh migrate --live --p2p i-000000ac qemu+tcp://192.168.65.13/system
>
>
> System Info:
> Release: Centos 7.2
> Kernel: 3.10.0-327.28.3.el7.x86_64
> Qemu: qemu-kvm-rhev-2.3.0
> Libvirt: libvirt-1.2.17/libvirt-2.0.0
> CPU: AMD Opteron(TM) Processor 6212
>
>
> As CPU frequency changes may cause a Windows guest to halt during migration, I have disabled Power Management.
>
>
> Does anyone have any suggestions?
>
>
> Thanks,
> Linwen Deng
>
>
> vm.xml
>
>
> <domain type='kvm' id='8'>
> <name>i-000000ac</name>
> <uuid>53f0710f-b25e-4f47-a7cf-c15a9409fdc3</uuid>
> <metadata>
> <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.0">
> <nova:package version="12.0.0-4"/>
> <nova:name>c7-vm15-test15-7</nova:name>
> <nova:creationTime>2017-04-14 06:48:54</nova:creationTime>
> <nova:flavor name="oeflavor-4-2048-20">
> <nova:memory>2048</nova:memory>
> <nova:disk>20</nova:disk>
> <nova:swap>0</nova:swap>
> <nova:ephemeral>0</nova:ephemeral>
> <nova:vcpus>4</nova:vcpus>
> </nova:flavor>
> <nova:owner>
> <nova:user uuid="b0f21b1d16b147ffb3f7713716cc894a">admin</nova:user>
> <nova:project uuid="ae883f160c3c41db850d5cde8de8208b">service</nova:project>
> </nova:owner>
> </nova:instance>
> </metadata>
> <memory unit='KiB'>2097152</memory>
> <currentMemory unit='KiB'>2097152</currentMemory>
> <memoryBacking>
> <hugepages>
> <page size='2048' unit='KiB' nodeset='0'/>
> </hugepages>
> </memoryBacking>
> <vcpu placement='static'>2</vcpu>
> <cputune>
> <shares>4096</shares>
> <vcpupin vcpu='0' cpuset='4-7'/>
> <vcpupin vcpu='1' cpuset='4-7'/>
> <emulatorpin cpuset='4-7'/>
> </cputune>
> <resource>
> <partition>/machine</partition>
> </resource>
> <sysinfo type='smbios'>
> <system>
> <entry name='manufacturer'>Fedora Project</entry>
> <entry name='product'>OpenStack Nova</entry>
> <entry name='version'>12.0.0-4</entry>
> <entry name='serial'>95637afe-453f-42a8-b198-df673ab59c91</entry>
> <entry name='uuid'>53f0710f-b25e-4f47-a7cf-c15a9409fdc3</entry>
> <entry name='family'>Virtual Machine</entry>
> </system>
> </sysinfo>
> <os>
> <type arch='x86_64' machine='pc-i440fx-rhel7.2.0'>hvm</type>
> <boot dev='cdrom'/>
> <boot dev='hd'/>
> <boot dev='fd'/>
> <smbios mode='sysinfo'/>
> </os>
> <features>
> <acpi/>
> <apic/>
> </features>
> <cpu mode='host-model'>
> <model fallback='allow'/>
> <topology sockets='2' cores='1' threads='1'/>
> </cpu>
> <clock offset='utc'>
> <timer name='pit' tickpolicy='delay'/>
> <timer name='rtc' tickpolicy='catchup'/>
> <timer name='hpet' present='no'/>
> </clock>
> <on_poweroff>destroy</on_poweroff>
> <on_reboot>restart</on_reboot>
> <on_crash>destroy</on_crash>
> <devices>
> <emulator>/usr/libexec/qemu-kvm</emulator>
> <disk type='file' device='disk'>
> <driver name='qemu' type='qcow2' cache='none'/>
> <source file='/opt/ssd/win7.qcow2'/>
> <backingStore type='file' index='1'>
> <format type='raw'/>
> <source file='/opt/ssd/_base/win7.qcow2'/>
> <backingStore/>
> </backingStore>
> <target dev='vda' bus='virtio'/>
> <serial>8164a82d-6066-46d7-8a92-8391620fc58c</serial>
> <alias name='virtio-disk0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
> </disk>
> <controller type='usb' index='0' model='ich9-ehci1'>
> <alias name='usb'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x7'/>
> </controller>
> <controller type='usb' index='0' model='ich9-uhci1'>
> <alias name='usb'/>
> <master startport='0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0' multifunction='on'/>
> </controller>
> <controller type='usb' index='0' model='ich9-uhci2'>
> <alias name='usb'/>
> <master startport='2'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x1'/>
> </controller>
> <controller type='usb' index='0' model='ich9-uhci3'>
> <alias name='usb'/>
> <master startport='4'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x2'/>
> </controller>
> <controller type='pci' index='0' model='pci-root'>
> <alias name='pci.0'/>
> </controller>
> <controller type='virtio-serial' index='0'>
> <alias name='virtio-serial0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
> </controller>
> <serial type='file'>
> <source path='/var/lib/nova/instances/53f0710f-b25e-4f47-a7cf-c15a9409fdc3/console.log'/>
> <target port='0'/>
> <alias name='serial0'/>
> </serial>
> <serial type='pty'>
> <source path='/dev/pts/2'/>
> <target port='1'/>
> <alias name='serial1'/>
> </serial>
> <console type='file'>
> <source path='/var/lib/nova/instances/53f0710f-b25e-4f47-a7cf-c15a9409fdc3/console.log'/>
> <target type='serial' port='0'/>
> <alias name='serial0'/>
> </console>
> <channel type='spicevmc'>
> <target type='virtio' name='com.redhat.spice.0' state='connected'/>
> <alias name='channel0'/>
> <address type='virtio-serial' controller='0' bus='0' port='1'/>
> </channel>
> <input type='tablet' bus='usb'>
> <alias name='input0'/>
> </input>
> <input type='mouse' bus='ps2'>
> <alias name='input1'/>
> </input>
> <input type='keyboard' bus='ps2'>
> <alias name='input2'/>
> </input>
> <graphics type='vnc' port='5900' autoport='yes' listen='0.0.0.0' keymap='en-us'>
> <listen type='address' address='0.0.0.0'/>
> </graphics>
> <graphics type='spice' port='5901' autoport='yes' listen='0.0.0.0' keymap='en-us'>
> <listen type='address' address='0.0.0.0'/>
> <image compression='auto_glz'/>
> <streaming mode='filter'/>
> <mouse mode='client'/>
> </graphics>
> <sound model='ich6'>
> <alias name='sound0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
> </sound>
> <video>
> <model type='qxl' ram='65536' vram='65536' vgamem='16384' primary='yes'/>
> <alias name='video0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
> </video>
> <memballoon model='virtio'>
> <stats period='10'/>
> <alias name='balloon0'/>
> <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
> </memballoon>
> </devices>
> <seclabel type='none' model='none'/>
> <seclabel type='dynamic' model='dac' relabel='yes'>
> <label>+0:+107</label>
> <imagelabel>+0:+107</imagelabel>
> </seclabel>
> </domain>
>
--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3266
Virtualization: qemu.org | libvirt.org
7 years, 8 months