[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has
finished booting), the detach reports success but the interface is never
actually removed. I'm not sure whether there is an existing bug for this.
I have confirmed with someone that disks show similar behavior; is this
considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
(the interface is still present in the dumpxml even though the detach
reported success)
When I detach after the VM has booted (expanding the sleep time to 10), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
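A workaround sketch, assuming the guest runs qemu-guest-agent (the agent
only answers once the guest is far enough into boot, and the agent channel
must be configured in the domain XML):
virsh start rhel7.2
# poll the agent until the guest is actually up, then detach
until virsh qemu-agent-command rhel7.2 '{"execute":"guest-ping"}' >/dev/null 2>&1; do
    sleep 1
done
virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0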
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating the disk
manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
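Something along these lines is what I would expect to work on newer
virt-install/libvirt releases (a sketch, unverified on the 1.0.1/1.2.9
versions above; the volume name kvm01-storage mirrors the manual XML):
# create the volume in the pool, then reference it as vol=pool/volume
virsh vol-create-as myrbdpool kvm01-storage 10G
virt-install --name ceph-test.powercraft.nl --ram 2048 \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio --import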
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate MAC
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
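A toy rendition of that seed computation in shell arithmetic (bash; not
libvirt code, just to illustrate the collision risk): two hosts that run
this in the same second, with PIDs only ~100 apart, produce seeds that
differ in just a handful of low bits.
# same formula as time(NULL) ^ getpid()
echo $(( $(date +%s) ^ $$ ))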
[libvirt-users] net interface direct - no IP communication between guest & host
by lejeczek
hi everyone
I wonder why, when I attach an interface like this:
virsh # attach-interface --domain win10Ent --type direct
--source nm-team --config --persistent --model virtio
the host cannot ping the guest over IP and vice versa, yet the guest can
ping other nodes (outside of its host, connected via the physical net
through a switch).
Would you know?
I thought maybe routing on the host, so I did:
$ route add -host 192.168.2.222 dev nm-team
but to no avail.
I wonder if it's a lower-layer issue, ARP kernel bits?
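One workaround sketch (iproute2; the address 192.168.2.1/24 is
illustrative): give the host its own macvlan on the same parent device,
since macvtap in its default modes deliberately does not switch traffic
between host and guest. For this to help, the guest's direct interface
should use mode='bridge' as well.
ip link add macvlan0 link nm-team type macvlan mode bridge
ip addr add 192.168.2.1/24 dev macvlan0
ip link set macvlan0 up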
thanks, L.
[libvirt-users] libvirt/dnsmasq is not adhering to static DHCP assignments
by Dagmawi Biru
Given the following network configuration:
===========
<network>
  <name>osc_mgmt</name>
  <uuid>d93fe709-14ae-4a0e-8989-aeaa8c76c513</uuid>
  <forward mode='route'/>
  <bridge name='osc_mgmt' stp='on' delay='0'/>
  <mac address='52:54:00:3f:fe:10'/>
  <ip address='192.168.80.254' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.80.1' end='192.168.80.200'/>
      <host mac='52:54:00:2c:85:92' name='openstack-controller-00' ip='192.168.80.1'/>
      <host mac='52:54:00:e2:4b:25' name='openstack-database-00' ip='192.168.80.2'/>
      <host mac='52:54:00:50:91:04' name='openstack-keystone-00' ip='192.168.80.3'/>
      <host mac='52:54:00:fe:5b:36' name='openstack-rabbitmq-00' ip='192.168.80.7'/>
      <host mac='52:54:00:95:ca:bd' name='openstack-glance-00' ip='192.168.80.5'/>
    </dhcp>
  </ip>
</network>
When attempting to bring up the relevant interface in the virtual machine,
I get an incorrect IP address assigned, different from the one I statically
set up per the XML above. As you can see, the device with MAC
'52:54:00:e2:4b:25' should really be getting 192.168.80.2, but what happens
when we bring this interface up is this:
===========
root@openstack-database-00:/home/osc# ifup ens11
Internet Systems Consortium DHCP Client 4.3.3
Copyright 2004-2015 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/
Listening on LPF/ens11/...
Sending on LPF/ens11/...
Sending on Socket/fallback
DHCPDISCOVER on ens11 to 255.255.255.255 port 67 interval 3 (xid=0x6769e42a)
DHCPREQUEST of 192.168.80.27 on ens11 to 255.255.255.255 port 67
(xid=0x2ae46967)
DHCPOFFER of 192.168.80.27 from 192.168.80.254
DHCPACK of 192.168.80.27 from 192.168.80.254
bound to 192.168.80.27 -- renewal in 1407 seconds.
Some additional info about the VM and network it's attached to
===========
[root@dragon dnsmasq]# virsh domiflist openstack-database-00
Interface Type Source Model MAC
-------------------------------------------------------
vnet12 bridge br20 virtio 52:54:00:6c:ce:b9
vnet13 network VM_MGMT rtl8139 52:54:00:7d:ca:87
vnet14 network osc_mgmt rtl8139 52:54:00:e2:4b:25
[root@dragon dnsmasq]# virsh net-info osc_mgmt
Name: osc_mgmt
UUID: d93fe709-14ae-4a0e-8989-aeaa8c76c513
Active: yes
Persistent: yes
Autostart: yes
Bridge: osc_mgmt
===========
What's strange is that the first VM seems to work correctly and gets an
assigned address of 192.168.80.1, but for some reason the others don't. Any
ideas?
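Two things that might be worth checking (a sketch; paths assume a typical
libvirt install): the dnsmasq hostsfile libvirt generates for the network,
and whether the network was restarted after the static entries were added,
since net-edit changes only take effect on restart.
cat /var/lib/libvirt/dnsmasq/osc_mgmt.hostsfile
virsh net-destroy osc_mgmt && virsh net-start osc_mgmt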
[libvirt-users] guest state "in shutdown"
by Lying
Hello, I just encountered a problem:
my virtual machine did not shut down normally,
and I get the state "in shutdown" when I check it using "virsh list --all".
Why do I get this error state, and how should I solve it?
Re: [libvirt-users] Need to increase the rx and tx buffer size of my interface
by Ashish Kurian
Hi Yalan,
In the previous email you mentioned "tx_queue_size='512' will not work in
the guest with direct type interface, in fact, no matter what you set, it
will not work and guest will get the default '256'. "
So if I am using macvtap for my interfaces, then the device type will
always be direct. Does this mean that there is no way I can increase the
buffer size with macvtap interfaces?
Best Regards,
Ashish Kurian
On Thu, Oct 26, 2017 at 9:04 AM, Ashish Kurian <ashishbnv(a)gmail.com> wrote:
> Hi Yalan,
>
> Thank you for your comment on qemu-kvm-rhev
>
> I am waiting for a response about my previous email with the logs
> attached. I do not understand what the problem is.
>
>
> On Oct 26, 2017 8:58 AM, "Yalan Zhang" <yalzhang(a)redhat.com> wrote:
>
> Hi Ashish,
>
> Please never mind about qemu-kvm-rhev;
> qemu with code base 2.10.0 will support tx_queue_size and
> rx_queue_size.
>
> Thank you~
>
>
>
>
>
> -------
> Best Regards,
> Yalan Zhang
> IRC: yalzhang
> Internal phone: 8389413
>
> On Thu, Oct 26, 2017 at 2:22 PM, Yalan Zhang <yalzhang(a)redhat.com> wrote:
>
>> Hi Ashish,
>>
>> Are these packages available for free? How can I install them?
>> => You do have the vhost backend driver. Do not set <driver name='qemu'...>;
>> by default it will use vhost as the backend driver.
>>
>> Is it possible to have my interfaces with an IP address inside the VM to
>> be bridged to the physical interfaces on the host?
>> => Yes, you can create a Linux bridge with the physical interface
>> connected, and use a bridge type interface. Refer to
>> https://libvirt.org/formatdomain.html#elementsNICSBridge
>> direct type is also ok (but your host and guest have no access to each
>> other).
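>> For example (a sketch; it assumes a host bridge named br0 that already
>> enslaves the physical NIC):
>> <interface type='bridge'>
>>   <source bridge='br0'/>
>>   <model type='virtio'/>
>> </interface>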
>>
>> Is it also a possibility that I change the rx and tx buffer on the
>> physical interface on the host and it is reflected automatically inside the
>> VM as you said it will always receive the default value of the host?
>> => No, it does not receive the default value of the host. The default
>> value is related to the virtual device driver in the guest.
>> A hostdev type interface will pass through the physical interface or a VF
>> of the host to the guest, so it will get the device's parameters for the
>> rx and tx buffers.
>>
>>
>>
>> -------
>> Best Regards,
>> Yalan Zhang
>> IRC: yalzhang
>> Internal phone: 8389413
>>
>> On Thu, Oct 26, 2017 at 1:30 PM, Ashish Kurian <ashishbnv(a)gmail.com>
>> wrote:
>>
>>> Hi Yalan,
>>>
>>> Thank you for your response. I do not have the following packages
>>> installed
>>>
>>> vhost backend driver
>>> qemu-kvm-rhev package
>>>
>>> Are these packages available for free? How can I install them?
>>>
>>> In my KVM VM, I must have an IP address on the interfaces whose buffers
>>> I am trying to increase. That is the reason I was using macvtap
>>> (direct type interface). Is it possible to have my interfaces with an IP
>>> address inside the VM be bridged to the physical interfaces on the host?
>>>
>>> Is it also a possibility that I change the rx and tx buffer on the
>>> physical interface on the host and it is reflected automatically inside the
>>> VM as you said it will always receive the default value of the host?
>>>
>>>
>>> Best Regards,
>>> Ashish Kurian
>>>
>>> On Thu, Oct 26, 2017 at 6:45 AM, Yalan Zhang <yalzhang(a)redhat.com>
>>> wrote:
>>>
>>>> Hi Ashish,
>>>>
>>>> I have tested with your xml from the first mail, and it works for
>>>> rx_queue_size (see below).
>>>> Multiqueue needs the vhost backend driver, and when you set
>>>> "queues=1" it will be ignored.
>>>>
>>>> Please check your qemu-kvm-rhev package; it should be newer than
>>>> qemu-kvm-rhev-2.9.0-16.el7_4.2.
>>>> And the logs?
>>>>
>>>> tx_queue_size='512' will not work in the guest with a direct type
>>>> interface; in fact, no matter what you set, it will not work and the
>>>> guest will get the default '256'.
>>>> Only the vhost-user backend supports more than 256; refer to
>>>> https://libvirt.org/formatdomain.html#elementsNICSEthernet
>>>>
>>>> tx_queue_size
>>>> The optional tx_queue_size attribute controls the size of virtio ring
>>>> for each queue as described above. The default value is hypervisor
>>>> dependent and may change across its releases. Moreover, some hypervisors
>>>> may pose some restrictions on actual value. For instance, QEMU v2.9
>>>> requires value to be a power of two from [256, 1024] range. In addition to
>>>> that, this may work only for a subset of interface types, e.g.
>>>> aforementioned QEMU enables this option only for vhostuser type. Since
>>>> 3.7.0 (QEMU and KVM only)
>>>> Multiqueue only supports vhost as the backend driver.
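>>>> For illustration only (a sketch; the socket path and MAC address here
>>>> are made up), a vhost-user interface where queue sizes above 256 are
>>>> honored would look something like:
>>>> <interface type='vhostuser'>
>>>>   <mac address='52:54:00:3b:83:1a'/>
>>>>   <source type='unix' path='/tmp/vhost-user1.sock' mode='server'/>
>>>>   <model type='virtio'/>
>>>>   <driver rx_queue_size='1024' tx_queue_size='1024'/>
>>>> </interface>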
>>>>
>>>> # rpm -q libvirt qemu-kvm-rhev
>>>> libvirt-3.2.0-14.el7_4.3.x86_64
>>>> qemu-kvm-rhev-2.9.0-16.el7_4.9.x86_64
>>>>
>>>> 1. the xml as below
>>>> <interface type='direct'>
>>>> <mac address='52:54:00:00:b5:99'/>
>>>> <source dev='eno1' mode='vepa'/>
>>>> <model type='virtio'/>
>>>> <driver name='vhost' queues='5' rx_queue_size='512'
>>>> tx_queue_size='512'>
>>>> <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
>>>> ufo='off' mrg_rxbuf='off'/>
>>>> <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>> </driver>
>>>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
>>>> function='0x0'/>
>>>> </interface>
>>>>
>>>> 2. after start the vm, check the qemu command line:
>>>> *-netdev tap,fds=26:28:29:30:31,id=hostnet0,vhost=on,vhostfds=32:33:34:35:36*
>>>> -device virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,
>>>> host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest_tso4=off,
>>>> guest_tso6=off,guest_ecn=off,guest_ufo=off,
>>>> *mq=on,vectors=12,rx_queue_size=512,tx_queue_size=512*,
>>>> netdev=hostnet0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3
>>>>
>>>> 3. check on guest
>>>> # ethtool -g eth0
>>>> Ring parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: *512 ==> rx_queue_size works*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: *256 ===> no change*
>>>> Current hardware settings:
>>>> RX: *512 **==> rx_queue_size works*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: *256 ===> no change*
>>>>
>>>> # ethtool -l eth0
>>>> Channel parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: *5 ==> queues what we set*
>>>> Current hardware settings:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: 1
>>>>
>>>>
>>>> If changed to qemu as the driver:
>>>> # virsh edit rhel7
>>>> ..
>>>> <interface type='direct'>
>>>> <mac address='52:54:00:00:b5:99'/>
>>>> <source dev='eno1' mode='vepa'/>
>>>> <model type='virtio'/>
>>>> <driver name='qemu' queues='5' rx_queue_size='512'
>>>> tx_queue_size='512'>
>>>> <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
>>>> ufo='off' mrg_rxbuf='off'/>
>>>> <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>> </driver>
>>>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
>>>> function='0x0'/>
>>>> </interface>
>>>> ..
>>>> Domain rhel7 XML configuration edited. ==> the XML validates and saves
>>>>
>>>> # virsh start rhel7
>>>> Domain rhel7 started
>>>>
>>>>
>>>> # virsh dumpxml rhel7 | grep /interface -B9
>>>> <source dev='eno1' mode='vepa'/>
>>>> <target dev='macvtap0'/>
>>>> <model type='virtio'/>
>>>> *<driver name='qemu' queues='5' rx_queue_size='512'
>>>> tx_queue_size='512'>*
>>>> <host csum='off' gso='off' tso4='off' tso6='off' ecn='off'
>>>> ufo='off' mrg_rxbuf='off'/>
>>>> <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>> </driver>
>>>> <alias name='net0'/>
>>>> <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
>>>> function='0x0'/>
>>>> </interface>
>>>>
>>>>
>>>> * -netdev tap,fds=26:28:29:30:31*,id=hostnet0 -device
>>>> virtio-net-pci,csum=off,gso=off,host_tso4=off,host_tso6=off,
>>>> host_ecn=off,host_ufo=off,mrg_rxbuf=off,guest_csum=off,guest_tso4=off,
>>>> guest_tso6=off,guest_ecn=off,guest_ufo=off,
>>>> *rx_queue_size=512,tx_queue_size=512*,
>>>> netdev=hostnet0,id=net0,mac=52:54:00:00:b5:99,bus=pci.0,addr=0x3
>>>>
>>>> *"mq=on,vectors=12" is missing*, indicates there is no multiqueue
>>>>
>>>> and check on guest
>>>>
>>>> # ethtool -l eth0
>>>> Channel parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: 1 ==> no multiqueue
>>>> Current hardware settings:
>>>> RX: 0
>>>> TX: 0
>>>> Other: 0
>>>> Combined: 1
>>>>
>>>> # ethtool -g eth0
>>>> Ring parameters for eth0:
>>>> Pre-set maximums:
>>>> RX: *512*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 256
>>>> Current hardware settings:
>>>> RX: *512*
>>>> RX Mini: 0
>>>> RX Jumbo: 0
>>>> TX: 256
>>>>
>>>>
>>>>
>>>>
>>>> -------
>>>> Best Regards,
>>>> Yalan Zhang
>>>> IRC: yalzhang
>>>> Internal phone: 8389413
>>>>
>>>> On Thu, Oct 26, 2017 at 2:33 AM, Ashish Kurian <ashishbnv(a)gmail.com>
>>>> wrote:
>>>>
>>>>> Hi Michal,
>>>>>
>>>>> An update to what I have already said: when I tried adding <driver
>>>>> name='qemu' txmode='iothread' ioeventfd='on' event_idx='off' queues='1'
>>>>> rx_queue_size='512' tx_queue_size='512'>, although it showed me the error
>>>>> as mentioned, when I checked the XML again I saw that <driver
>>>>> name='qemu' txmode='iothread' ioeventfd='on' event_idx='off'> had been
>>>>> added to the interface.
>>>>>
>>>>> The missing parameters are: queues='1' rx_queue_size='512'
>>>>> tx_queue_size='512'
>>>>>
>>>>> Best Regards,
>>>>> Ashish Kurian
>>>>>
>>>>> On Wed, Oct 25, 2017 at 5:07 PM, Ashish Kurian <ashishbnv(a)gmail.com>
>>>>> wrote:
>>>>>
>>>>>> Hi Michal,
>>>>>>
>>>>>> What I found was that when I restarted the machine and did a virsh
>>>>>> edit command to see the xml config, I saw that it was not actually
>>>>>> changed. This explains why I saw 256 again after restarting.
>>>>>>
>>>>>> So I tried again to edit the XML via the virsh edit command and used
>>>>>> the following to set the parameters:
>>>>>>
>>>>>> <driver name='qemu' txmode='iothread' ioeventfd='on' event_idx='off'
>>>>>> queues='1' rx_queue_size='512' tx_queue_size='512'>
>>>>>> </driver>
>>>>>>
>>>>>> It was not accepted and I got an error saying:
>>>>>>
>>>>>>
>>>>>> error: XML document failed to validate against schema: Unable to
>>>>>> validate doc against /usr/share/libvirt/schemas/domain.rng
>>>>>> Extra element devices in interleave
>>>>>> Element domain failed to validate content
>>>>>>
>>>>>> What does this imply? I have two other interfaces; do I have to do
>>>>>> the same to them also?
>>>>>>
>>>>>> Btw, there are no logs generated now in either the domain log or the
>>>>>> libvirtd log.
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> Best Regards,
>>>>>> Ashish Kurian
>>>>>>
>>>>>> On Wed, Oct 25, 2017 at 2:50 PM, Michal Privoznik <
>>>>>> mprivozn(a)redhat.com> wrote:
>>>>>>
>>>>>>> On 10/25/2017 01:53 PM, Ashish Kurian wrote:
>>>>>>> > Dear Users/Developers,
>>>>>>> >
>>>>>>> > I am using a KVM Ubuntu VM as a degrader to apply specific delays to
>>>>>>> > incoming packets. As the delay for my packets can be higher than 7.5
>>>>>>> > seconds, there is not enough buffer on my interface to buffer all the
>>>>>>> > packets. Therefore those overflowing packets are dropped in the
>>>>>>> > machine and not forwarded.
>>>>>>> >
>>>>>>> > When I tried to use the command ethtool -G ens8 rx 512 to increase
>>>>>>> > the buffer size, I get the following error:
>>>>>>> >
>>>>>>> > Cannot set device ring parameters: Operation not permitted
>>>>>>> >
>>>>>>> > I have kept the VM xml files as specified in the link:
>>>>>>> > https://libvirt.org/formatdomain.html. The value that I kept in my
>>>>>>> > xml file is as follows.
>>>>>>> >
>>>>>>> > <interface type='direct'>
>>>>>>> >   <mac address='52:54:00:72:f9:eb'/>
>>>>>>> >   <source dev='enp7s0f0' mode='vepa'/>
>>>>>>> >   <model type='virtio'/>
>>>>>>> >   <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'>
>>>>>>> >     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>>>> >     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>>>> >   </driver>
>>>>>>> >   <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
>>>>>>> > </interface>
>>>>>>> > <interface type='direct'>
>>>>>>> >   <mac address='52:54:00:00:b5:99'/>
>>>>>>> >   <source dev='enp7s0f1' mode='vepa'/>
>>>>>>> >   <model type='virtio'/>
>>>>>>> >   <driver name='vhost' queues='5' rx_queue_size='512' tx_queue_size='512'>
>>>>>>> >     <host csum='off' gso='off' tso4='off' tso6='off' ecn='off' ufo='off' mrg_rxbuf='off'/>
>>>>>>> >     <guest csum='off' tso4='off' tso6='off' ecn='off' ufo='off'/>
>>>>>>> >   </driver>
>>>>>>> >   <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
>>>>>>>
>>>>>>> So what does the qemu command line look like? You can find it in
>>>>>>> either
>>>>>>> libvirtd log or domain log.
>>>>>>>
>>>>>>> http://wiki.libvirt.org/page/DebugLogs
>>>>>>>
>>>>>>> Michal
[libvirt-users] question about how to set rng device on vm
by Yalan Zhang
Hi Amos,
I'm a libvirt QE, and I cannot understand the settings on libvirt.org for
the rng device.
Could you please help explain a little?
(The xml in https://libvirt.org/formatdomain.html#elementsRng)
<devices>
  <rng model='virtio'>
    <rate period="2000" bytes="1234"/>
    <backend model='random'>/dev/random</backend>
    <!-- OR -->
    <backend model='egd' type='udp'>
      *<source mode='bind' service='1234'/>*
      *<source mode='connect' host='1.2.3.4' service='1234'/>*
    </backend>
  </rng>
</devices>
How does it work with source mode='bind' and source mode='connect' together?
Which process on the guest or host acts as the server part, and which as
the client part?
One detailed example:
start a VM with the device below, and no egd running on the host:
<rng model='virtio'>
  <backend model='egd' type='udp'>
    <source mode='bind' service='1234'/>
    <source mode='connect' host='127.0.0.1' service='1234'/>
  </backend>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</rng>
qemu command line:
-chardev udp,id=charrng0,host=127.0.0.1,port=1234,localaddr=,localport=1234
-object rng-egd,id=objrng0,chardev=charrng0 -device
virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.0,addr=0x9
In my understanding, the purpose of the rng device on the guest is to provide
the guest a hardware RNG device /dev/hwrng which obtains seeds from the host.
The source can be /dev/random on the host, in which case the xml will be:
<rng model='virtio'>
  <backend model='random'>/dev/random</backend>
</rng>
or hardware on the host:
<rng model='virtio'>
  <backend model='random'>/dev/hwrng</backend>
</rng>
or an egd daemon running on the host:
<rng model='virtio'>
  <backend model='egd' type='tcp'>
    <source mode='connect' host='127.0.0.1' service='1234'/>
  </backend>
</rng>
(on the host, there should be an egd daemon listening on tcp 127.0.0.1:1234:
# egd.pl --debug-client --nofork localhost:1234)
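Inside the guest, one can check which hwrng backs /dev/hwrng (a sketch;
standard Linux sysfs paths, where a virtio rng typically shows up as
something like virtio_rng.0):
# cat /sys/class/misc/hw_random/rng_available
# cat /sys/class/misc/hw_random/rng_current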
Thank you very much, and I look forward to your response!
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
Re: [libvirt-users] terminating on signal 15 from pid 2146 (/usr/sbin/libvirtd)
by Matwey V. Kornilov
2017-10-20 17:14 GMT+03:00 Matwey V. Kornilov <matwey.kornilov(a)gmail.com>:
> 2017-10-20 15:16 GMT+03:00 Martin Kletzander <mkletzan(a)redhat.com>:
>> On Fri, Oct 20, 2017 at 03:07:19PM +0300, Matwey V. Kornilov wrote:
>>>
>>> 2017-10-20 14:59 GMT+03:00 Martin Kletzander <mkletzan(a)redhat.com>:
>>>>
>>>> On Thu, Oct 19, 2017 at 09:11:00PM +0300, Matwey V. Kornilov wrote:
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>> I use libvirt 3.3.0 and qemu 2.9.0
>>>>>
>>>>> My domain XML spec is the following:
>>>>>
>>>>> <domain type='qemu'>
>>>>>   <name>s390_generic</name>
>>>>>   <uuid>82b4d16e-b636-447e-9fda-41d44616bce8</uuid>
>>>>>   <memory unit='KiB'>1048576</memory>
>>>>>   <currentMemory unit='KiB'>1048576</currentMemory>
>>>>>   <vcpu placement='static'>1</vcpu>
>>>>>   <os>
>>>>>     <type arch='s390x' machine='s390-ccw-virtio-2.9'>hvm</type>
>>>>>     <boot dev='hd'/>
>>>>>   </os>
>>>>>   <clock offset='utc'/>
>>>>>   <on_poweroff>destroy</on_poweroff>
>>>>>   <on_reboot>restart</on_reboot>
>>>>>   <on_crash>destroy</on_crash>
>>>>>   <devices>
>>>>>     <emulator>/usr/bin/qemu-system-s390x</emulator>
>>>>>     <disk type='block' device='disk'>
>>>>>       <driver name='qemu' type='raw' cache='none' io='native'/>
>>>>>       <source dev='/dev/lvm_pda/libvirt_s390'/>
>>>>>       <target dev='vda' bus='virtio'/>
>>>>>       <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0000'/>
>>>>>     </disk>
>>>>>     <disk type='file' device='cdrom'>
>>>>>       <driver name='qemu' type='raw'/>
>>>>>       <target dev='sda' bus='scsi'/>
>>>>>       <readonly/>
>>>>>       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
>>>>>     </disk>
>>>>>     <controller type='scsi' index='0' model='virtio-scsi'>
>>>>>       <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0002'/>
>>>>>     </controller>
>>>>>     <interface type='bridge'>
>>>>>       <mac address='52:54:00:e8:61:7e'/>
>>>>>       <source bridge='br0'/>
>>>>>       <model type='virtio'/>
>>>>>       <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0001'/>
>>>>>     </interface>
>>>>>     <console type='pty'>
>>>>>       <target type='sclp' port='0'/>
>>>>>     </console>
>>>>>     <memballoon model='virtio'>
>>>>>       <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0003'/>
>>>>>     </memballoon>
>>>>>     <panic model='s390'/>
>>>>>   </devices>
>>>>> </domain>
>>>>>
>>>>> The issue is that when I try to start it, it starts and shuts down
>>>>> immediately:
>>>>>
>>>>> virsh # start s390_generic
>>>>> Domain s390_generic started
>>>>>
>>>>> virsh #
>>>>>
>>>>> In the domain log file I see the following:
>>>>>
>>>>> 2017-10-19 18:10:21.633+0000: starting up libvirt version: 3.3.0, qemu
>>>>> version: 2.9.0(openSUSE Leap 42.3), hostname: oak.local
>>>>> LC_ALL=C
>>>>> PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
>>>>> QEMU_AUDIO_DRV=none /usr/bin/qemu-system-s390x -name
>>>>> guest=s390_generic,debug-threads=on -S -object
>>>>> secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-7-s390_generic/master-key.aes
>>>>> -machine s390-ccw-virtio-2.9,accel=tcg,usb=off,dump-guest-core=off -m
>>>>> 1024 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid
>>>>> 82b4d16e-b636-447e-9fda-41d44616bce8 -display none -no-user-config
>>>>> -nodefaults -chardev
>>>>> socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-7-s390_generic/monitor.sock,server,nowait
>>>>> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
>>>>> -no-shutdown -boot strict=on -device
>>>>> virtio-scsi-ccw,id=scsi0,devno=fe.0.0002 -drive
>>>>> file=/dev/lvm_pda/libvirt_s390,format=raw,if=none,id=drive-virtio-disk0,cache=none,aio=native
>>>>> -device
>>>>> virtio-blk-ccw,scsi=off,devno=fe.0.0000,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
>>>>> -drive if=none,id=drive-scsi0-0-0-0,readonly=on -device
>>>>> scsi-cd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0
>>>>> -netdev tap,fd=26,id=hostnet0 -device
>>>>> virtio-net-ccw,netdev=hostnet0,id=net0,mac=52:54:00:e8:61:7e,devno=fe.0.0001
>>>>> -chardev pty,id=charconsole0 -device
>>>>> sclpconsole,chardev=charconsole0,id=console0 -device
>>>>> virtio-balloon-ccw,id=balloon0,devno=fe.0.0003 -msg timestamp=on
>>>>> 2017-10-19T18:10:21.701184Z qemu-system-s390x: -chardev
>>>>> pty,id=charconsole0: char device redirected to /dev/pts/5 (label
>>>>> charconsole0)
>>>>> 2017-10-19T18:10:21.721299Z qemu-system-s390x: terminating on signal 15
>>>>> from pid 2146 (/usr/sbin/libvirtd)
>>>>> 2017-10-19 18:10:21.985+0000: shutting down, reason=shutdown
>>>>>
>>>>
>>>> You don't have much logging enabled, so there's not that much info.
>>>> What's in the libvirtd.log? What is the status reason for the domain?
>>>> I.e. the output of `virsh domstate --reason`?
>>>
>>>
>>> How could I increase the log level? There is nothing in libvirtd.log.
>>>
>>
>> https://wiki.libvirt.org/page/DebugLogs
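>> (For reference, the settings that page recommends are along these lines,
>> placed in /etc/libvirt/libvirtd.conf and followed by a libvirtd restart;
>> treat the exact filter list as a sketch:)
>> log_filters="1:qemu 1:libvirt 4:object 4:json 4:event 1:util"
>> log_outputs="1:file:/var/log/libvirt/libvirtd.log"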
>
> Too much info for me...
>
> --
> With best regards,
> Matwey V. Kornilov
--
With best regards,
Matwey V. Kornilov