[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting), it always fails to take effect: the command reports success, but the interface is still present in the live XML. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior. Is this considered acceptable behavior?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (increasing the sleep before the detach to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
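For what it is worth, the workaround I am considering is to poll the live XML instead of relying on a fixed sleep (rough, untested sketch; the 30-second limit is arbitrary):

# keep retrying the detach until the interface is gone from the live XML,
# or give up after roughly 30 seconds
for i in $(seq 1 30); do
    virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || break
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    sleep 1
done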
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
host
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. (Since 1.2.9, QEMU only.) The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. (Since 1.2.13, QEMU only.)

guest
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. (Since 1.2.9, QEMU only.)
Then I disabled UFO on the vNIC of my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host-side settings, or does it always have to be disabled on both host and guest like this?
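For reference, the guest-only variant I have in mind is simply dropping the <host> line, roughly like this (other driver attributes omitted; I have not tested whether the host offloads then stay enabled):

<driver name='vhost' queues='5'>
  <!-- host element left out on purpose; only the guest side is changed -->
  <guest ufo='off'/>
</driver>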
Thanks,
Brs,
Natsu
[libvirt-users] Libvirt access control drivers
by Anastasiya Ruzhanskaya
Hello!
According to the documentation, the access control drivers are not in really "good condition". There is a polkit driver, but it can only distinguish users by the pid of the connecting process. However, I have come across some articles about more fine-grained control and about SELinux drivers for libvirt. So, what is the status now? Should I implement something myself if I want access control based on login? Are there instructions on how to write such drivers, or does something already exist?
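(For context, the only knob I am aware of so far is enabling the polkit driver in libvirtd.conf; please correct me if I am looking in the wrong place:)

# /etc/libvirt/libvirtd.conf
access_drivers = [ "polkit" ]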
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains. How can I use virt-install to manage my rbd disks?
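What I was hoping for is something along these lines (the volume name and size are just examples, and I have not checked whether my virt-install 1.0.1 already understands the vol= syntax):

# create a volume inside the rbd pool
virsh vol-create-as myrbdpool kvm01-storage 20G

# reference the existing volume from virt-install
virt-install --name kvm01 --ram 2048 --import \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio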
Kind regards,
Jelle de Jong
[libvirt-users] Virtio-net drivers immune to Nethammer?
by procmem
Hi, I'm a privacy distro maintainer investigating the implications of the newly published Nethammer attack [0] on KVM guests, particularly the virtio-net drivers. The summary of the paper is that rowhammer can be triggered remotely by feeding a susceptible* network driver crafted traffic. This attack can do all kinds of nasty things, such as modifying SSL certs on the victim system.
* Susceptible drivers are those relying on Intel CAT, uncached memory or
the clflush instruction.
My question is, do virtio-net drivers do any of these things?
***
[0] https://arxiv.org/abs/1805.04956
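My own first pass was going to be simply grepping the guest driver sources for those primitives (file paths are from a mainline kernel tree and may differ between versions):

grep -nE 'clflush|ioremap_nocache|set_memory_uc' \
    drivers/net/virtio_net.c drivers/virtio/*.c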
[libvirt-users] List-Archives ...
by thg
Hi everybody,
I actually wanted to search the list archive before asking, but unfortunately I can't get it to work:
$ gunzip 2018-May.txt.gz
gunzip: 2018-May.txt.gz: not in gzip format
It seems that every archive file contains one message in plain text and then a big binary block.
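The only further check I can think of is asking what the file really is, e.g.:

$ file 2018-May.txt.gz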
Any hint?
Thanks a lot,
--
kind regards,
thg
[libvirt-users] Make discard='unmap' the default?
by Ian Pilcher
Is it possible to make discard='unmap' the default for virtio-scsi disks? (Relatedly, is it possible to make virtio-scsi the default disk type, rather than virtio-blk?)
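(For context, the per-disk form I am trying to avoid repeating looks roughly like this; the file path is just an example:)

<disk type='file' device='disk'>
  <!-- assumes a virtio-scsi controller is already defined in the domain -->
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/example.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>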
Thanks!
--
========================================================================
Ian Pilcher arequipeno(a)gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate MAC addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate MAC addresses.
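To illustrate how easily that expression collides (made-up numbers, not libvirt code):

# two hosts that reboot in the same second and whose libvirtd gets the same
# PID obviously end up with the same seed, but even nearby values collide:
echo $(( 1000 ^ 21 ))   # 1021
echo $(( 1001 ^ 20 ))   # also 1021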
Why is the RNG seed such a predictable value? Surely there has to be a better source for a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of randomness. I just ran a test across 60 of our hosts: 43 of them shared their PID with at least one other machine.
[libvirt-users] Two Node Cluster
by Cobin Bluth
Hello Libvirt Users,
I would like to set up a two-node bare-metal cluster. I need some guidance on the network configuration. I have attached a small diagram; the same diagram can be seen here: https://i.imgur.com/SOk6a6G.png
I would like to configure the following details:
- Each node has a DHCP-enabled guest network where VMs will run (e.g. 192.168.1.0/24 for Host1 and 192.168.2.0/24 for Host2).
- Any guest on Host1 should be able to ping guests on Host2, and vice versa.
- All guests have routes to reach the open internet (so that 'yum update' will work out of the box).
- Each node will be able to operate fully if the other physical node fails (no central DHCP server, etc.).
- I would like to add more physical nodes later when I need the resources.
This is what I have done so far:
- Installed the latest Ubuntu 18.04, with the latest version of libvirt and supporting software from Ubuntu's apt repo.
- Each node can reach the other via its own eth0.
- Each node has a working vxlan0 and can ping the other node via its vxlan0, so the vxlan config appears to be working. (I used "ip link add vxlan0 type vxlan...")
- Configured a route on Host1 like so: "ip route add 192.168.2.0/24 via 172.20.0.1"
- Configured a route on Host2 as well: "ip route add 192.168.1.0/24 via 172.20.0.2"
- All guests on Host1 (and Host2) can ping eth0 and vxlan0 on the other host, and vice versa, yay.
- Guests on Host1 cannot ping guests on Host2, I suspect because of the default NAT config of the libvirt network.
So, at this point I started to search for tutorials and more information/documentation, but I am a little overwhelmed by the sheer amount of information, as well as by a lot of "stale" information on blogs etc.
I have learned that I can "virsh net-edit default" and then change it to an "open" network: <forward mode='open'/>
After doing this, the guests cannot reach outside their own network, nor reach the internet, so I assume I would need to add some routes, or something else, to get the network functioning the way I want. There is also <forward mode="route"/>, but I don't fully understand the scenarios in which one would need an "open" or a "route" forward mode. I have also shied away from using openvswitch and have opted for ifupdown2.
(I have taken most of my inspiration from this blog post:
https://joejulian.name/post/how-to-configure-linux-vxlans-with-multiple-u...
)
Some questions that I have for the mailing list; any help would be greatly appreciated:
- Is my target configuration for a KVM cluster uncommon? Do you see drawbacks to this setup, or does it go against "typical convention"?
- Would my scenario be better suited to an "open" network or a "route" network?
- What would be the approach to completing this setup?