Updating authentication for a Ceph (RBD) disk in a live domain
by will.gorman@joyent.com
Is it possible to update the <auth/> for an RBD network disk while the domain the disk is attached to is running, without detaching/reattaching the disk? For example, if I have a disk attached like the following:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='someuser'>
    <secret type='ceph' usage='someuser key'/>
  </auth>
  <source protocol='rbd' name='somepool/someimage'>
    <host name='127.0.0.1' port='3300'/>
  </source>
  <target dev='sdd' bus='scsi'/>
  <alias name='scsi0-0-0-3'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>
If I want to change the auth to
<auth username='someotheruser'>
  <secret type='ceph' usage='someotheruser key'/>
</auth>
can I do that without either attaching/detaching the disk or stopping/restarting the domain?
I've tried `virsh update-device domain disk.xml --live --persistent` with XML identical to the current disk definition except for the auth. It reports "Device updated successfully", but when I check the domain with `dumpxml` I still see the original auth settings for the disk.
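For reference, the disk.xml I pass is essentially the definition above with only the <auth> block swapped (sketch, alias element omitted):

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='someotheruser'>
    <secret type='ceph' usage='someotheruser key'/>
  </auth>
  <source protocol='rbd' name='somepool/someimage'>
    <host name='127.0.0.1' port='3300'/>
  </source>
  <target dev='sdd' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='3'/>
</disk>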
4 months, 1 week
cap search for session libvirt
by daggs
Greetings,
I'm working on allowing a session VM to create a tap interface.
The VM has this definition:
<interface type='ethernet'>
  <mac address='52:54:00:a7:79:6b'/>
  <target dev='veth0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
</interface>
When I try to start the VM, I get this error: Unable to create tap device veth0: Operation not permitted
Searching the code led me to this line: https://github.com/libvirt/libvirt/blob/0caacf47d7b423db9126660fb0382ed56...
I've looked online and found out that I need the CAP_NET_ADMIN capability set. So I took the relevant code into a dedicated test file and, using pam_cap, I defined that capability
for the test file; all went well.
Then I went back to virsh and defined that capability for virsh as well, but I'm still getting the same issue, see: https://ibb.co/zHggRQZ
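Roughly, the pam_cap setup looks like this (a sketch; the username and PAM service file are illustrative):

# /etc/security/capability.conf
cap_net_admin   daggs

# pam_cap enabled in the PAM stack, e.g. in /etc/pam.d/common-auth
auth  optional  pam_cap.so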
The OS is Debian 12.
Any ideas why I'm still getting this error?
Thanks,
Dagg
4 months, 2 weeks
luks devices and libvirt
by Marc Haber
Hi,
this is an ongoing issue. I don't know whether I have ever addressed
it here before, but it's still annoying.
I am using Debian unstable, libvirt 10.5.0, virt-manager 4.1.0, qemu
9.0.2. I work through virt-manager; I rarely use virsh.
I regularly configure virtual disks that are located on a LUKS-encrypted
LVM volume. When unlocked, the block device appears as /dev/mapper/foo,
which is a symlink to a ../dm-xx node, with xx being an arbitrary number
and ../dm-xx being an ordinary block device node.
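For illustration, the unlock step looks roughly like this (volume group and LV names are made up):

# unlock the LUKS-encrypted LV; this creates the /dev/mapper/wintest mapping,
# which is a symlink to some ../dm-NN block device node
cryptsetup open /dev/vg0/lv_wintest wintest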
To facilitate this, I have defined a storage pool with this XML:
<pool type="dir">
  <name>mapper</name>
  <uuid></uuid>
  <capacity unit="bytes">24598757376</capacity>
  <allocation unit="bytes">0</allocation>
  <available unit="bytes">24598757376</available>
  <source>
  </source>
  <target>
    <path>/dev/mapper</path>
    <permissions>
      <mode>0755</mode>
      <owner>0</owner>
      <group>0</group>
    </permissions>
  </target>
</pool>
This is necessary as the storage type "LVM volume group" now insists on
a volume group name, and the DM mappings created by LUKS don't have a
volume group name.
When I add a disk to a VM from this storage pool, it generates the XML:
<disk type="file" device="disk">
<driver name="qemu" type="raw"/>
<source file="/dev/mapper/wintest"/>
<target dev="vda" bus="virtio"/>
<address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
</disk>
qemu won't start with these settings:
error: Failed to start domain 'win11test'
error: internal error: QEMU unexpectedly closed the monitor (vm='win11test'): 2024-07-28T15:20:25.250387Z qemu-system-x86_64: -blockdev {"driver":"file","filename":"/dev/mapper/wintest","node-name":"libvirt-1-storage","read-only":false}: 'file' driver requires '/dev/mapper/wintest' to be a regular file
Changing the XML to
<disk type="block" device="disk">
<driver name="qemu" type="raw"/>
<source dev="/dev/mapper/wintest"/>
<target dev="vda" bus="virtio"/>
<address type="pci" domain="0x0000" bus="0x05" slot="0x00" function="0x0"/>
</disk>
(note type="block" and <source dev="..."/>)
makes the VM work.
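I currently apply that change by hand, roughly:

virsh edit win11test
# then change the disk element to type="block" with <source dev="..."/> as above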
Can virt-manager somehow be coaxed into generating XML that works here?
If not, is this a virt-manager issue, or should qemu just accept
type="file" with <source file="..."/>?
Greetings
Marc
--
-----------------------------------------------------------------------------
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Leimen, Germany | lose things." Winona Ryder | Fon: *49 6224 1600402
Nordisch by Nature | How to make an American Quilt | Fax: *49 6224 1600421
4 months, 3 weeks
setting up a bridge for VM IP assignment by the router's DHCP server
by Germano Massullo
I am running a libvirt host machine (Fedora 40) which has the IP address
192.168.1.6, assigned by the router's DHCP server.
I want the libvirt VMs' IPs to be assigned by the router's DHCP server as
well, so I tried to set up a bridge via
# virsh net-define foo.xml
trying the following files as the network XML, but they all failed to
achieve this. Here are the two XML variants I tried:
1)
<network>
  <name>bridge-no-nat</name>
  <bridge name='virbr1_no-nat' stp='on' delay='0'/>
  <forward mode='open'/>
</network>
RETURNS:
open forwarding requested, but no IP address provided for network
'bridge-no-nat'
2)
I created the bridge virbr1_no_nat in nmtui, then used the following XML
with virsh net-define:
<network>
  <name>br1_no_nat</name>
  <forward mode='bridge'/>
  <bridge name='virbr1_no_nat'/>
  <virtualport type='openvswitch'/>
  <portgroup name='default'/>
</network>
Then I configured the VM's network interface to use it, but when I start the VM I get this error:
internal error: Unable to add port vnet1 to OVS bridge virbr1_no_nat: <null>
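For reference, the VM's interface is defined along these lines (a sketch; MAC and PCI address omitted):

<interface type='network'>
  <source network='br1_no_nat' portgroup='default'/>
  <model type='virtio'/>
</interface>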
Do you know how I can solve this?
Thank you
4 months, 3 weeks
KVM static internal networking without host bridge interface (virbr)
by Daniel
How do I set up an internal network between two KVM network interfaces
using static networking (avoiding dnsmasq) and without a host bridge
interface (virbr)?
Currently I am using this for the network.
<network>
  <name>Internal</name>
  <bridge name='virbr2' stp='on' delay='0'/>
</network>
And then for the VM.
<interface type='network'>
  <source network='Internal'/>
  <model type='virtio'/>
  <driver name='qemu'/>
</interface>
* I would like to avoid the host `virbr2` interface. This is because
ideally packet sniffers on the host such as tshark / wireshark would be
unable to see the packets flowing over the internal network between the
two VMs.
* SLIRP should be avoided due to past security issues. [1]
* dnsmasq on the host operating system or inside the VMs should also be
avoided in favor of static IP addresses.
By comparison, this is possible in VirtualBox. [2]
Is that possible with KVM too? Could you please show an example
configuration file that accomplishes this?
[1] CVE-2019-6778
[2] VirtualBox has this capability. VirtualBox can provide an internal
network using static networking. No virbr bridge interfaces can be seen
on the host operating system, and VM-to-VM internal traffic is not
visible to packet analyzers on the host operating system either.
Regards,
Daniel
--
Daniel Winzen
Steinkaulstr. 47
52070 Aachen
Germany
Web: https://danwin1210.de/
E-Mail: daniel(a)danwin1210.de
Phone: +49 176 98819809
PGP-Key: https://danwin1210.de/pgp.txt
4 months, 3 weeks