So hey let's talk about this nftables ordering situation.
by robinleepowell@gmail.com
So I, like many other people, have hit problems with nftables ordering, as has been discussed on this mailing list MANY TIMES.
This whole thing seemed ridiculous, so I asked the nftables people what one is *supposed* to do in this situation. It turns out that the standard solution is for libvirt's nftables rules to set a packet mark (there's a collision possibility here, but it's a 32-bit integer; if you pick one at random it shouldn't be a problem), and then the user adds a rule to exclude packets with that mark from any reject rules they might have, or explicitly accepts marked packets in their own chains, or whatever.
It's not *as nice* as the iptables situation, but having documentation that says "if you're using nftables make sure that packets with mark 79892 are accepted in all your chains" is quite straightforward compared to the current situation of "LOL good luck". (I'm not blaming anyone there! The current situation is impossible for libvirt to navigate, and it's not anyone's fault.)
If y'all don't like that, what's working excellently for me is adding `iifname "virbr*" accept` to my rule chain. FWIW.
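For concreteness, here are the two variants as they'd appear in your own ruleset (the chain/table they go in is whatever you already have; the mark number is just the example value from above):

  # accept everything arriving from libvirt bridges, in your own input chain
  iifname "virbr*" accept

  # or, if libvirt ever adopts the mark approach, accept marked packets instead
  meta mark 79892 accept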
It was very hard to navigate through this situation because there's no documentation that this problem even exists.
My suggestion is to describe the situation at https://libvirt.org/firewall.html and suggest the virbr* fix, and down the road maybe look at this mark thing.
I'd like to help. I'm happy to write up issues for this, and I'm happy to write the updates to the firewall docs; just tell me what you'd like me to do.
6 days, 22 hours
nftables rules (DNS, DHCP, etc) not being written on Fedora 41
by robinleepowell@gmail.com
I do not think this is any of the similar issues that have been
posted to this list; I've checked.
In particular, this is *NOT* an issue with nft rule precedence; the
rules are simply not being written by libvirt.
I'm running vagrant which is running libvirt on a Fedora 41 host. I
do not think this is a vagrant problem, but if people want me to run
virsh commands directly I certainly can.
Anyway, vagrant was failing until I noticed that the default firewall backend
in /etc/libvirt/network.conf is now nftables, which I did not have set up.
When I set `firewall_backend = "iptables"`, everything worked fine.
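For reference, the relevant line (the only thing I changed at this point) is:

  # /etc/libvirt/network.conf
  firewall_backend = "iptables"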
I want to emphasize that: the same vagrant / libvirt setup *was*
working with iptables.
But I took that as a sign that it was time to move to nftables, so I
moved everything on this host and stuff is back to working.
But libvirt just isn't writing out the right nft rules. Like, at
all.
Here's the network vagrant creates:
$ sudo virsh net-dumpxml vagrant-libvirt
<network connections='1' ipv6='yes'>
<name>vagrant-libvirt</name>
<uuid>b2d93ef4-b305-4382-a380-c1eca92d8ebd</uuid>
<forward mode='nat'>
<nat>
<port start='1024' end='65535'/>
</nat>
</forward>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='52:54:00:63:d3:f6'/>
<ip address='192.168.121.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.121.1' end='192.168.121.254'/>
</dhcp>
</ip>
</network>
And here's the *entire* libvirt-related ruleset in nftables:
table ip libvirt_network {
chain forward {
type filter hook forward priority filter; policy accept;
counter packets 0 bytes 0 jump guest_cross
counter packets 0 bytes 0 jump guest_input
counter packets 0 bytes 0 jump guest_output
}
chain guest_output {
ip saddr 192.168.121.0/24 iif "virbr1" counter packets 0 bytes 0 accept
iif "virbr1" counter packets 0 bytes 0 reject
}
chain guest_input {
oif "virbr1" ip daddr 192.168.121.0/24 ct state established,related counter packets 0 bytes 0 accept
oif "virbr1" counter packets 0 bytes 0 reject
}
chain guest_cross {
iif "virbr1" oif "virbr1" counter packets 0 bytes 0 accept
}
chain guest_nat {
type nat hook postrouting priority srcnat; policy accept;
ip saddr 192.168.121.0/24 ip daddr 224.0.0.0/24 counter packets 1 bytes 187 return
ip saddr 192.168.121.0/24 ip daddr 255.255.255.255 counter packets 0 bytes 0 return
meta l4proto tcp ip saddr 192.168.121.0/24 ip daddr != 192.168.121.0/24 counter packets 0 bytes 0 masquerade to :1024-65535
meta l4proto udp ip saddr 192.168.121.0/24 ip daddr != 192.168.121.0/24 counter packets 0 bytes 0 masquerade to :1024-65535
ip saddr 192.168.121.0/24 ip daddr != 192.168.121.0/24 counter packets 0 bytes 0 masquerade
}
}
table ip6 libvirt_network {
chain forward {
type filter hook forward priority filter; policy accept;
counter packets 0 bytes 0 jump guest_cross
counter packets 0 bytes 0 jump guest_input
counter packets 0 bytes 0 jump guest_output
}
chain guest_output {
iif "virbr1" counter packets 0 bytes 0 reject
}
chain guest_input {
oif "virbr1" counter packets 0 bytes 0 reject
}
chain guest_cross {
iif "virbr1" oif "virbr1" counter packets 0 bytes 0 accept
}
chain guest_nat {
type nat hook postrouting priority srcnat; policy accept;
}
}
Here's some log output (I have virtnetworkd running with -v):
$ sudo journalctl -u virtnetworkd.service | grep -i nft
[snip]
Feb 20 21:36:42 stodi.digitalkingdom.org virtnetworkd[89119]: using firewall_backend: 'nftables'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft list table ip libvirt_network'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft list table ip6 libvirt_network'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_output iif virbr1 counter reject'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_input oif virbr1 counter reject'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_cross iif virbr1 oif virbr1 counter accept'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip6 libvirt_network guest_output iif virbr1 counter reject'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip6 libvirt_network guest_input oif virbr1 counter reject'
Feb 20 21:36:46 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip6 libvirt_network guest_cross iif virbr1 oif virbr1 counter accept'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_output ip saddr 192.168.121.0/24 iif virbr1 counter accept'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_input oif virbr1 ip daddr 192.168.121.0/24 ct state related,established counter accept'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_nat ip saddr 192.168.121.0/24 ip daddr '!=' 192.168.121.0/24 counter masquerade'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_nat meta l4proto udp ip saddr 192.168.121.0/24 ip daddr '!=' 192.168.121.0/24 counter masquerade to :1024-65535'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_nat meta l4proto tcp ip saddr 192.168.121.0/24 ip daddr '!=' 192.168.121.0/24 counter masquerade to :1024-65535'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_nat ip saddr 192.168.121.0/24 ip daddr 255.255.255.255/32 counter return'
Feb 20 21:36:47 stodi.digitalkingdom.org virtnetworkd[89119]: Applying 'nft -ae insert rule ip libvirt_network guest_nat ip saddr 192.168.121.0/24 ip daddr 224.0.0.0/24 counter return'
So it's not that libvirt is trying to create the rules and failing, as far as
I can see; it just isn't trying.
To clarify: this setup appears to be missing every rule that would be
needed for DHCPv4 and DNS to work.
For example, https://gitlab.com/libvirt/libvirt/-/issues/88#note_694493261
shows rules for udp ports 67 and 53 that are simply not being created
at all.
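To illustrate, what I'd expect to see is something roughly like this (table and chain names invented purely for illustration; it's just the nft analogue of the usual iptables LIBVIRT_INP rules, not necessarily what libvirt itself would generate):

table inet guest_services_example {
  chain input {
    type filter hook input priority filter; policy accept;
    iifname "virbr1" udp dport { 67, 53 } counter accept
    iifname "virbr1" tcp dport 53 counter accept
  }
}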
I have no idea what's going wrong here or even where to look; the
code at
https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/GT...
sure *looks like* it should be unconditionally adding those rules.
Help?
1 week, 3 days
virt-install iscsi direct - Target not found
by kgore4 une
When I try to use an iscsi-direct pool in a "--disk" clause for
virt-install, I get the error "iSCSI: Failed to connect to LUN : Failed to log
in to target. Status: Target not found(515)". I've seen that sort of error
before when the initiator name isn't used. The SAN returns different LUNs
depending on the initiator.
I've run out of ideas on what to try next. Any advice welcome. I've
included what I thought was relevant below.
klint.
The disk parameter to virt-install is (it's part of a script but the
variables are correct when executed)
[code]
--disk
vol=${poolName}/unit:0:0:${vLun1},xpath.set="./source/initiator/iqn/@name='iqn.2024-11.localdomain.agbu.agbuvh1:${vName}'"
\
[/code]
I added the xpath.set because I noticed that the initiator wasn't in the debug
output of the disk definition; it didn't work without it either.
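For reference, with the variables expanded the way they appear in the debug
output below (poolName=agbu-ldap1, vLun1=3, vName=agbu-ldap), that clause
comes out as:
[code]
--disk
vol=agbu-ldap1/unit:0:0:3,xpath.set="./source/initiator/iqn/@name='iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap'"
\
[/code]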
The iscsi-direct pool is defined and appears to work - it's active and
vol-list shows the correct LUNs.
Using --debug on virt-install, I can see the drive is detected early in the
process, since it has picked up the size of the drive.
[code]
[Wed, 19 Feb 2025 16:43:33 virt-install 2174805] DEBUG (cli:3554) Parsed
--disk volume as: pool=agbu-ldap1 vol=unit:0:0:3
[Wed, 19 Feb 2025 16:43:33 virt-install 2174805] DEBUG (disk:648)
disk.set_vol_object: volxml=
<volume type='network'>
<name>unit:0:0:3</name>
<key>ip-10.1.4.3:3260-iscsi-iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3</key>
<capacity unit='bytes'>49996103168</capacity>
<allocation unit='bytes'>49996103168</allocation>
<target>
<path>ip-10.1.4.3:3260-iscsi-iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3</path>
</target>
</volume>
[Wed, 19 Feb 2025 16:43:33 virt-install 2174805] DEBUG (disk:650)
disk.set_vol_object: poolxml=
<pool type='iscsi-direct'>
<name>agbu-ldap1</name>
<uuid>1c4ae810-9bae-433c-a92f-7d3501b6ba80</uuid>
<capacity unit='bytes'>49996103168</capacity>
<allocation unit='bytes'>49996103168</allocation>
<available unit='bytes'>0</available>
<source>
<host name='10.1.4.3'/>
<device path='iqn.1992-09.com.seagate:01.array.00c0fff6c846'/>
<initiator>
<iqn name='iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap'/>
</initiator>
</source>
</pool>
[/code]
The generated initial_xml for the disk looks like
[code]
<disk type="network" device="disk">
<driver name="qemu" type="raw"/>
<source protocol="iscsi"
name="iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3">
<host name="10.1.4.3"/>
<initiator>
<iqn name="iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap"/>
</initiator>
</source>
<target dev="vda" bus="virtio"/>
</disk>
[/code]
The generated final_xml looks like
[code]
<disk type="network" device="disk">
<driver name="qemu" type="raw"/>
<source protocol="iscsi"
name="iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3">
<host name="10.1.4.3"/>
<initiator>
<iqn name="iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap"/>
</initiator>
</source>
<target dev="vda" bus="virtio"/>
</disk>
[/code]
The full error is
[code]
[Wed, 19 Feb 2025 16:43:35 virt-install 2174805] DEBUG (cli:256) File
"/usr/bin/virt-install", line 8, in <module>
virtinstall.runcli()
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 1233, in
runcli
sys.exit(main())
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 1226, in main
start_install(guest, installer, options)
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 974, in
start_install
fail(e, do_exit=False)
File "/usr/share/virt-manager/virtinst/cli.py", line 256, in fail
log.debug("".join(traceback.format_stack()))
[Wed, 19 Feb 2025 16:43:35 virt-install 2174805] ERROR (cli:257) internal
error: process exited while connecting to monitor:
2025-02-19T05:43:35.075695Z qemu-system-x86_64: -blockdev
{"driver":"iscsi","portal":"10.1.4.3:3260","target":"iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3","lun":0,"transport":"tcp","initiator-name":"iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}:
iSCSI: Failed to connect to LUN : Failed to log in to target. Status:
Target not found(515)
[Wed, 19 Feb 2025 16:43:35 virt-install 2174805] DEBUG (cli:259)
Traceback (most recent call last):
File "/usr/share/virt-manager/virtinst/virtinstall.py", line 954, in
start_install
domain = installer.start_install(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/share/virt-manager/virtinst/install/installer.py", line 695,
in start_install
domain = self._create_guest(
^^^^^^^^^^^^^^^^^^^
File "/usr/share/virt-manager/virtinst/install/installer.py", line 637,
in _create_guest
domain = self.conn.createXML(initial_xml or final_xml, 0)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3/dist-packages/libvirt.py", line 4481, in createXML
raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: internal error: process exited while connecting to
monitor: 2025-02-19T05:43:35.075695Z qemu-system-x86_64: -blockdev
{"driver":"iscsi","portal":"10.1.4.3:3260","target":"iqn.1992-09.com.seagate:01.array.00c0fff6c846-lun-3","lun":0,"transport":"tcp","initiator-name":"iqn.2024-11.localdomain.agbu.agbuvh1:agbu-ldap","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}:
iSCSI: Failed to connect to LUN : Failed to log in to target. Status:
Target not found(515)
[/code]
Things that could affect the answer:
* What I'm calling a SAN is a Seagate Exos X iSCSI unit
* virtual host is Debian 12
* virsh version 9.0.0
* iscsiadm version 2.1.8
1 week, 5 days
Re: SEV-SNP Libvirt Support
by Michal Prívozník
On 2/17/25 11:11, Paraskevas Nik wrote:
> Hello Michael,
> Thanks for responding. I have built libvirt and then used ninja install
> to install it. Does this also install the correct SELinux labels?
It depends: if you've passed "-Dsystem=true" to meson setup then it should
replace the system binaries, and thus SELinux might have the correct policy.
But otherwise, I'd suggest 'meson dist && rpmbuild -ta ...' and then
installing those RPMs.
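Roughly, assuming a build directory named 'build' and the default tarball/RPM
output locations:

  # option 1: replace the system binaries so the distro SELinux policy applies
  meson setup build -Dsystem=true
  ninja -C build
  sudo ninja -C build install

  # option 2 (cleaner): build RPMs from a dist tarball and install those
  meson dist -C build
  rpmbuild -ta build/meson-dist/libvirt-*.tar.xz
  sudo dnf install ~/rpmbuild/RPMS/*/libvirt*.rpm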
Michal
2 weeks, 1 day
SEV-SNP Libvirt Support
by Paraskevas Nik
Hello,
I am trying to build libvirt v10.5.0 to support SEV-SNP, but when I run
virsh domcapabilities I am getting <sev supported='no'/>. If I install libvirt
8.0.0 using apt install libvirt-daemon-system qemu-kvm, then sev shows as
supported. Everything related to SEV, SEV-ES, and SEV-SNP is enabled on the
system. Is there a specific build option to enable SEV?
# cat /sys/module/kvm_amd/parameters/sev
Y
CPU: AMD EPYC 9254
# dmesg | grep -i sev
[ 0.000000] SEV-SNP: RMP table physical range [0x000000002d500000 - 0x000000004ddfffff]
[ 0.009021] SEV-SNP: Reserving start/end of RMP table on a 2MB boundary [0x000000002d400000]
[ 11.184492] ccp 0000:01:00.5: sev enabled
[ 12.664210] ccp 0000:01:00.5: SEV API:1.55 build:36
[ 12.664217] ccp 0000:01:00.5: SEV-SNP API:1.55 build:36
[ 12.671343] kvm_amd: SEV enabled (ASIDs 16 - 1006)
[ 12.671345] kvm_amd: SEV-ES enabled (ASIDs 1 - 15)
[ 12.671346] kvm_amd: SEV-SNP enabled (ASIDs 1 - 15)
Linux kernel version: 6.12
Libvirt version: 10.5.0
qemu-system-x86_64 version: 9.1.0
If you need any other information please let me know. Cheers
2 weeks, 1 day
Re: Network denied access
by Rodrigo Prieto
Thank you for taking the time to respond. I want to mention that I don't
speak English, and it's difficult for me to understand using a translator.
In the file */etc/libvirt/libvirtd.conf*, I have the following:
access_drivers = [ "polkit" ]
The *virtqemud* and *virtnetworkd* services are not installed. I used the
version from the Debian 12 repositories.
systemctl status virtnetworkd.socket
Unit virtnetworkd.socket could not be found.
systemctl status virtqemud.socket
Unit virtqemud.socket could not be found.
In the file */etc/libvirt/qemu.conf*, the default configuration is present.
Best regards.
On Thu, Feb 6, 2025 at 20:48, Rodrigo Prieto (<rodrigoprieto2019(a)gmail.com>)
wrote:
> Thank you for taking the time to respond. I want to mention that I don't
> speak English, and it's difficult for me to understand using a translator.
>
> In the file */etc/libvirt/libvirtd.conf*, I have the following:
> access_drivers = [ "polkit" ]
>
>
> The *virtqemud* and *virtnetworkd* services are not installed. I used the
> version from the Debian 12 repositories.
>
> systemctl status virtnetworkd.socket
> Unit virtnetworkd.socket could not be found.
>
> systemctl status virtqemud.socket
> Unit virtqemud.socket could not be found.
>
> In the file */etc/libvirt/qemu.conf*, the default configuration is
> present.
>
> Best regards.
>
> On Thu, Feb 6, 2025 at 12:55, Martin Kletzander (<mkletzan(a)redhat.com>)
> wrote:
>
>> On Fri, Jan 31, 2025 at 03:34:03AM -0300, Rodrigo Prieto wrote:
>> >Hello,
>> >
>> >I am configuring Polkit using an example I found on the web. It correctly
>> >displays the assigned domain for a given user, but when I try to start
>> the
>> >VM, I get the following error:
>> >
>> >error: Failed to start domain 'debian12'
>> >error: access denied: 'network' denied access
>> >
>> >Here is my configuration:
>> >
>> >polkit.addRule(function(action, subject) {
>> > if (action.id == "org.libvirt.unix.manage" &&
>> > subject.user == "lolo") {
>> > return polkit.Result.YES;
>> > }
>> >});
>> >polkit.addRule(function(action, subject) {
>> > if (action.id.indexOf("org.libvirt.api.domain.") == 0 &&
>> > subject.user == "lolo") {
>> > if (action.lookup("connect_driver") == 'QEMU' &&
>> > action.lookup("domain_name") == 'debian12') {
>> > return polkit.Result.YES;
>> > } else {
>> > return polkit.Result.NO;
>> > }
>> > }
>> >});
>> >
>>
>> So doing this allows you to do anything with debian12 domain on the QEMU
>> connection driver.
>>
>> >To grant network access, I have to configure the following:
>> >
>> >polkit.addRule(function(action, subject) {
>> > if (action.id.indexOf("org.libvirt.api.network") == 0 &&
>> > subject.user == "lolo") {
>> > return polkit.Result.YES;
>> > }
>> >});
>> >
>>
>> Adding this allows you to do anything with any network. Note that this rule
>> omits a condition like the one in the api.domain rule above.
>>
>> >The problem with the previous configuration is that it allows full access
>> >to the network, requiring the following configuration:
>> >
>>
>> *to all the networks
>>
>> >polkit.addRule(function(action, subject) {
>> > if ((action.id == "org.libvirt.api.network.stop" ||
>> > action.id == "org.libvirt.api.network.delete" ||
>> > action.id == "org.libvirt.api.network.write") &&
>> > subject.user == "lolo") {
>> > return polkit.Result.NO;
>> > }
>> >});
>> >
>> >By default, shouldn't network access behave like domains or pools, which
>> >cannot be deleted?
>>
>> Can you not? The domain undefine API checks domain:delete ACL with the
>> domain name and network undefine API checks network:delete ACL with the
>> network name. I'll have to test it, but in the meantime could you try
>> reproducing that with the same polkit rules (obviously modified to fit
>> the domain/network difference)?
>>
>> >I tested it on Libvirt 9.0.0 and 10.0.0
>> >
>>
>> I did not find any difference between 9.0.0 and the current master with
>> a quick git-fu.
>>
>> I tested it on current git master and it works fine, the user can
>> undefine both the network and the domain, but only the one named as
>> specified.
>>
>> >If you can help me, I would really appreciate it.
>>
>> Be sure to check that both virtqemud and virtnetworkd use polkit as
>> their access driver in their respective configs.
>>
>> Have a nice day,
>> Martin
>>
>
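(A rough sketch of the narrower network rule Martin alludes to, mirroring the
api.domain rule: 'default' is only a placeholder for the network's actual
name, and the 'network_name' attribute is assumed by analogy with
'domain_name' rather than tested here, so double-check it against the libvirt
ACL documentation.)

polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt.api.network.") == 0 &&
        subject.user == "lolo") {
        // "network_name" is assumed by analogy with "domain_name" above
        if (action.lookup("network_name") == 'default') {
            return polkit.Result.YES;
        }
        return polkit.Result.NO;
    }
});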
2 weeks, 6 days
How can I control iptables/nftables rule addition on a libvirtd host on
Debian 12?
by oza.4h07@gmail.com
Hello,
When I install libvirt-daemon on a Debian 12 host, I can see the iptables rules below being added.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:LIBVIRT_FWI - [0:0]
:LIBVIRT_FWO - [0:0]
:LIBVIRT_FWX - [0:0]
:LIBVIRT_INP - [0:0]
:LIBVIRT_OUT - [0:0]
-A INPUT -j LIBVIRT_INP
-A FORWARD -j LIBVIRT_FWX
-A FORWARD -j LIBVIRT_FWI
-A FORWARD -j LIBVIRT_FWO
-A OUTPUT -j LIBVIRT_OUT
-A LIBVIRT_FWI -d 192.168.122.0/24 -o virbr0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A LIBVIRT_FWI -o virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWO -s 192.168.122.0/24 -i virbr0 -j ACCEPT
-A LIBVIRT_FWO -i virbr0 -j REJECT --reject-with icmp-port-unreachable
-A LIBVIRT_FWX -i virbr0 -o virbr0 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p udp -m udp --dport 67 -j ACCEPT
-A LIBVIRT_INP -i virbr0 -p tcp -m tcp --dport 67 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 53 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p udp -m udp --dport 68 -j ACCEPT
-A LIBVIRT_OUT -o virbr0 -p tcp -m tcp --dport 68 -j ACCEPT
COMMIT
For some reason, I need to add a couple of other rules.
How can I do that?
Best regards
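(For what it's worth, one approach that survives libvirt recreating its
networks is a network hook script; a minimal sketch follows, where the
dport 8080 rule is only a placeholder for whatever rules you actually need,
and the operation names should be double-checked against
https://libvirt.org/hooks.html:)

#!/bin/sh
# /etc/libvirt/hooks/network -- invoked as: <network_name> <operation> <sub-op> ...
network="$1"
operation="$2"

if [ "$network" = "default" ] && [ "$operation" = "started" ]; then
    # example only: allow guests to reach TCP 8080 on the host via virbr0
    iptables -I INPUT -i virbr0 -p tcp --dport 8080 -j ACCEPT
fi

(Make it executable with chmod +x /etc/libvirt/hooks/network; new hook
scripts are reportedly only picked up after the daemon is restarted.)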
3 weeks, 1 day
SEV Support Libvirt
by Paraskevas Nik
Hello, I am trying to enable libvirt to support SEV-SNP. Currently I am
using virsh domcapabilities to check if it's enabled, but I am getting:
<sev supported='no'/>
I followed the instructions at:
https://libvirt.org/kbase/launch_security_sev.html
My AMD CPU supports SEV and SEV-SNP, I have followed all the steps, and it
is enabled. BIOS settings are configured to support SEV/SNP.
CPU: AMD EPYC 9254
# cat /sys/module/kvm_amd/parameters/sev
Y
# dmesg | grep -i sev
[ 0.000000] SEV-SNP: RMP table physical range [0x000000002d500000 - 0x000000004ddfffff]
[ 0.009021] SEV-SNP: Reserving start/end of RMP table on a 2MB boundary [0x000000002d400000]
[ 11.184492] ccp 0000:01:00.5: sev enabled
[ 12.664210] ccp 0000:01:00.5: SEV API:1.55 build:36
[ 12.664217] ccp 0000:01:00.5: SEV-SNP API:1.55 build:36
[ 12.671343] kvm_amd: SEV enabled (ASIDs 16 - 1006)
[ 12.671345] kvm_amd: SEV-ES enabled (ASIDs 1 - 15)
[ 12.671346] kvm_amd: SEV-SNP enabled (ASIDs 1 - 15)
Versions:
Libvirt version: 10.7.0
qemu-system-x86_64 version: 9.1.0
Linux kernel version: 6.11.0-rc3
Distro: Ubuntu 22.04.4 LTS
If you need any other information please let me know.
Thanks!
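(Not a definitive diagnosis, but two checks that often explain a stale
<sev supported='no'/>, assuming the qemu:///system connection and the default
capabilities cache location:)

# confirm which libvirt/QEMU the daemon is actually using
virsh -c qemu:///system version
virsh -c qemu:///system domcapabilities | grep -A5 '<sev'

# if QEMU was probed before SEV was enabled in firmware/kernel, the cached
# capabilities can be stale; removing the cache forces a re-probe
sudo rm /var/cache/libvirt/qemu/capabilities/*.xml
sudo systemctl restart libvirtd    # or virtqemud, depending on the setup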
3 weeks, 4 days
Network denied access
by Rodrigo Prieto
Hello,
I am configuring Polkit using an example I found on the web. It correctly
displays the assigned domain for a given user, but when I try to start the
VM, I get the following error:
error: Failed to start domain 'debian12'
error: access denied: 'network' denied access
Here is my configuration:
polkit.addRule(function(action, subject) {
if (action.id == "org.libvirt.unix.manage" &&
subject.user == "lolo") {
return polkit.Result.YES;
}
});
polkit.addRule(function(action, subject) {
if (action.id.indexOf("org.libvirt.api.domain.") == 0 &&
subject.user == "lolo") {
if (action.lookup("connect_driver") == 'QEMU' &&
action.lookup("domain_name") == 'debian12') {
return polkit.Result.YES;
} else {
return polkit.Result.NO;
}
}
});
To grant network access, I have to configure the following:
polkit.addRule(function(action, subject) {
if (action.id.indexOf("org.libvirt.api.network") == 0 &&
subject.user == "lolo") {
return polkit.Result.YES;
}
});
The problem with the previous configuration is that it allows full access
to all networks, requiring the following additional configuration:
polkit.addRule(function(action, subject) {
if ((action.id == "org.libvirt.api.network.stop" ||
action.id == "org.libvirt.api.network.delete" ||
action.id == "org.libvirt.api.network.write") &&
subject.user == "lolo") {
return polkit.Result.NO;
}
});
By default, shouldn't network access behave like domains or pools, which
cannot be deleted?
I tested it on Libvirt 9.0.0 and 10.0.0
If you can help me, I would really appreciate it.
3 weeks, 4 days