On Thu, Sep 18, 2025 at 11:29 AM Pavel Mores <pmores(a)redhat.com> wrote:
> On Thu, Sep 18, 2025 at 10:25 AM Martin Kletzander <mkletzan(a)redhat.com>
> wrote:
>
>> On Wed, Sep 17, 2025 at 04:02:12PM +0200, Pavel Mores wrote:
>> >On Wed, Sep 17, 2025 at 3:05 PM Martin Kletzander <mkletzan(a)redhat.com>
>> >wrote:
>> >
>> >> On Wed, Sep 17, 2025 at 02:14:51PM +0200, Pavel Mores via Users wrote:
>> >> >Hi,
>> >> >
>> >> >I'm examining a domain that's connected to the 'default' network
>> >> >
>> >> ># virsh net-dumpxml default
>> >> ><network connections='1'>
>> >> > <name>default</name>
>> >> > <uuid>c757baa7-2b31-4794-9dfb-0df384575602</uuid>
>> >> > <forward mode='nat'>
>> >> > <nat>
>> >> > <port start='1024' end='65535'/>
>> >> > </nat>
>> >> > </forward>
>> >> > <bridge name='virbr0' stp='on' delay='0'/>
>> >> > <mac address='52:54:00:37:b7:92'/>
>> >> > <ip address='192.168.122.1' netmask='255.255.255.0'>
>> >> > <dhcp>
>> >> > <range start='192.168.122.2' end='192.168.122.254'/>
>> >> > </dhcp>
>> >> > </ip>
>> >> ></network>
>> >> >
>> >>
>> >> This is standard.
>> >>
>> >> >using a device as follows:
>> >> >
>> >> ><interface type='network'>
>> >> > <mac address='52:54:00:ed:06:2e'/>
>> >> > <source network='default' portid='83db8ca9-baed-47f3-ba0d-1a967ee86aa5' bridge='virbr0'/>
>> >> > <target dev='vnet19'/>
>> >> > <model type='virtio'/>
>> >> > <alias name='net0'/>
>> >> > <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
>> >> ></interface>
>> >> >
>> >>
>> >> This looks fine.
>> >>
>> >> >The domain is running but apparently without an IP address:
>> >> >
>> >> ># virsh domifaddr podvm-podsandbox-totok-8f10756a
>> >> > Name       MAC address          Protocol     Address
>> >> >-------------------------------------------------------------------------------
>> >> >
>> >>
>> >> This shows that libvirt does not know about any IP address. Does adding
>> >> "--source agent", "--source arp" or "--source lease" change anything?
>> >>
>> >
>> >'arp' and 'lease' don't, but
>> >
>> ># virsh domifaddr --source agent podvm-podsandbox-totok-8f10756a
>> >error: Failed to query for interfaces addresses
>> >error: argument unsupported: QEMU guest agent is not configured
>> >
>> >This is surprising to me since this is a peer pods setup where the domain
>> >in question is a podvm running an image which I was told does have
>> >the qemu agent running.
>> >
>> >However, the agent shouldn't be necessary for IP address acquisition, I
>> >guess, right?
>> >
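>> >(A quick way to double-check whether the agent channel is even defined in
>> >the domain XML, assuming the usual channel name, would be something like
>> >
>> ># virsh dumpxml podvm-podsandbox-totok-8f10756a | grep -A2 guest_agent
>> >
>> >If that prints no <channel> element, libvirt was never told about the
>> >agent, regardless of whether it's running inside the guest.)
>> >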
>> >> >The requisite host-side interfaces look good (to me anyway :-)):
>> >> >
>> >> >10: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc htb state UP
>> >> >group default qlen 1000
>> >> > link/ether 52:54:00:37:b7:92 brd ff:ff:ff:ff:ff:ff
>> >> > inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
>> >> > valid_lft forever preferred_lft forever
>> >> >[...]
>> >> >35: vnet19: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue
>> >> >master virbr0 state UNKNOWN group default qlen 1000
>> >> > link/ether fe:54:00:ed:06:2e brd ff:ff:ff:ff:ff:ff
>> >> > inet6 fe80::fc54:ff:feed:62e/64 scope link proto kernel_ll
>> >> > valid_lft forever preferred_lft forever
>> >> >
>> >> >I can share more information about the setup if necessary but I'll stop
>> >> >here for now since I feel this must be just a simple stupid oversight on
>> >> >my part. Please let me know if you'd like to have additional info.
>> >> >
>> >>
>> >> When this happens to me, it's most often a firewall issue: the VM does
>> >> not get any IP address or cannot communicate outside its network.
>> >>
>> >
>> >I've seen a firewall suggested as a possible culprit, yes, but I don't
>> >quite know what it would look like. iptables appears unconfigured:
>> >
>> ># iptables -L -v -n
>> >Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
>> > pkts bytes target  prot opt in  out  source       destination
>> >
>> >Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>> > pkts bytes target  prot opt in  out  source       destination
>> >
>> >Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
>> > pkts bytes target  prot opt in  out  source       destination
>> >
>> >`nft list ruleset` lists only rules that appear to be managed by libvirt
>> >itself (*). At any rate, the host machine has no hand-configured firewall
>> >that I know of.
>> >
>> >
>> >> What it could be here is an access issue with the dnsmasq
>> >> lease file.
>> >>
>> >> What's in your /var/lib/libvirt/dnsmasq/virbr0.status file on the host?
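>> >>
>> >> (If it's a permission problem, something like `ls -lZ
>> >> /var/lib/libvirt/dnsmasq/` on the host should make that obvious.)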
>> >>
>> >
>> >It's empty.
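>> >
>> >(For reference, if a lease had ever been handed out I'd expect the file to
>> >contain JSON entries roughly like the following, with invented values:
>> >
>> >[
>> >    {
>> >        "ip-address": "192.168.122.100",
>> >        "mac-address": "52:54:00:ed:06:2e",
>> >        "expiry-time": 1758206000
>> >    }
>> >]
>> >
>> >so an empty file is consistent with no DHCP exchange ever completing.)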
>> >
>> >Thanks Martin!
>> >pvl
>> >
>> >(*) # nft list ruleset
>> >table ip libvirt_network {
>> >chain forward {
>> >type filter hook forward priority filter; policy accept;
>> >counter packets 85854914 bytes 398726525237 jump guest_cross
>> >counter packets 85854914 bytes 398726525237 jump guest_input
>> >counter packets 34777368 bytes 3386943972 jump guest_output
>> >}
>> >
>> >chain guest_output {
>> >ip saddr 192.168.122.0/24 iif "virbr0" counter packets 0 bytes 0 accept
>>
>> This suggests there were no incoming packets from an IP address in the
>> range on the bridge.
>>
>> >iif "virbr0" counter packets 0 bytes 0 reject
>>
>> And no packets from outside of that range that would fall through to
>> the rule above.
>>
>> [...]
>>
>> >}
>> >
>> >chain guest_input {
>>
>> [...]
>>
>> >oif "virbr0" ip daddr 192.168.122.0/24 ct state established,related
>> counter
>> >packets 0 bytes 0 accept
>>
>> No packets sent to the address range on the bridge, but
>>
>> >oif "virbr0" counter packets 0 bytes 0 reject
>>
>> basically no packets sent at all.
>>
>> >}
>> >
>> >chain guest_cross {
>> >iif "openshift-412" oif "openshift-412" counter packets 0 bytes 0 accept
>> >iif "openshift-419" oif "openshift-419" counter packets 0 bytes 0 accept
>> >iif "openshift-416" oif "openshift-416" counter packets 0 bytes 0 accept
>> >iif "openshift-415" oif "openshift-415" counter packets 0 bytes 0 accept
>> >iif "openshift-413" oif "openshift-413" counter packets 0 bytes 0 accept
>> >iif "virbr0" oif "virbr0" counter packets 0 bytes 0 accept
>>
>> No intra-network communication.
>>
>> [...]
>>
>> >chain guest_nat {
>> >type nat hook postrouting priority srcnat; policy accept;
>>
>> [...]
>>
>> >ip saddr 192.168.122.0/24 ip daddr 224.0.0.0/24 counter packets 50 bytes 3676 return
>>
>> There were some IPv4 multicast packets, but these could've originated
>> from the host.
>>
>> >ip saddr 192.168.122.0/24 ip daddr 255.255.255.255 counter packets 0 bytes 0 return
>>
>> And no broadcast packets from the address space.
>>
>> >meta l4proto tcp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535
>> >meta l4proto udp ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade to :1024-65535
>> >ip saddr 192.168.122.0/24 ip daddr != 192.168.122.0/24 counter packets 0 bytes 0 masquerade
>>
>> No NATed traffic at all.
>>
>> [...]
>> >ip saddr 192.168.14.0/24 ip daddr 224.0.0.0/24 counter packets 50 bytes 3675 return
>>
>> The counters on this other range are identical, so I'd say the multicast
>> packets on the range we're interested in are those same ones, hence
>> nothing to do with the guest.
>>
>> >}
>> >}
>> >table ip6 libvirt_network {
>> >chain forward {
>> >type filter hook forward priority filter; policy accept;
>> >counter packets 0 bytes 0 jump guest_cross
>> >counter packets 0 bytes 0 jump guest_input
>> >counter packets 0 bytes 0 jump guest_output
>>
>> And totally nothing with IPv6.
>>
>
> As a bit of context, this is a virtlab machine whose primary purpose is to
> run kcli-based openshift clusters whose nodes are libvirt domains. Those
> are the "openshift-41[1-9]" networks and bridges. They are unrelated to the
> setup I'm looking into and most of them are actually obsolete (it's been
> years since a 4.11 cluster last ran on the host :-)).
>
> My "guess" would be that the guest did not even get an IP address, maybe
>> did not eve try DHCP. Are you sure the guest booted?
>>
>
> I think it did, based on
>
> # virsh list
>  Id   Name                              State
> -------------------------------------------------
> [...]
>  20   podvm-podsandbox-totok-8f10756a   running
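>
> (and I suppose I could also attach to its serial console to watch it boot,
> e.g. `virsh console podvm-podsandbox-totok-8f10756a`, assuming the image
> sets up a serial console at all)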
>
> But now that you mention it, I'm not positive that it tried DHCP. The zero
> traffic on the virbr0 bridge you mention above is explainable by the domain
> not having an address, *but* if it did try DHCP those packets would show up
> in the virbr0 stats, I guess?
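>
> (The most direct check I can think of, although outside of libvirt itself,
> would be to sniff the bridge while the domain boots, e.g.
>
> # tcpdump -ni virbr0 port 67 or port 68
>
> which should catch any DHCPDISCOVER from 52:54:00:ed:06:2e even if no lease
> is ever granted.)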
>
> I did previously check the DHCP leases on the 'default' network:
>
> # virsh net-dhcp-leases default
>  Expiry Time   MAC address   Protocol   IP address   Hostname   Client ID or DUID
> -----------------------------------------------------------------------------------
>
> and there are none, but that doesn't rule out some other failure in DHCP.
>
> The domain runs a peer pods podvm image which I don't have any control over
> and frankly am not familiar with. I assume that it does do DHCP to
> configure its interfaces, but as the guest agent example shows, my
> information about the image might not always be accurate.
>
I verified that the VM does do DHCP (there actually doesn't seem to be any
other means for a podvm to get its network configured in peer pods).
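
(One check that might partially answer my own question below: `virsh
domifstat podvm-podsandbox-totok-8f10756a vnet19` should show whether the
guest transmits anything at all; if the tx counters stay at zero, it
presumably never even sends a DHCPDISCOVER.)
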
pvl
> Is there a way to check whether the domain attempts DHCP purely from the
> libvirt side, i.e. using only libvirt means?
>
> Thanks!
> pvl
>