Global option to configure tb-cache size for tcg feature
by Yatin Karel
Hi Team,
In the OpenStack upstream CI, as part of testing, we have to create multiple
guest VMs together on an 8GB VM (no nested-virt support), so we have to use
QEMU emulation. But with QEMU >= 5 the 'tb-cache' size has increased from
32MiB to 1GiB, and because of this we cannot run multiple guest VMs together,
as we have limited resources in CI. We tried adding some swap as well, but
that didn't help much either, as it gets too slow.
Thankfully, libvirt 8.0.0 now allows configuring the tcg feature in the
instance domain XML, as below:
<features>
  <tcg>
    <tb-cache unit='KiB'>32768</tb-cache>
  </tcg>
</features>
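For anyone scripting this per domain in the meantime, a minimal libvirt-python
sketch could look like the following; the connection URI and domain name are
placeholders, and it assumes TCG guests (domain type 'qemu'):

import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')   # placeholder URI
dom = conn.lookupByName('guest-vm')     # hypothetical domain name

# Fetch the persistent definition and inject <tcg><tb-cache> under <features>
root = ET.fromstring(dom.XMLDesc(libvirt.VIR_DOMAIN_XML_INACTIVE))
features = root.find('features')
if features is None:
    features = ET.SubElement(root, 'features')
tcg = features.find('tcg')
if tcg is None:
    tcg = ET.SubElement(features, 'tcg')
tb_cache = tcg.find('tb-cache')
if tb_cache is None:
    tb_cache = ET.SubElement(tcg, 'tb-cache')
tb_cache.set('unit', 'KiB')
tb_cache.text = '32768'

# Redefine the domain; the new tb-cache size applies from the next boot
conn.defineXML(ET.tostring(root, encoding='unicode'))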
Is there some global option available in libvirt/QEMU that we could use
instead of changing each and every XML, so that we can avoid changes in the
nova code [1] just for the CI use case?
[1] https://bugs.launchpad.net/nova/+bug/1949606
PS: Please keep me in CC for responses as I am not subscribed to the
mailing list.
Thanks and Regards
Yatin Karel
Attach NBD device with qemu flags
by Miguel Ping
Is there a way to attach an NBD device via e.g. virsh attach-device and
provide custom QEMU flags? I'm particularly interested in the flag
"reconnect-delay".
I'm fine with using a language binding (e.g. C or Java) for this.
Thanks
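For illustration, the device XML that virsh attach-device expects for an NBD
disk might look like the sketch below. Newer libvirt versions expose a
<reconnect> sub-element on NBD disk sources that maps to QEMU's
reconnect-delay; whether it is available depends on the libvirt version, so
treat it as an assumption and check the formatdomain documentation. The
server address, export name, and target device are placeholders.

<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='nbd' name='export-name'>
    <host name='nbd-server.example.com' port='10809'/>
    <reconnect delay='10'/>
  </source>
  <target dev='vdb' bus='virtio'/>
</disk>

This could then be attached with, e.g., virsh attach-device <domain>
nbd-disk.xml --live.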
Re: Predictable and consistent net interface naming in guests
by Julia Suvorova
On Thu, Nov 3, 2022 at 9:26 AM Amnon Ilan <ailan(a)redhat.com> wrote:
>
>
>
> On Thu, Nov 3, 2022 at 12:13 AM Amnon Ilan <ailan(a)redhat.com> wrote:
>>
>>
>>
>> On Wed, Nov 2, 2022 at 6:47 PM Laine Stump <laine(a)redhat.com> wrote:
>>>
>>> On 11/2/22 11:58 AM, Igor Mammedov wrote:
>>> > On Wed, 2 Nov 2022 15:20:39 +0000
>>> > Daniel P. Berrangé <berrange(a)redhat.com> wrote:
>>> >
>>> >> On Wed, Nov 02, 2022 at 04:08:43PM +0100, Igor Mammedov wrote:
>>> >>> On Wed, 2 Nov 2022 10:43:10 -0400
>>> >>> Laine Stump <laine(a)redhat.com> wrote:
>>> >>>
>>> >>>> On 11/1/22 7:46 AM, Igor Mammedov wrote:
>>> >>>>> On Mon, 31 Oct 2022 14:48:54 +0000
>>> >>>>> Daniel P. Berrangé <berrange(a)redhat.com> wrote:
>>> >>>>>
>>> >>>>>> On Mon, Oct 31, 2022 at 04:32:27PM +0200, Edward Haas wrote:
>>> >>>>>>> Hi Igor and Laine,
>>> >>>>>>>
>>> >>>>>>> I would like to revive a 2-year-old discussion [1] about consistent network
>>> >>>>>>> interfaces in the guest.
>>> >>>>>>>
>>> >>>>>>> That discussion mentioned that a guest PCI address may change in two cases:
>>> >>>>>>> - The PCI topology changes.
>>> >>>>>>> - The machine type changes.
>>> >>>>>>>
>>> >>>>>>> Usually, the machine type is not expected to change, especially if one
>>> >>>>>>> wants to allow migrations between nodes.
>>> >>>>>>> I would hope to argue this should not be problematic in practice, because
>>> >>>>>>> guest images would be made for a specific machine type.
>>> >>>>>>>
>>> >>>>>>> Regarding the PCI topology, I am not sure I understand what changes
>>> >>>>>>> need to occur to the domxml for a defined guest PCI address to change.
>>> >>>>>>> The only thing I can think of is a scenario where hotplug/unplug is used,
>>> >>>>>>> but even then I would expect existing devices to preserve their PCI address
>>> >>>>>>> and the plugged/unplugged device to have a reserved address managed by the one
>>> >>>>>>> acting on it (the management system).
>>> >>>>>>>
>>> >>>>>>> Could you please help clarify in which scenarios the PCI topology can cause
>>> >>>>>>> a mess to the naming of interfaces in the guest?
>>> >>>>>>>
>>> >>>>>>> Are there any plans to add the acpi_index support?
>>> >>>>>>
>>> >>>>>> This was implemented a year & a half ago
>>> >>>>>>
>>> >>>>>> https://libvirt.org/formatdomain.html#network-interfaces
>>> >>>>>>
>>> >>>>>> though due to QEMU limitations this only works for the old
>>> >>>>>> i440fx chipset, not Q35 yet.
>>> >>>>>
>>> >>>>> Q35 should work partially too. In its case acpi-index support
>>> >>>>> is limited to hotplug-enabled root-ports and PCIe-PCI bridges.
>>> >>>>> One also has to enable ACPI PCI hotplug (it's enabled by default
>>> >>>>> on recent machine types) for it to work (i.e. it's not supported
>>> >>>>> in native PCIe hotplug mode).
>>> >>>>>
>>> >>>>> So if mgmt can put nics on root-ports/bridges, then acpi-index
>>> >>>>> should just work on Q35 as well.
>>> >>>>
>>> >>>> With only a few exceptions (e.g. the first ich9 audio device, which is
>>> >>>> placed directly on the root bus at 00:1B.0 because that is where the
>>> >>>> ich9 audio device is located on actual Q35 hardware), libvirt will
>>> >>>> automatically put all PCI devices (including network interfaces) on a
>>> >>>> pcie-root-port.
>>> >>>>
>>> >>>> After seeing reports that "acpi index doesn't work with Q35
>>> >>>> machinetypes" I just assumed that was correct and didn't try it. But
>>> >>>> after seeing the "should work partially" statement above, I tried it
>>> >>>> just now and an <interface> of a Q35 guest that had its PCI address
>>> >>>> auto-assigned by libvirt (and so was placed on a pcie-root-port) and
>>> >>>> had <acpi index='4'/> was given the name "eno4". So what exactly is it
>>> >>>> that *doesn't* work?
>>> >>>
>>> >>> From the QEMU side:
>>> >>> acpi-index requires:
>>> >>> 1. ACPI PCI hotplug enabled (which is the default on relatively new q35 machine types)
>>> >>> 2. a hotpluggable PCI bus (root-port, various PCI bridges)
>>> >>> 3. the NIC can be cold- or hotplugged; the guest should pick up the acpi-index of the device
>>> >>> currently plugged into the slot
>>> >>> what doesn't work:
>>> >>> 1. a device attached to the host-bridge directly (work in progress)
>>> >>> (q35)
>>> >>> 2. devices attached to any PXB port and any hierarchy hanging off it (there are no plans to make it work)
>>> >>> (q35, pc)
>>> >>
>>> >> I'd say this is still relatively important, as the PXBs are needed
>>> >> to create a NUMA-placement-aware topology for guests, and I'd say it
>>> >> is undesirable to lose acpi-index if a guest is updated to be
>>> >> NUMA-aware, or if a guest image can be deployed in either normal or
>>> >> NUMA-aware setups.
>>> >
>>> > it's not only Q35 but also PC.
>>> > We basically do not generate an ACPI hierarchy for PXBs at all,
>>> > so neither ACPI hotplug nor the dependent acpi-index would work.
>>> > It's been so for many years and no one has asked to enable
>>> > ACPI hotplug on them so far.
>>>
>>> I'm guessing (based on absolutely 0 information :-)) that there would be
>>> more demand for acpi-index (and the resulting predictable interface
>>> names) than for acpi hotplug for NUMA-aware setup.
>>
>>
>> My guess is similar, but it is still desirable to have both (i.e. support ACPI indexing/hotplug in NUMA-aware setups)
>> Adding @Peter Xu to check if our setups for SAP require a NUMA-aware topology
>>
>> How big of a project would it be to enable ACPI-indexing/hotplug with PXB?
Why would you need to add acpi hotplug on pxb?
> Adding +Julia Suvorova and +Tsirkin, Michael to help answer this question
>
> Thanks,
> Amnon
>
>>
>> Since native PCI hotplug was improved, we can still compromise on switching to native PCI hotplug when a PXB is required (and no fixed indexing)
Native hotplug works on pxb as is, without disabling acpi hotplug.
>> Thanks,
>> Amnon
>>
>>
>>>
>>>
>>> Anyway, it sounds like (*within the confines of how libvirt constructs
>>> the PCI topology*) we actually have functional parity of acpi-index
>>> between 440fx and Q35.
>>>
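For readers who want to try this, the kind of <interface> definition discussed
above looks roughly like the sketch below (the source network is a placeholder,
and the PCI address is left out so that libvirt auto-assigns a pcie-root-port
slot on Q35):

<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <acpi index='4'/>
</interface>

With acpi_index support in the guest's udev/systemd predictable naming, this
device shows up as "eno4" regardless of the PCI address it lands on.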
libvirt_connect_get_machine_types
by Simon Fairweather
Using the following:
php: version 8.1.13
php-libvirt: version 0.5.6 (build 2)
libvirt 8.7.0
QEMU 7.1.0
virsh capabilities works fine.
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<machine maxCpus='255'>pc-i440fx-7.1</machine>
<machine canonical='pc-i440fx-7.1' maxCpus='255'>pc</machine>
<machine maxCpus='288'>pc-q35-5.2</machine>
<machine maxCpus='255'>pc-i440fx-2.12</machine>
<machine maxCpus='255'>pc-i440fx-2.0</machine>
<machine maxCpus='255'>pc-i440fx-6.2</machine>
<machine maxCpus='288'>pc-q35-4.2</machine>
<machine maxCpus='255'>pc-i440fx-2.5</machine>
<machine maxCpus='255'>pc-i440fx-4.2</machine>
<machine maxCpus='255'>pc-i440fx-5.2</machine>
<machine maxCpus='255' deprecated='yes'>pc-i440fx-1.5</machine>
<machine maxCpus='255'>pc-q35-2.7</machine>
<machine maxCpus='288'>pc-q35-7.1</machine>
<machine canonical='pc-q35-7.1' maxCpus='288'>q35</machine>
<machine maxCpus='255'>pc-i440fx-2.2</machine>
<machine maxCpus='255'>pc-i440fx-2.7</machine>
<machine maxCpus='288'>pc-q35-6.1</machine>
<machine maxCpus='255'>pc-q35-2.4</machine>
<machine maxCpus='288'>pc-q35-2.10</machine>
<machine maxCpus='1'>x-remote</machine>
<machine maxCpus='288'>pc-q35-5.1</machine>
<machine maxCpus='255' deprecated='yes'>pc-i440fx-1.7</machine>
<machine maxCpus='288'>pc-q35-2.9</machine>
<machine maxCpus='255'>pc-i440fx-2.11</machine>
<machine maxCpus='288'>pc-q35-3.1</machine>
<machine maxCpus='255'>pc-i440fx-6.1</machine>
<machine maxCpus='288'>pc-q35-4.1</machine>
<machine maxCpus='255'>pc-i440fx-2.4</machine>
<machine maxCpus='255'>pc-i440fx-4.1</machine>
<machine maxCpus='255'>pc-i440fx-5.1</machine>
<machine maxCpus='255'>pc-i440fx-2.9</machine>
<machine maxCpus='1'>isapc</machine>
<machine maxCpus='255' deprecated='yes'>pc-i440fx-1.4</machine>
<machine maxCpus='255'>pc-q35-2.6</machine>
<machine maxCpus='255'>pc-i440fx-3.1</machine>
<machine maxCpus='288'>pc-q35-2.12</machine>
<machine maxCpus='288'>pc-q35-7.0</machine>
<machine maxCpus='255'>pc-i440fx-2.1</machine>
<machine maxCpus='288'>pc-q35-6.0</machine>
<machine maxCpus='255'>pc-i440fx-2.6</machine>
<machine maxCpus='288'>pc-q35-4.0.1</machine>
<machine maxCpus='255'>pc-i440fx-7.0</machine>
<machine maxCpus='255' deprecated='yes'>pc-i440fx-1.6</machine>
<machine maxCpus='288'>pc-q35-5.0</machine>
<machine maxCpus='288'>pc-q35-2.8</machine>
<machine maxCpus='255'>pc-i440fx-2.10</machine>
<machine maxCpus='288'>pc-q35-3.0</machine>
<machine maxCpus='255'>pc-i440fx-6.0</machine>
<machine maxCpus='288'>pc-q35-4.0</machine>
<machine maxCpus='288'>microvm</machine>
<machine maxCpus='255'>pc-i440fx-2.3</machine>
<machine maxCpus='255'>pc-i440fx-4.0</machine>
<machine maxCpus='255'>pc-i440fx-5.0</machine>
<machine maxCpus='255'>pc-i440fx-2.8</machine>
<machine maxCpus='288'>pc-q35-6.2</machine>
<machine maxCpus='255'>pc-q35-2.5</machine>
<machine maxCpus='255'>pc-i440fx-3.0</machine>
<machine maxCpus='288'>pc-q35-2.11</machine>
<domain type='qemu'/>
<domain type='kvm'/>
</arch>
Are there any known issues with PHP 8 for this function? Other functions seem
to be working fine.
Need help
by Gk Gk
Hi,
We have an OpenStack platform, and we are trying to get the network details
of the guest VMs on the hypervisors using the Python libvirt library
(domain.interfaceStats). But for SR-IOV VMs, the interface is not
reported by this call. The interface in this case is of type "hostdev"
in the guest XML definition. Can anyone let me know how to find the
SR-IOV network interface details of a guest VM?
Thanks
Kumar
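One note on the above: hostdev interfaces bypass the host network stack, so
domain.interfaceStats() has no tap device to report on; the statistics live
with the VF on the host. A hedged sketch of one way to at least locate the
hostdev-backed interfaces and their VF PCI addresses from the guest
definition (the connection URI and domain name are placeholders):

import libvirt
import xml.etree.ElementTree as ET

conn = libvirt.open('qemu:///system')   # placeholder URI
dom = conn.lookupByName('guest-vm')     # hypothetical domain name
root = ET.fromstring(dom.XMLDesc())

# SR-IOV NICs defined via <interface type='hostdev'> carry a PCI
# <source><address> pointing at the VF
for iface in root.findall("./devices/interface[@type='hostdev']"):
    addr = iface.find('./source/address')
    if addr is not None:
        print('VF at PCI %s:%s:%s.%s' % (
            addr.get('domain'), addr.get('bus'),
            addr.get('slot'), addr.get('function')))

The VF counters can then be read on the host side, e.g. with sufficiently new
kernels/iproute2, `ip -s link show <pf>` lists per-VF statistics on the
physical function.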