On 08/10/2015 11:09 AM, Daniel P. Berrange wrote:
On Thu, Aug 06, 2015 at 07:46:58PM +0200, Martin Kletzander wrote:
> On Thu, Aug 06, 2015 at 06:37:41PM +0200, Martin Kletzander wrote:
>> On Fri, Jul 17, 2015 at 02:27:45PM +0300, Pavel Fedin wrote:
>>> Here we assume that if qemu supports generic PCI host controller,
>>> it is a part of virt machine and can be used for adding PCI devices.
>>>
>>> In qemu this is actually a PCIe bus, so we also declare multibus
>>> capability so that 0'th bus is specified to qemu correctly as 'pcie.0'
>>>
>>> Signed-off-by: Pavel Fedin <p.fedin(a)samsung.com>
>>> ---
>>> src/qemu/qemu_capabilities.c | 8 ++++++++
>>> src/qemu/qemu_domain.c | 17 +++++++++++++----
>>> 2 files changed, 21 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
>>> index d570fdd..f3486c7 100644
>>> --- a/src/qemu/qemu_capabilities.c
>>> +++ b/src/qemu/qemu_capabilities.c
>>> @@ -2138,6 +2138,14 @@ bool virQEMUCapsHasPCIMultiBus(virQEMUCapsPtr qemuCaps,
>>> return false;
>>> }
>>>
>>> + if (ARCH_IS_ARM(def->os.arch)) {
>>> + /* If 'virt' supports PCI, it supports multibus.
>>> + * No extra conditions here for simplicity.
>>> + */
>> So every ARM qemu with the "virt" machine type supports both PCI and
>> multibus? How about those "virt-*" machines for which you check below?
>> That might not be related, I'm just curious.
>>
>>> + if (STREQ(def->os.machine, "virt"))
>>> + return true;
>>> + }
>>> +
>>> return false;
>>> }
>>>
>>> diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
>>> index 8b050a0..c7d14e4 100644
>>> --- a/src/qemu/qemu_domain.c
>>> +++ b/src/qemu/qemu_domain.c
>>> @@ -981,7 +981,7 @@ virDomainXMLNamespace virQEMUDriverDomainXMLNamespace = {
>>> static int
>>> qemuDomainDefPostParse(virDomainDefPtr def,
>>> virCapsPtr caps,
>>> - void *opaque ATTRIBUTE_UNUSED)
>>> + void *opaque)
>>> {
>>> bool addDefaultUSB = true;
>>> bool addImplicitSATA = false;
>>> @@ -1030,12 +1030,21 @@ qemuDomainDefPostParse(virDomainDefPtr def,
>>> break;
>>>
>>> case VIR_ARCH_ARMV7L:
>>> - addDefaultUSB = false;
>>> - addDefaultMemballoon = false;
>>> - break;
>>> case VIR_ARCH_AARCH64:
>>> addDefaultUSB = false;
>>> addDefaultMemballoon = false;
>>> + if (STREQ(def->os.machine, "virt") ||
>>> + STRPREFIX(def->os.machine, "virt-")) {
>>> + virQEMUDriverPtr driver = opaque;
>>> +
>>> + /* This condition is actually a (temporary) hack for test suite
>>> + * which does not create capabilities cache */
>> A few questions here. a) How "temporary" is this, since you're not
>> removing it in this series? b) For which tests do you need this hack,
>> and what part of the code below is the hack?
>>
>> Moreover, you cannot use capabilities when defining an XML. The
>> emulator can change between the time the domain is defined and the
>> time it is started, so you cannot know which emulator this will be
>> started with.
>>
>> I see Michal (Cc'd) just pushed this; I probably just missed the mail.
> Of course I forgot, Cc'ing now.
I agree with your core statement that we should not be using the QEMU
capabilities when defining the XML. In all existing scenarios we have
been able to determine whether to add the implicit PCI controller based
on the machine type name alone, because every other QEMU arch has
introduced a new machine type whenever making a change as major as
adding a PCI bus. The problem is that the arm 'virt' machine type is
not stable; it is being changed arbitrarily in new QEMU releases :-(
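To illustrate the usual name-only pattern (this just mirrors the
STREQ/STRPREFIX check from the patch above; addPCIeRoot is the flag
consumed later in qemuDomainDefPostParse):

    /* Decide purely from the machine type name, with no capabilities
     * lookup, so the answer stays stable no matter which emulator
     * binary eventually runs the domain. */
    if (STREQ(def->os.machine, "virt") ||
        STRPREFIX(def->os.machine, "virt-"))
        addPCIeRoot = true;
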
So AFAIK, that leaves us with 3 choices:
 - Never add the PCI controller at the time the XML is defined, on the
   basis that we have to be conservative in what we add to cope with
   old QEMU.
 - Always add the PCI controller at the time the XML is defined, on the
   basis that most people will have a new enough QEMU, because the ARM
   'virt' machine type is very much still in development, so no one
   will seriously stick with older QEMU versions which lack PCI.
 - Use the capabilities in XML post-parse to conditionally add the PCI
   controller. This is what was just merged.
I don't think option 1 makes much sense, as it would harm the ARM arch
forevermore just to cope with QEMU versions that will almost never be
used in practice.
I'd be inclined to go with option 2, and then if any PCI devices are
actually used with the guest, check the capability at start time when
we are doing auto-address assignment.
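A minimal sketch of what that start-time check could look like
(qemuDomainHasPCIDevices() is an invented helper name; the capability
is the QEMU_CAPS_OBJECT_GPEX mentioned below):

    /* Hypothetical validation at domain start: reject the config
     * only if PCI devices are actually in use and the binary lacks
     * the generic PCIe host controller. */
    if (qemuDomainHasPCIDevices(def) &&
        !virQEMUCapsGet(qemuCaps, QEMU_CAPS_OBJECT_GPEX)) {
        virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                       _("this QEMU binary has no PCI bus support "
                         "for the 'virt' machine type"));
        return -1;
    }
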
Except that auto-address assignment happens after the parse, but before
we write the config to disk (i.e. we don't wait until the domain is
started; I'm guessing that was originally done because we didn't want to
be potentially re-writing the config to disk every time the domain was
started). Also, we can't assume that "no PCI devices used at start"
means "no PCI devices will ever be desired", because of hotplug.
At any rate, a machinetype that sometimes has an implied controller
device and sometimes doesn't is just a completely broken idea -
different instances of the same machinetype are assumed to have the
same ABI; in particular, if you start up a virtual machine of that
machinetype with --nodefaults, it should *always* give you exactly the
same set of implied devices. I don't think we should break our internal
boundaries in order to cater to this broken machinetype (i.e. I agree
that option 2 is the correct way to go), especially if it's a transient
thing.
On the other hand, if the virt-* machinetypes are for dev only, should
we be trying to cater to them at all? Will they *always* be for dev
only, with a "permanent" machinetype defined once something useful is
settled on? If that's the case, then maybe the proper thing is to *not*
automatically add a pcie-root for this machinetype, but to properly
support having one manually added.
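(Manually added meaning the user puts an explicit root controller in
their domain XML themselves, along the lines of:

    <controller type='pci' index='0' model='pcie-root'/>

rather than post-parse creating it for them.)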
(Of course I've found all this discussion less than 24 hours after
pushing a long series of patches that added 3 new capabilities, and
the new OBJECT_GPEX capability caused merge conflicts that had to be
resolved; now if we remove the need for QEMU_CAPS_OBJECT_GPEX, there
will be more gratuitous merge conflicts. I wish I'd waited until today
to push. sigh.)
BTW, a related topic - when addPCIeRoot is true, we not only add an
implied pcie-root, but we also add a dmi-to-pci-bridge and a pci-bridge.
pci-bridge is a generic device that I assume could work okay on
something non-X86, but the dmi-to-pci-bridge is implemented with an
i82801b11-bridge device, which is very much specific to an Intel
chipset. Is that device really functional on this ARM virt-*
machinetype?
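To make that concrete, when addPCIeRoot is true the post-parse code
ends up giving the domain roughly this set of controllers (illustrative
XML; indexes shown as libvirt would normally assign them):

    <controller type='pci' index='0' model='pcie-root'/>
    <controller type='pci' index='1' model='dmi-to-pci-bridge'/>
    <controller type='pci' index='2' model='pci-bridge'/>

and it's the middle one that is backed by that Intel-specific bridge
device.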