Laine Stump wrote:
On 04/13/2014 08:53 AM, Roman Bogorodskiy wrote:
> Hi,
>
> This is the first attempt to implement PCI address allocation for the
> bhyve driver. This patch is by no means a complete version and the idea
> of this is to understand if I'm moving in the right direction.
>
> This code is based on the one from QEMU driver.
Rather than doing a cut/paste of the qemu code, it would be greatly more
desirable to move the pci address allocation from qemu into a library
that is accessible from both drivers. This will eliminate dual
maintenance headaches in the future.
Extracting common code into a library makes perfect sense. But before
doing that I'd like to have a complete and functional implementation for
bhyve (not necessarily pushed). Once it's done, it will be more or less
obvious which parts of the code should be shared; otherwise there's a
risk of extracting code from qemu that bhyve would never use.
Projects like this tend to be painful, but in the end it means that
when someone using qemu finds and fixes a bug, it will automatically be
fixed for bhyve. And, for example, when the qemu driver adds support for
PCIe "root-port" controllers and upstream/downstream switches[*], the
bhyve driver will automatically get address allocation support for those
controllers too (if/when it implements the controllers).
Or is there some particular reason that makes it better to keep it
separate (or maybe some other conversation that I missed, or have
forgotten)?
I don't see a reason to keep it separate at this point, but as I've
mentioned above, I think it'd make sense for me to complete the current
patch as is and then analyze which parts need to be shared.
I don't think you've missed any conversations on that topic.
[*]For reference, here is a description of the various PCI controller
types - the design in this email is fairly close to what was eventually
implemented, except that the actual code uses a "model" attribute on the
controller element rather than a separate <model> subelement:
https://www.redhat.com/archives/libvir-list/2013-April/msg01144.html
(root-port, upstream-switch-port, and downstream-switch-port weren't
implemented at the time, because they weren't immediately useful)
> Even though bhyve currently
> has no support for PCI bridges, it should be possible to add that
> support without major rewrites when this feature becomes available in bhyve.
>
> So, currently we have the following. For a domain like that:
>
> https://gist.github.com/novel/10569989#file-domainin-xml
>
> The processed domain looks this way:
>
> https://gist.github.com/novel/10569989#file-domainout-xml
>
> and the command is:
>
> https://gist.github.com/novel/10569989#file-cmd
>
> Please let me know if it's inconvenient to follow gist links; I didn't
> put domain xml inline as it's pretty lengthy.
>
> Open questions are:
>
> * What's the best way to deal with the 0:0,hostbridge device? Should it
> be explicitly added to the domain definition?
No. Slot 0 on each PCI controller is considered to be reserved, and
doesn't show up in the domain definition. The address allocation code
takes that into account.
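As a rough illustration of what "slot 0 is reserved" means in the allocator (a hypothetical sketch in C; the `PciBusSketch`/`pciBusSketchNextSlot` names are invented for this example and are not libvirt's actual API), the search for a free slot simply starts at 1, so the hostbridge's slot is never handed out and never appears in the domain definition:

```c
#include <stdbool.h>

/* Hypothetical sketch, not libvirt's real allocator: slot 0 on each
 * PCI bus is reserved for the hostbridge, so the allocator never
 * hands it out. */
#define PCI_SLOT_MIN 1    /* slot 0 reserved for the hostbridge */
#define PCI_SLOT_MAX 31

typedef struct {
    bool slotInUse[PCI_SLOT_MAX + 1];
} PciBusSketch;

/* Return the next free slot on the bus (marking it used),
 * or -1 if the bus is full. */
static int
pciBusSketchNextSlot(PciBusSketch *bus)
{
    int slot;

    for (slot = PCI_SLOT_MIN; slot <= PCI_SLOT_MAX; slot++) {
        if (!bus->slotInUse[slot]) {
            bus->slotInUse[slot] = true;
            return slot;
        }
    }
    return -1;
}
```

A device with an explicit address (say, the lpc bridge pinned at slot 31) would just set its `slotInUse` entry up front, before auto-allocation runs.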
Good.
> * How to handle the lpc device that's required for the console? From
> bhyve's point of view it looks like this:
>
> -s 31,lpc -l com1,ttydev
>
> That is, LPC PCI-ISA bridge on PCI slot and com1 port on that bridge.
We don't model the ISA bus for qemu, and also don't support adding any
device to the ISA bus (although qemu has one). I think you can just
directly define a serial character device with port='0':
  <serial type='pty'>
    <target port='0'/>
  </serial>
Unfortunately, bhyve uses an 'nmdm'-type console now, and it requires
the ISA bridge and ISA bus (more details on that in commit
6c91134de46ea481fa36c008c0a3667cbd088f1c).
Thanks,
Roman Bogorodskiy