On Tue, May 19, 2015 at 10:15:17AM -0400, Laine Stump wrote:
> On 05/19/2015 05:07 AM, Michael S. Tsirkin wrote:
>> On Wed, Apr 22, 2015 at 10:23:04AM +0100, Daniel P. Berrange wrote:
>>> On Fri, Apr 17, 2015 at 04:53:02PM +0800, Chen Fan wrote:
>>>> Background:
>>>> Live migration is one of the most important features of virtualization
>>>> technology. With regard to recent virtualization techniques, the
>>>> performance of network I/O is critical. Current network I/O
>>>> virtualization (e.g. para-virtualized I/O, VMDq) has a significant
>>>> performance gap compared with native network I/O. Pass-through network
>>>> devices have near-native performance; however, they have thus far
>>>> prevented live migration. No existing method solves the problem of live
>>>> migration with pass-through devices perfectly.
>>>>
>>>> There was an idea to solve the problem, described at:
>>>>
>>>> https://www.kernel.org/doc/ols/2008/ols2008v2-pages-261-267.pdf
>>>>
>>>> Please refer to the above document for detailed information.
>>>>
>>>> So I think this problem could perhaps be solved by combining existing
>>>> technologies. The following are the steps we are considering to
>>>> implement:
>>>>
>>>> - before booting the VM, we anticipate specifying two NICs in the XML
>>>>   for creating a bonding device (one pass-through and one virtual NIC).
>>>>   Here we can specify the NICs' MAC addresses in the XML, which helps
>>>>   qemu-guest-agent find the network interfaces in the guest (see the
>>>>   domain XML sketch after this list).
>>>>
>>>> - when qemu-guest-agent starts up in the guest, it would send a
>>>>   notification to libvirt; libvirt would then call the previously
>>>>   registered initialization callbacks. Through those callback functions,
>>>>   we can create the bonding device according to the XML configuration.
>>>>   Here we use the netcf tool, which makes it easy to create the bonding
>>>>   device.
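(For illustration only, here is a minimal sketch of how the two NICs from the
first step might be declared in the domain XML. The MAC addresses and the PCI
source address are placeholders, and any additional markup grouping the pair
into a bond would be a new extension that is not shown here.)

  <interface type='hostdev' managed='yes'>
    <!-- pass-through NIC (e.g. an SR-IOV VF); fixed MAC so the guest agent can find it -->
    <mac address='52:54:00:aa:bb:01'/>
    <source>
      <address type='pci' domain='0x0000' bus='0x03' slot='0x10' function='0x0'/>
    </source>
  </interface>
  <interface type='network'>
    <!-- virtual (virtio) NIC that remains attached across the migration -->
    <mac address='52:54:00:aa:bb:02'/>
    <source network='default'/>
    <model type='virtio'/>
  </interface>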
>>> I'm not really clear on why libvirt/guest agent needs to be involved in this.
>>> I think configuration of networking is really something that must be left to
>>> the guest OS admin to control. I don't think the guest agent should be trying
>>> to reconfigure guest networking itself, as that is inevitably going to conflict
>>> with configuration attempted by things in the guest like NetworkManager or
>>> systemd-networkd.
>> There should not be a conflict.
>> The guest agent should just give NM the information, and have NM do
>> the right thing.
> That assumes the guest will have NM running. Unless you want to severely
> limit the scope of usefulness, you also need to handle systems that have
> NM disabled, and among those the different styles of system network
> config. It gets messy very fast.
Systems with system network config can just do the configuration manually;
they won't be worse off than they are now.
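(For reference, a minimal sketch of what that configuration amounts to: on
guests that netcf supports, an active-backup bond over the two NICs
corresponds roughly to the interface XML below. The device names, bond mode
and monitoring values are illustrative, not prescribed by the proposal.)

  <interface type="bond" name="bond0">
    <start mode="onboot"/>
    <protocol family="ipv4">
      <dhcp/>
    </protocol>
    <bond mode="active-backup">
      <miimon freq="100" updelay="10" carrier="ioctl"/>
      <!-- slaves: e.g. the pass-through NIC and the virtio NIC; names are illustrative -->
      <interface type="ethernet" name="eth0"/>
      <interface type="ethernet" name="eth1"/>
    </bond>
  </interface>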
>>
>> Users are actually asking for this functionality.
>>
>> Configuring everything manually is possible but error prone.
> Yes, but attempting to do it automatically is also error prone (due to
> the myriad of different guest network config systems, even just within
> the seemingly narrow category of "Linux guests"). Pick your poison :-)
Make it work well for RHEL guests. Others will work with less integration.
--
MST