On Wednesday, 13 August, 2014 10:42:18 AM, Martin Kletzander wrote:
On Mon, Aug 11, 2014 at 03:58:27PM +0200, Levente Kurusa wrote:
>(Imported thread from archives, hopefully I won't break threading. If I
>did, I apologize.)
>
> On Aug 08 2014, Martin Kletzander wrote:
>>On Fri, Aug 08, 2014 at 02:07:56PM +0200, Maxime Leroy wrote:
>>>On Fri, Aug 8, 2014 at 11:21 AM, Martin Kletzander <mkletzan(a)redhat.com> wrote:
>>>>On Thu, Aug 07, 2014 at 05:34:35PM +0200, Maxime Leroy wrote:
>>>>>
>>>>>On Thu, Aug 7, 2014 at 12:33 PM, Martin Kletzander
<mkletzan(a)redhat.com>
>>>>>wrote:
>>>>>>
>>>>>>On Tue, Aug 05, 2014 at 06:48:01PM +0200, Maxime Leroy wrote:
>>>>>>>
>>>[...]
>>>>>>
>>>>>>>Note: the ivshmem server needs to be launched before
>>>>>>> creating the VM. It's not done by libvirt.
>>>>>>>
>>>>>>
>>>>>>This is a good temporary workaround, but keep in mind that libvirt
>>>>>>works remotely as well, and for remote machines libvirt should be able
>>>>>>to do everything for you to be able to run the machine if necessary.
>>>>>>So even if it might not start the server now, it should in the future.
>>>>>>That should be at least differentiable by some parameter (maybe you do
>>>>>>it somewhere in the code somehow, I haven't got into that).
>>>>>>
>>>>>
>>>>>The new version of the ivshmem server has not been accepted yet in QEMU.
>>>>>I think it's too early to have an option for whether to launch an
>>>>>ivshmem server or not.
>>>>>
>>>>>I would prefer to focus on integrating these patches into libvirt first,
>>>>>before adding a new feature to launch an ivshmem server.
>>>>>
>>>>>Are you OK with that?
>>>>>
>>>>
>>>>There was a suggestion of implementing the non-server variant first
>>>>and expanding it to the server variant afterwards. That might be
>>>>the best solution because we'll have a bit more time to see the
>>>>re-factoring differences in QEMU as well. And we can concentrate on
>>>>other details.
>>>>
>>>
>>>I would prefer to have ivshmem server and non-server modes supported in
>>>libvirt with these patches, because the XML format needs to be designed
>>>with both in mind at the same time.
>>>
>>>The new XML format, supporting an optional start of the ivshmem server,
>>>could be:
>>>
>>><shmem type='ivshmem'>
>>> <shm file='ivshmem0'/>
>>> <server socket='/tmp/socket-ivshmem0' start='yes'/>
>>> <size unit='M'>32</size>
>>> <msi vectors='32' ioeventfd='on'/>
>>></shmem>
>>>
>>>Note: This new XML format can support different types of shmem.
>>>
>>>After my holiday, I am going to check how to implement this feature.
>>>What do you think about this XML format?
>>>
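(Side note for context: if I remember the old ivshmem options correctly, the
server case of such XML would map to a QEMU command line roughly like

  -chardev socket,path=/tmp/socket-ivshmem0,id=ivshmem-chr0
  -device ivshmem,chardev=ivshmem-chr0,size=32M,vectors=32,ioeventfd=on,msi=on

and the non-server case to '-device ivshmem,shm=ivshmem0,size=32M'. The
exact property names here are from memory, so double-check them.)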
>>
>>It's better, but I would like this:
>>
>><shmem name='my_ivshmem_001'>
>> <server socket='/tmp/ivsh-server' start='yes'/>
>> ...
>></shmem>
>
>Keep in mind that libvirt should not allow having both the name attribute
>and the server in the same shmem element. QEMU currently only issues a
>warning, but I plan to change that, since it makes no sense to continue
>when there are two possible sources for the SHMEM's file descriptor.
>
I may have misunderstood, so please let me ask for a clarification.
Sure! :-)
What I meant here is that if you use the name attribute, then QEMU will
open/create an SHM named like that. If you use the server, then the server
will send the fd for the SHM (see below).

If you set both 'name' and 'server', then QEMU will have two sources for
the fd (it can get it from the server, or it can open the SHM named $name),
and it will issue a WARNING and just ignore the 'name'. I guess this is a
user error, so I will modify QEMU to make sure it errors out in such a case.
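To illustrate the 'name' path: in that case QEMU itself opens/creates the
POSIX SHM object, roughly like this hypothetical helper (just a sketch, not
the actual QEMU code; QEMU takes the size from the device options):

#include <fcntl.h>      /* O_RDWR, O_CREAT */
#include <sys/mman.h>   /* shm_open() */
#include <sys/stat.h>   /* mode constants */
#include <unistd.h>     /* ftruncate(), close() */

/* Sketch: get the SHM fd directly from its name, which is roughly
 * what happens when only 'name' is given (no server involved). */
static int shm_fd_from_name(const char *name, off_t size)
{
    int fd = shm_open(name, O_RDWR | O_CREAT, 0600);
    if (fd < 0)
        return -1;
    if (ftruncate(fd, size) < 0) {  /* make sure it has the right size */
        close(fd);
        return -1;
    }
    return fd;
}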
One server can serve only one shared memory segment? So if there are
multiple ones, multiple servers must be spawned as well? If one
server can handle multiple segments, then how does qemu tell it about
which one it's communicating (sending an interrupt for example)? And
how does qemu access the memory if it doesn't know the shmem name? I
can only think of passing an FD with the new memfd_create() syscall,
but that has been in linux-next for only around 5 days, so I don't think
that's it.
No, one server can only handle one SHM. Currently, the server has to be
on the same host as the guests in order to be able to pass the fd
via SCM_RIGHTS (see cmsg(3)) over a UNIX socket.
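For reference, the fd passing itself is plain SCM_RIGHTS; the receiving side
looks roughly like this minimal sketch (simplified error handling, not the
actual ivshmem code):

#include <string.h>     /* memcpy() */
#include <sys/socket.h> /* recvmsg(), CMSG_* */
#include <sys/uio.h>    /* struct iovec */

/* Sketch: receive one byte of payload plus one file descriptor over
 * a connected UNIX socket; returns the fd, or -1 on failure. */
static int recv_fd(int sock)
{
    char byte;
    struct iovec iov = { .iov_base = &byte, .iov_len = 1 };
    union {                                  /* aligned control buffer */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } ctrl;
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = ctrl.buf, .msg_controllen = sizeof(ctrl.buf),
    };
    struct cmsghdr *cmsg;
    int fd = -1;

    if (recvmsg(sock, &msg, 0) <= 0)
        return -1;

    for (cmsg = CMSG_FIRSTHDR(&msg); cmsg; cmsg = CMSG_NXTHDR(&msg, cmsg)) {
        if (cmsg->cmsg_level == SOL_SOCKET && cmsg->cmsg_type == SCM_RIGHTS) {
            memcpy(&fd, CMSG_DATA(cmsg), sizeof(fd));
            break;
        }
    }
    return fd;
}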
I must say, I was just too lazy to read the code, so that's why I have so
many questions, sorry for that :)
>>
>>That could be simplified into (in case of no server):
>>
>><shmem name='my_ivshmem_001'/>
>>
>>Of course this will have a PCI address afterwards.
>>
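(And just so we're on the same page about the address: once libvirt assigns
one, I'd expect the formatted XML to grow the usual address element,
something like the following; the values here are made up:

<shmem name='my_ivshmem_001'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</shmem>
)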
>>This design is generic enough that it doesn't mention "ivshmem" (so
it
>>can be implemented differently in any hypervisor).
>>
>>What's your opinion on that?
>>
>>>Any hints on developing this feature (i.e. starting the ivshmem server
>>>in libvirt) are welcome.
>>>
>>
>>With starting the server in libvirt we'll have to wait until it's in
>>qemu *and* we have to deal with how and when to stop the server (if we
>>are running it).
>>
>>>I assume I need to add a new file: src/util/virivshmemserver.c to add
>>>a new function virIvshmemServerRun() and to use it in qemu_command.c.
>>>
>>>How can I check whether an ivshmem-server application is installed on
>>>the host? Are there equivalent behaviors elsewhere in libvirt?
>>>
>>
>>That should be checked by virFileIsExecutable() on an
>>IVSHMEM_SERVER_PATH that would be set by configure.ac (take a look at
>>EBTABLES_PATH in src/util/virfirewall.c for example).
>>
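For what it's worth, a rough sketch of how such a helper could look,
following the usual virCommand pattern (virIvshmemServerRun() does not exist
yet, and the "-S <socket>" option is my guess at the server's CLI):

#include <config.h>

#include "vircommand.h"
#include "virfile.h"

/* Hypothetical sketch only: IVSHMEM_SERVER_PATH would come from
 * configure.ac, like EBTABLES_PATH does for virfirewall. */
static int
virIvshmemServerRun(const char *socketPath)
{
    virCommandPtr cmd = NULL;
    int ret = -1;

    if (!virFileIsExecutable(IVSHMEM_SERVER_PATH))
        return -1;  /* no usable ivshmem-server binary on this host */

    cmd = virCommandNew(IVSHMEM_SERVER_PATH);
    virCommandAddArgList(cmd, "-S", socketPath, NULL);
    virCommandDaemonize(cmd);  /* the server must outlive this call */

    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;

    ret = 0;
 cleanup:
    virCommandFree(cmd);
    return ret;
}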
>>I'll also see if I'll be able to get any info from the QEMU side about
>>what we can expect to happen with ivshmem in the near future.
>
>I plan to make a few mostly internal modifications to how it accesses
>memory and how the registers are laid out in BAR1.
>
>I don't really plan on changing the command line for now. One of my
>minimal changes was changing 'use64=<int>' to 'use64=<bool>'; however,
>MST didn't really like it, claiming that it would break existing
>scripts, so I postponed the patch until I finish off doing the more
>serious internal stuff.
>
>Watch out for patchdump with RFCs on qemu-devel in the next few days...
>
>Thanks!
>Levente Kurusa
Thanks for the info!
Martin
Cheers,
Levente Kurusa