[libvirt] RFC: Splitting python binding out into a separate repo & adding to PyPI

As everyone knows, we have historically always shipped the python binding as part of the libvirt primary tar.gz distribution. In some ways that has simplified life for people, since we know they'll always have a libvirt python that matches their libvirt C library.

At the same time though, this policy of ours is causing increasing amounts of pain for a number of our downstream users.

In OpenStack, in particular, their development and test environments aim not to rely on any system-installed python packages. They use a virtualenv and pip to install all python deps from PyPI (the equivalent of Perl's CPAN in the Python world). This approach works for everything except the libvirt Python code, which is not available standalone on PyPI. This is causing so much pain that people have suggested taking the libvirt python code we ship and just uploading it to PyPI themselves[1]. That would obviously be a somewhat hostile action to take, but the way we distribute libvirt python is forcing OpenStack to consider such things.

In the RHEL world too, bundling libvirt + its python binding is causing pain with the fairly recent concept of "software collections"[2]. These allow users to install multiple versions of languages like Python, Perl, etc. on the same box in parallel. Using libvirt python with these alternate python installs, though, requires recompiling the entire libvirt distribution just to get the Python binding. This is obviously not an approach that works for most people, particularly if they're looking to populate their software collection using 'pip' rather than RPM.

Looking on Google, there are a number of other people asking for libvirt python as a separate module, e.g. on Stack Overflow[3].

I don't think these issues are going to go away; in fact I think they will likely become more pressing, until the point where some 3rd party takes the step of providing libvirt python bindings themselves.
I don't think we want to let ourselves drift into the situation where we lose control over releasing the libvirt python bindings. IMHO we should / must listen to our users here before it is too late.

We can still release libvirt python at the same time as normal libvirt releases, and require that people update the bindings whenever adding new APIs (if the generator doesn't cope with them). We should simply distribute python as a separate tar.gz, as we do for all other languages, and upload it to PyPI, as well as to libvirt.org FTP, when doing a release.

Obviously there will be some work to separate things out, but I don't see that being insurmountable, since all other language bindings manage to be separate, even when doing code generation. We'd also want to change to use distutils, rather than autoconf, since that's what the python world wants.

Regards, Daniel

[1] http://lists.openstack.org/pipermail/openstack-dev/2013-August/013187.html
[2] https://access.redhat.com/site/documentation/en-US/Red_Hat_Developer_Toolset...
[3] http://stackoverflow.com/questions/14924460/is-there-any-way-to-install-libv...

--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Thu, Aug 29, 2013 at 12:24:41 +0100, Daniel Berrange wrote: ...
IMHO we should / must listen to our users here before it is too late.
We can still release libvirt python at the same time as normal libvirt releases, and require that people update the bindings whenever adding new APIs (if the generator doesn't cope with them). We should simply distribute python as a separate tar.gz, as we do for all other languages, and upload it to PyPi, as well as libvirt.org FTP when doing a release.
Obviously there will be some work to separate things out, but I don't see that being insurmountable, since all other language bindings manage to be separate, even when doing code generation. We'd also want to change to use distutils, rather than autoconf, since that's what the python world wants.
Your suggestion looks reasonable from the python community point of view. However, the main benefit of having the python bindings in the same repo as libvirt itself is that they're always (for a somewhat relaxed definition of always) in sync with libvirt. In case we split them, I'd like us to do it in a way that anyone hacking on libvirt will also automatically get and build the python bindings. Is a git submodule something that could help with that? Or is this complete nonsense?

Jirka

On Thu, Aug 29, 2013 at 02:50:22PM +0200, Jiri Denemark wrote:
On Thu, Aug 29, 2013 at 12:24:41 +0100, Daniel Berrange wrote: ...
IMHO we should / must listen to our users here before it is too late.
We can still release libvirt python at the same time as normal libvirt releases, and require that people update the bindings whenever adding new APIs (if the generator doesn't cope with them). We should simply distribute python as a separate tar.gz, as we do for all other languages, and upload it to PyPi, as well as libvirt.org FTP when doing a release.
Obviously there will be some work to separate things out, but I don't see that being insurmountable, since all other language bindings manage to be separate, even when doing code generation. We'd also want to change to use distutils, rather than autoconf, since that's what the python world wants.
Your suggestion looks reasonable from the python community point of view. However, the main benefit of having the python bindings in the same repo as libvirt itself is that they're always (for a somewhat relaxed definition of always) in sync with libvirt. In case we split them, I'd like us to do it in a way that anyone hacking on libvirt will also automatically get and build the python bindings. Is a git submodule something that could help with that? Or is this complete nonsense?
To be honest, I don't really see why the python binding needs to be treated as a special case amongst all our language bindings, other than due to the historical accident that DV put them in tree when writing libvirt.

With the Perl bindings I have a test case which analyses the libvirt-api.xml file and the symbols referenced by the binding code. It then reports on any APIs which have not been mapped to Perl. Likewise for header file constants. If we wrote a similar test case for the python bindings, and then had an automated build, we'd quickly detect any case where we added a new API that was not automatically handled by the python generator.py.

Regards, Daniel
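The Perl-style coverage check described above could look roughly like the sketch below for the python binding. This is illustrative only: the inline XML stands in for the real libvirt-api.xml (whose actual schema may differ), and in a real test `bound_names` would come from `dir(libvirt)` rather than a hardcoded list.

```python
# Sketch of an API-coverage test: parse the API description XML and
# report every exported C function that the binding does not expose.
import xml.etree.ElementTree as ET

def missing_bindings(api_xml, bound_names):
    """Return C API functions absent from the binding's namespace."""
    root = ET.fromstring(api_xml)
    # Collect the name of every <function> element in the document.
    exported = {f.get("name") for f in root.iter("function")}
    return sorted(exported - set(bound_names))

# Tiny stand-in for libvirt-api.xml (illustrative, not the real schema):
SAMPLE = """
<api name='libvirt'>
  <symbols>
    <function name='virConnectOpen'/>
    <function name='virDomainCreate'/>
    <function name='virDomainShinyNewAPI'/>
  </symbols>
</api>
"""

# In a real test, bound_names would be dir(libvirt).
bound = ["virConnectOpen", "virDomainCreate"]
print(missing_bindings(SAMPLE, bound))  # -> ['virDomainShinyNewAPI']
```

Run as part of an automated build, a non-empty result would fail the test and flag exactly the situation where a new API slipped past the generator.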

On Thu, Aug 29, 2013 at 6:24 AM, Daniel P. Berrange <berrange@redhat.com> wrote: <snip>
In the RHEL world too, bundling libvirt + its python binding is causing pain with the fairly recent concept of "software collections"[2]. These allow users to install multiple versions of languages like Python, Perl, etc. on the same box in parallel. Using libvirt python with these alternate python installs, though, requires recompiling the entire libvirt distribution just to get the Python binding. This is obviously not an approach that works for most people, particularly if they're looking to populate their software collection using 'pip' rather than RPM.
<snip>
Same reason on Gentoo as well. Gentoo has supported multiple Python installations for years now, and I've alternated between some kludge magic to try to make more of them work with libvirt and finally ended up hardcoding Gentoo's libvirt Python bindings to just 2.7, which constantly causes me grief in the form of bug reports for other versions. So I support this effort wholeheartedly and will gladly lend a hand where time permits.

-- Doug Goldstein

On 08/29/2013 05:24 AM, Daniel P. Berrange wrote:
I don't think these issues are going to go away, in fact I think they will likely become more pressing, until the point where some 3rd party takes the step of providing libvirt python bindings themselves. I don't think we want to let ourselves drift into the situation where we lose control over releasing libvirt python bindings.
Splitting the python bindings into their own project makes sense to me. We've got enough interest in python on this list that I'm not too worried about enforcing that new APIs in the main project be accompanied by patches to libvirt-python.git, and keeping up with a release of the bindings for each upstream release.

I don't think the python bindings should be a submodule of libvirt proper, but I wouldn't be opposed to a meta-git project that has libvirt, libvirt-python, and possibly other libvirt-* binding subprojects as submodules, so that you could update the metaproject and pick up all the bindings at once.

-- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

The 29/08/13, Eric Blake wrote:
On 08/29/2013 05:24 AM, Daniel P. Berrange wrote:
I don't think these issues are going to go away, in fact I think they will likely become more pressing, until the point where some 3rd party takes the step of providing libvirt python bindings themselves. I don't think we want to let ourselves drift into the situation where we lose control over releasing libvirt python bindings.
Splitting the python bindings into their own project makes sense to me. We've got enough interest in python on this list that I'm not too worried about enforcing that new APIs in the main project be accompanied by patches to libvirt-python.git, and keeping up with a release of the bindings for each upstream release.
I'm a bit off topic, but I see real benefits in the APIs having their own releases.

Notice I'm talking about the APIs. What makes it hard for small projects to use the python bindings are the API changes (up to the point that I don't use them). I guess this issue will persist as long as the APIs stay tightly tied to the python bindings.

In order to get smart backward- and forward-compatible APIs, I guess it would make sense to decouple them from the "low level" bindings. Introducing a new API <-> bindings interface could do the job of checking the bindings version and making the correct bindings calls. Perhaps it would be worth the trouble to start a new project for such a "public API"? I think this is another way to solve the issues in this request, with the added benefit of providing a very stable public API.

-- Nicolas Sebrecht

On Tue, Sep 03, 2013 at 01:27:50PM +0200, Nicolas Sebrecht wrote:
The 29/08/13, Eric Blake wrote:
On 08/29/2013 05:24 AM, Daniel P. Berrange wrote:
I don't think these issues are going to go away, in fact I think they will likely become more pressing, until the point where some 3rd party takes the step of providing libvirt python bindings themselves. I don't think we want to let ourselves drift into the situation where we lose control over releasing libvirt python bindings.
Splitting the python bindings into their own project makes sense to me. We've got enough interest in python on this list that I'm not too worried about enforcing that new APIs in the main project be accompanied by patches to libvirt-python.git, and keeping up with a release of the bindings for each upstream release.
I'm a bit off topic, but I see real benefits in the APIs having their own releases.
Notice I'm talking about the APIs. What makes it hard for small projects to use the python bindings are the API changes (up to the point that I don't use them). I guess this issue will persist as long as the APIs stay tightly tied to the python bindings.
Err, what API changes are you talking about? Both the libvirt C API and any language bindings, including the python ones, are intended to be long-term stable APIs. We only ever add new APIs or flags - never change existing APIs.

Daniel

The 03/09/13, Daniel P. Berrange wrote:
Err, what API changes are you talking about? Both the libvirt C API and any language bindings, including the python ones, are intended to be long-term stable APIs. We only ever add new APIs or flags - never change existing APIs.
I can't give a precise example, sorry. I had a bad surprise in late 2010 and have kept away from the python bindings since. I might change over to relying on the bindings if you actually think the APIs are long-term stable.

-- Nicolas Sebrecht

On Tue, Sep 03, 2013 at 02:13:58PM +0200, Nicolas Sebrecht wrote:
The 03/09/13, Daniel P. Berrange wrote:
Err, what API changes are you talking about? Both the libvirt C API and any language bindings, including the python ones, are intended to be long-term stable APIs. We only ever add new APIs or flags - never change existing APIs.
I can't give a precise example, sorry. I had a bad surprise in late 2010 and have kept away from the python bindings since. I might change over to relying on the bindings if you actually think the APIs are long-term stable.
They have always been stable & will continue to be stable. Of course sometimes there may be bugs in particular releases that get fixed, but that's normal & would not change the semantics of the APIs in a backwards-incompatible manner.

Daniel

On Thu, 2013-08-29 at 12:24 +0100, Daniel P. Berrange wrote:
In OpenStack, in particular, their development and test environments aim to be able to not rely on any system installed python packages. They use a virtualenv and pip to install all python deps from PyPi (equivalent of Perl's CPAN in Python world). This approach works for everything except the libvirt Python code which is not available standalone on PyPi. This is causing so much pain that people have suggested taking the libvirt python code we ship and just uploading it to PyPi themselves[1]. This would obviously be a somewhat hostile action to take, but the way we distribute libvirt python is forcing OpenStack to consider such things.
As always, Dan is spot on. You see little side effects of this problem in a number of different places.

For example, Nova has to enable "sitepackages" in the virtualenv used for its unit testing because libvirt can't be installed in the virtualenv: https://github.com/openstack/nova/blob/86c97ff/tox.ini#L5

Or similarly, here in TripleO: https://github.com/openstack/tripleo-image-elements/blob/e14861e/elements/os...

Or there's the hack for running Nova's unit tests without libvirt installed: https://github.com/openstack/nova/blob/86c97ff0/nova/tests/virt/libvirt/test...

I've seen this come up again and again on the project, and generally the reaction is "gah! libvirt!"; while a workaround is always found, it leaves a bitter taste. qpid was, up until 6 months ago, the only other project that suffered from this problem, but (AFAIR) someone involved in OpenStack did the work to make it available on PyPI: https://pypi.python.org/pypi/qpid-python/

As Dan mentions, it's really quite possible that somebody will dive in and come up with a temporary hack to get it up on PyPI. I know this will take some significant work to get right, but it will make a huge difference for OpenStack and for libvirt's perception within OpenStack.

Thanks, Mark.

On Thu, Aug 29, 2013 at 12:24:41PM +0100, Daniel P. Berrange wrote:
As everyone knows, we have historically always shipped the python binding as part of the libvirt primary tar.gz distribution. In some ways that has simplified life for people, since we know they'll always have a libvirt python that matches their libvirt C library.
At the same time though, this policy of ours is causing increasing amounts of pain for a number of our downstream users.
In OpenStack, in particular, their development and test environments aim to be able to not rely on any system installed python packages. They use a virtualenv and pip to install all python deps from PyPi (equivalent of Perl's CPAN in Python world). This approach works for everything except the libvirt Python code which is not available standalone on PyPi. This is causing so much pain that people have suggested taking the libvirt python code we ship and just uploading it to PyPi themselves[1]. This would obviously be a somewhat hostile action to take, but the way we distribute libvirt python is forcing OpenStack to consider such things.
In the RHEL world too, bundling libvirt + its python binding is causing pain with the fairly recent concept of "software collections"[2]. These allow users to install multiple versions of languages like Python, Perl, etc. on the same box in parallel. Using libvirt python with these alternate python installs, though, requires recompiling the entire libvirt distribution just to get the Python binding. This is obviously not an approach that works for most people, particularly if they're looking to populate their software collection using 'pip' rather than RPM.
Looking on google there are a number of other people asking for libvirt python as a separate module, eg on Stackoverflow[3].
I don't think these issues are going to go away, in fact I think they will likely become more pressing, until the point where some 3rd party takes the step of providing libvirt python bindings themselves. I don't think we want to let ourselves drift into the situation where we lose control over releasing libvirt python bindings.
IMHO we should / must listen to our users here before it is too late.
We can still release libvirt python at the same time as normal libvirt releases, and require that people update the bindings whenever adding new APIs (if the generator doesn't cope with them). We should simply distribute python as a separate tar.gz, as we do for all other languages, and upload it to PyPi, as well as libvirt.org FTP when doing a release.
Obviously there will be some work to separate things out, but I don't see that being insurmountable, since all other language bindings manage to be separate, even when doing code generation. We'd also want to change to use distutils, rather than autoconf, since that's what the python world wants.
Okay, message received :-)

First, we keep the status quo for 1.1.2 - obvious, but better to state it.

Second, the key point is really to have tarballs of the python bindings available as a separate source from upstream (us!), and to make sure we remove the bindings from the libvirt tarball releases. Right?

Third, having a separate repository for the python bindings doesn't bring much unless we really want to generate tarballs on a regular basis for that project independently of the libvirt ones. On the one hand, having the merged repository like we do now means the python bindings patches tend to be reviewed and tested as the new APIs are added, i.e. when it's fresh, which keeps people more inclined to actually think about them :) On the other hand, moving to a separate repo would likely lose our git history (not sure if we can keep it; I doubt it), which would be a bummer IMHO.

So I would be tempted by the minimal approach of splitting the tarballs, making sure that the python subdirectory that will lead to the python bindings gets a classic and up-to-date setup.py, maybe moving some of the auto* python-specific code there as equivalent scripts, or focusing our effort on making sure the setup.py is good enough. Then the corresponding spec.in goes there too. At libvirt release time I regenerate 2 tarballs instead of one, and we have the pleasure of creating new component builds for the python bindings from there.

Opinions about this plan?

Daniel

--
Daniel Veillard | Open Source and Standards, Red Hat
veillard@redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | virtualization library http://libvirt.org/
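For reference, a "classic" distutils setup.py of the kind this plan calls for might look something like the sketch below. The module names, version number, source file names, and linked library are assumptions for illustration only, not the actual layout of libvirt's python subdirectory.

```python
# Hypothetical setup.py for a standalone libvirt-python distribution.
# All names, versions, and paths here are illustrative assumptions.
from distutils.core import setup, Extension

setup(
    name="libvirt-python",
    version="1.1.3",
    description="Python bindings for the libvirt library",
    url="http://libvirt.org",
    license="LGPLv2+",
    py_modules=["libvirt"],          # generated pure-python wrapper
    ext_modules=[
        Extension(
            "libvirtmod",            # C extension backing the wrapper
            sources=["libvirt-override.c", "libvirt.c"],
            libraries=["virt"],      # link against libvirt.so
        )
    ],
)
```

With something along these lines in place, `python setup.py sdist` would produce the separate tarball at release time, and the result could be uploaded to PyPI as well as libvirt.org FTP.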

On Fri, Aug 30, 2013 at 03:27:42PM +0800, Daniel Veillard wrote:
On Thu, Aug 29, 2013 at 12:24:41PM +0100, Daniel P. Berrange wrote:
As everyone knows, we have historically always shipped the python binding as part of the libvirt primary tar.gz distribution. In some ways that has simplified life for people, since we know they'll always have a libvirt python that matches their libvirt C library.
At the same time though, this policy of ours is causing increasing amounts of pain for a number of our downstream users.
BTW, on a related issue, the bindings generator is very similar to the one from libxml2, and I ported the libxml2 one to work with python3; there are about a dozen patches in libxml2 git in GNOME from this spring, and it may be a bit easier to carry out the conversion before doing the split. The patches obviously won't apply as-is, since the generators have diverged somewhat since the creation of libvirt, but I think there is enough commonality that it is worth trying to do this before the change.

I had planned to do the porting myself, but I have a crazy workload these days, and unless I take forced vacations for it, it may be a bit hard for me to do this before the 1.1.3 release if we want to do the split by then <grin/>

Daniel

On Fri, Aug 30, 2013 at 03:27:42PM +0800, Daniel Veillard wrote:
On the other hand moving to a separate repo would likely lose our git history (not sure if we can keep it, i doubt) which would be a bummer IMHO.
You can use git filter-branch to extract the python bindings (assuming they're self-contained in the python/ directory) while keeping history:

« To rewrite the repository to look as if foodir/ had been its project root, and discard all other history:

git filter-branch --subdirectory-filter foodir -- --all »

(this example comes from the git filter-branch manpage)

Christophe

On Fri, Aug 30, 2013 at 10:18:19AM +0200, Christophe Fergeau wrote:
On Fri, Aug 30, 2013 at 03:27:42PM +0800, Daniel Veillard wrote:
On the other hand moving to a separate repo would likely lose our git history (not sure if we can keep it, i doubt) which would be a bummer IMHO.
You can use git filter-branch to extract the python bindings (assuming it's self contained in the python/ directory) while keeping history:
« To rewrite the repository to look as if foodir/ had been its project root, and discard all other history:
git filter-branch --subdirectory-filter foodir -- --all » (this example comes from git filter-branch manpage)
Okay, good to know :-) I would still slightly prefer to keep them in tree for the other reasons, but that would be less of an issue if we moved to a separate repo. Thanks!

Daniel

On Fri, Aug 30, 2013 at 10:18:19AM +0200, Christophe Fergeau wrote:
On Fri, Aug 30, 2013 at 03:27:42PM +0800, Daniel Veillard wrote:
On the other hand moving to a separate repo would likely lose our git history (not sure if we can keep it, i doubt) which would be a bummer IMHO.
You can use git filter-branch to extract the python bindings (assuming it's self contained in the python/ directory) while keeping history:
« To rewrite the repository to look as if foodir/ had been its project root, and discard all other history:
git filter-branch --subdirectory-filter foodir -- --all » (this example comes from git filter-branch manpage)
Yes, upon splitting them, I'd absolutely want to keep the git history intact for the python part of the tree.

Daniel

On Fri, Aug 30, 2013 at 03:27:42PM +0800, Daniel Veillard wrote:
On Thu, Aug 29, 2013 at 12:24:41PM +0100, Daniel P. Berrange wrote:
As everyone knows, we have historically always shipped the python binding as part of the libvirt primary tar.gz distribution. In some ways that has simplified life for people, since we know they'll always have a libvirt python that matches their libvirt C library.
At the same time though, this policy of ours is causing increasing amounts of pain for a number of our downstream users.
In OpenStack, in particular, their development and test environments aim to be able to not rely on any system installed python packages. They use a virtualenv and pip to install all python deps from PyPi (equivalent of Perl's CPAN in Python world). This approach works for everything except the libvirt Python code which is not available standalone on PyPi. This is causing so much pain that people have suggested taking the libvirt python code we ship and just uploading it to PyPi themselves[1]. This would obviously be a somewhat hostile action to take, but the way we distribute libvirt python is forcing OpenStack to consider such things.
In the RHEL world too, bundling libvirt + its python binding is causing pain with the fairly recent concept of "software collections"[2]. These allow users to install multiple versions of languages like Python, Perl, etc. on the same box in parallel. Using libvirt python with these alternate python installs, though, requires recompiling the entire libvirt distribution just to get the Python binding. This is obviously not an approach that works for most people, particularly if they're looking to populate their software collection using 'pip' rather than RPM.
Looking on google there are a number of other people asking for libvirt python as a separate module, eg on Stackoverflow[3].
I don't think these issues are going to go away, in fact I think they will likely become more pressing, until the point where some 3rd party takes the step of providing libvirt python bindings themselves. I don't think we want to let ourselves drift into the situation where we lose control over releasing libvirt python bindings.
IMHO we should / must listen to our users here before it is too late.
We can still release libvirt python at the same time as normal libvirt releases, and require that people update the bindings whenever adding new APIs (if the generator doesn't cope with them). We should simply distribute python as a separate tar.gz, as we do for all other languages, and upload it to PyPi, as well as libvirt.org FTP when doing a release.
Obviously there will be some work to separate things out, but I don't see that being insurmountable, since all other language bindings manage to be separate, even when doing code generation. We'd also want to change to use distutils, rather than autoconf, since that's what the python world wants.
Okay, message received :-)
First we keep the status quo for 1.1.2, obvious but better to state it.
Of course, I'm not suggesting we rush into anything. Perhaps the next release, perhaps the one after, etc....
Second, the key point is really to have tarballs of the python bindings available as a separate source from upstream (us!), and to make sure we remove the bindings from the libvirt tarball releases. Right?
Yes, that is the key point.
Third, having a separate repository for the python bindings doesn't bring much unless we really want to generate tarballs on a regular basis for that project independently of the libvirt ones. On the one hand, having the merged repository like we do now means the python bindings patches tend to be reviewed and tested as the new APIs are added, i.e. when it's fresh, which keeps people more inclined to actually think about them :) On the other hand, moving to a separate repo would likely lose our git history (not sure if we can keep it; I doubt it), which would be a bummer IMHO.
IMHO I'd rather see us have a separate python repository, since I like the clarity of one repo == one dist. In particular, I've never been a fan of projects where there are multiple different build systems used for different parts of the git tree. I'd like to think we can address the issue of API additions via automated testing.

More generally, I'd like to see us get a bit more serious about several of our language bindings. For both Perl and Python I'd like to see us guarantee that they will always be in sync via automated testing, and aim to bring other bindings up to parity too over time.
So I would be tempted by the minimal approach of splitting the tarballs, making sure that the python subdirectory that will lead to the python bindings gets a classic and up-to-date setup.py, maybe moving some of the auto* python-specific code there as equivalent scripts, or focusing our effort on making sure the setup.py is good enough. Then the corresponding spec.in goes there too.
At libvirt release time I regenerate 2 tarballs instead of one, and we have the pleasure to create new components builds for the python bindings from there.
Opinions about this plan ?
Regards, Daniel

On 29.08.2013 13:24, Daniel P. Berrange wrote:
As everyone knows, we have historically always shipped the python binding as part of the libvirt primary tar.gz distribution. In some ways that has simplified life for people, since we know they'll always have a libvirt python that matches their libvirt C library.
At the same time though, this policy of ours is causing increasing amounts of pain for a number of our downstream users.
In OpenStack, in particular, their development and test environments aim to be able to not rely on any system installed python packages. They use a virtualenv and pip to install all python deps from PyPi (equivalent of Perl's CPAN in Python world). This approach works for everything except the libvirt Python code which is not available standalone on PyPi. This is causing so much pain that people have suggested taking the libvirt python code we ship and just uploading it to PyPi themselves[1]. This would obviously be a somewhat hostile action to take, but the way we distribute libvirt python is forcing OpenStack to consider such things.
In the RHEL world too, bundling libvirt + its python binding is causing pain with the fairly recent concept of "software collections"[2]. These allow users to install multiple versions of languages like Python, Perl, etc. on the same box in parallel. Using libvirt python with these alternate python installs, though, requires recompiling the entire libvirt distribution just to get the Python binding. This is obviously not an approach that works for most people, particularly if they're looking to populate their software collection using 'pip' rather than RPM.
Looking on google there are a number of other people asking for libvirt python as a separate module, eg on Stackoverflow[3].
I am not sure about PyPI (I'm not an intense python user), but I think you are mixing two problems here. One is having the bindings in the git repo; the other is packaging. If we are not shipping the python bindings in a separate package (like we do for qemu, for instance), or the -python package is not optional, then that is the problem.

Personally, I think if we do this, then the python bindings will suffer from the decision every time somebody introduces an API extension, because sooner or later we will forget to update the python bindings.

Anyway - I don't wanna be a show stopper, so take this as my concerns, not a vote for or against.

Michal

On Fri, Aug 30, 2013 at 09:44:26AM +0200, Michal Privoznik wrote:
On 29.08.2013 13:24, Daniel P. Berrange wrote:
As everyone knows, we have historically always shipped the python binding as part of the libvirt primary tar.gz distribution. In some ways that has simplified life for people, since we know they'll always have a libvirt python that matches their libvirt C library.
At the same time though, this policy of ours is causing increasing amounts of pain for a number of our downstream users.
In OpenStack, in particular, their development and test environments aim to be able to not rely on any system installed python packages. They use a virtualenv and pip to install all python deps from PyPi (equivalent of Perl's CPAN in Python world). This approach works for everything except the libvirt Python code which is not available standalone on PyPi. This is causing so much pain that people have suggested taking the libvirt python code we ship and just uploading it to PyPi themselves[1]. This would obviously be a somewhat hostile action to take, but the way we distribute libvirt python is forcing OpenStack to consider such things.
In the RHEL world too, bundling libvirt + its python binding is causing pain with the fairly recent concept of "software collections"[2]. These allow users to install multiple versions of languages like Python, Perl, etc. on the same box in parallel. Using libvirt python with these alternate python installs, though, requires recompiling the entire libvirt distribution just to get the Python binding. This is obviously not an approach that works for most people, particularly if they're looking to populate their software collection using 'pip' rather than RPM.
Looking on google there are a number of other people asking for libvirt python as a separate module, eg on Stackoverflow[3].
I am not sure about PyPI (I'm not an intense python user), but I think you are mixing two problems here. One is having the bindings in the git repo; the other is packaging. If we are not shipping the python bindings in a separate package (like we do for qemu, for instance), or the -python package is not optional, then that is the problem.
Personally, I think if we do this, then the python bindings will suffer from the decision every time somebody introduces an API extension, because sooner or later we will forget to update the python bindings.
If we address this via automated testing, though, I think the problem of keeping them in sync pretty much goes away. The generator will handle most additions, and we'd get notification of build problems in cases where it didn't handle them.

Daniel
participants (9)
- Christophe Fergeau
- Daniel P. Berrange
- Daniel Veillard
- Doug Goldstein
- Eric Blake
- Jiri Denemark
- Mark McLoughlin
- Michal Privoznik
- Nicolas Sebrecht