[Libvir] Dom0 hypercall ABI changes in xen-unstable ?
by Daniel P. Berrange
This changeset concerns me in today's xen-unstable
http://xenbits2.xensource.com/xen-unstable.hg?rev/30af6cfdb05c
"Make domctl/sysctl interfaces 32-/64-bit invariant.
This kills off a fair amount of unpleasant CONFIG_COMPAT shimming and
avoids needing to keep the compat paths in sync as these interfaces
continue to develop."
For example:
@@ -81,9 +81,9 @@ struct xen_sysctl_physinfo {
     uint32_t sockets_per_node;
     uint32_t nr_nodes;
     uint32_t cpu_khz;
-    uint64_t total_pages;
-    uint64_t free_pages;
-    uint64_t scrub_pages;
+    uint64_aligned_t total_pages;
+    uint64_aligned_t free_pages;
+    uint64_aligned_t scrub_pages;
     uint32_t hw_cap[8];
This suggests to me that the alignment of these fields may now have changed
on 32-bit hosts. Well, at least it would have, if it were not for
public/xen.h doing:
#define uint64_aligned_t uint64_t
So has it changed or not? The interface version was incremented, at
least...
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000004
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000005
-#define XEN_SYSCTL_INTERFACE_VERSION 0x00000002
+#define XEN_SYSCTL_INTERFACE_VERSION 0x00000003
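For reference, the reason the alignment question arises at all: gcc on
32-bit x86 gives a bare uint64_t only 4-byte alignment inside structs,
whereas x86_64 gives it 8-byte alignment, so field offsets can differ
between the two builds unless an attribute forces them to agree. A minimal
standalone sketch (not Xen code) showing the difference:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* On a 32-bit x86 build the plain layout puts total_pages at offset 4,
 * while a 64-bit build puts it at offset 8; forcing 8-byte alignment
 * makes the two layouts identical. */
typedef uint64_t plain_u64;
typedef uint64_t __attribute__((aligned(8))) aligned_u64;

struct info_plain   { uint32_t cpu_khz; plain_u64   total_pages; };
struct info_aligned { uint32_t cpu_khz; aligned_u64 total_pages; };

int main(void)
{
    printf("plain:   offset %zu\n", offsetof(struct info_plain,   total_pages));
    printf("aligned: offset %zu\n", offsetof(struct info_aligned, total_pages));
    return 0;
}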
I'm expecting the worst in terms of ABI compat, but I'm really not at all
certain yet.
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] libvirt and accessing remote systems
by Richard W.M. Jones
This is a follow on to this thread:
https://www.redhat.com/archives/libvir-list/2007-January/thread.html#00064
but I think it deserves a thread of its own for discussion.
Background:
Dan drew this diagram proposing a way to include remote access to
systems from within libvirt:
http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-2.png
libvirt would continue as now to provide direct hypervisor calls,
direct access to xend and so on. But in addition, a new backend
would be written ("remote") which could talk to a remote daemon
("libvirtd") using some sort of RPC mechanism.
Position:
I gave this architecture some thought over the weekend, and I
like it for the following reasons (some not very technical):
* Authentication and encryption are handled entirely within the
libvirt / libvirtd library, allowing us to use whatever RPC
mechanism we like on top of a selection of transports of our
choosing (e.g. GnuTLS, ssh, unencrypted TCP sockets, ...)
* We don't need to modify xend at all, and additionally we won't
need to modify future flavour-of-the-month virtual machine monitors.
I have a particular issue with xend (written in Python) because
in my own tests I've seen my Python XMLRPC/SSL server
actually segfault. That doesn't give me confidence that this Python
solution adds anything more than apparent security.
* The architecture is very flexible: it allows virt-manager to
run as root or as non-root, according to customer wishes.
virt-manager can make direct HV calls, or everything can be
remoted, and it's easy to explain to the user the
performance vs. management trade-offs.
* It's relatively easy to implement. Note that libvirtd is just
a thin server layer linked to its own copy of libvirt.
* Another proposal was to make all libvirt calls remote
(http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-3.png)
but I don't think this is workable because (1) it requires that a
daemon always be running, which is another installation problem and
another chance for sysadmins to give up, and (2) the perception will
be that this is slow, whether or not that is actually true.
Now some concerns:
* libvirtd will likely need to be run as root, which means yet another
root daemon written in C listening on a public port. (On the other
hand, xend listening on a public port also isn't too desirable,
even with authentication.)
* If Xen upstream comes up with a secure remote access method in the
meantime, then clients could potentially have to choose between the
two, or run two services (libvirtd + Xen/remote).
* There are issues with versioning the remote API. Do we allow
different versions of libvirt/libvirtd to talk to each other?
Do we provide backwards compatibility when we move to a new API?
* Do we allow more than one client to talk to a libvirtd daemon
(no | multiple readers one writer | multiple readers & writers).
* What's the right level at which to expose a remote API? Should we
batch calls together?
RPC mechanism:
I've been investigating RPC mechanisms and there seem to be two
reasonable possibilities, SunRPC and XMLRPC. (Both would need to
run over some sort of secure connection, so there is a layer below
both). My analysis of those is here:
http://et.redhat.com/~rjones/secure_rpc/
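Whichever mechanism wins, the versioning concern above probably comes down
to tagging every message on the wire. A minimal sketch (all names and field
choices hypothetical) of what such a header might look like:

#include <stdint.h>

/* Hypothetical wire header prepended to every request and reply.
 * Bumping `version` on incompatible changes lets mismatched
 * libvirt/libvirtd pairs detect the problem cleanly at connect time
 * instead of misparsing each other's messages. */
struct remote_message_header {
    uint32_t program;    /* fixed constant identifying the libvirt protocol */
    uint32_t version;    /* protocol version, checked at connect time */
    uint32_t procedure;  /* which vir* call this message maps to */
    uint32_t type;       /* request, reply, or error */
    uint32_t serial;     /* matches replies back to their requests */
};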
Rich.
--
Red Hat UK Ltd.
64 Baker Street, London, W1U 7DF
Mobile: +44 7866 314 421 (will change soon)
[Libvir] what is virt-install doing differently?
by Aron Griffis
I'm looking at a problem where HVM xen/ia64 domains hang on boot using
virt-install, specifically at this point:
ACPI: Core revision 20060707
Boot processor id 0x0/0x0
Given VT-i capable hardware, this can be demonstrated with:
virt-install -n rhel5hvm1 -v -r 1024 --vcpus=2 -c \
/root/RHEL5-Server-20070112.3-ia64-DVD.iso -f \
/var/lib/xen/images/rhel5hvm1 -s 20 --vnc
The really strange thing is that the boot works fine, using the same
configuration, if virt-install is not involved. Actually,
I modified the configuration slightly to include the CD-ROM, but
otherwise it is identical:
# cat /etc/xen/rhel5rc1s7hvm2
# Automatically generated xen config file
name = "rhel5rc1s7hvm2"
builder = "hvm"
memory = "1024"
disk = [ 'phy:/dev/cciss/c0d1p1,hda,w', 'file:/root/RHEL5-Server-20070112.3-ia64-DVD.iso,hdc:cdrom,r' ]
vif = [ 'type=ioemu, mac=00:16:3e:50:4c:95, bridge=xenbr1', ]
uuid = "c228739b-9e6c-bb11-47d8-281ca2edf750"
device_model = "/usr/lib/xen/bin/qemu-dm"
kernel = "/usr/lib/xen/boot/hvmloader"
vnc=1
vncunused=1
apic=1
acpi=1
vcpus=2
serial = "pty" # enable serial console
on_reboot = 'restart'
on_crash = 'restart'
I don't understand what would be different about the virt-install boot
vs. booting straight from the generated configuration. Can somebody
shed some light on this?
Thanks,
Aron
[Libvir] virtinst 0.100.0 and virt-manager 0.3.0 releases
by Daniel P. Berrange
Following on from the recent libvirt 0.1.11 release, I'm pleased to announce
new releases of the virtinst & virt-manager applications.
The virtinst 0.100.0 release is a major refactoring to make use of the
libvirt inactive domain management APIs when provisioning guests. It also
tries to clean up many of the Xen-specific bits to facilitate future use
with the QEMU backend of libvirt. From a usability point of view, it also
now displays progress information when downloading the kernel & initrd and
creating the filesystem images.
The virt-manager 0.3.0 release brings a major functionality update, enabling
inactive domain management. This requires at least libvirt version 0.1.11
to provide implementations of inactive domain management for Xen 3.0.3
and Xen 3.0.4. With this new functionality the display will list all
guests which are in the 'shutoff' state. The guest can be started with
the 'Run' button in the virtual console window. The virtinst package
must also be updated to at least version 0.100.0 to ensure that during
provisioning of guests it uses the new inactive domain management APIs.
Finally, there have been a variety of minor UI fixes & enhancements,
such as progress bars during guest creation, reliability fixes to the
virtual console, and even greater coverage for translations.
As of today, virtinst is now formally a part of the virt-manager project,
so downloads for both virtinst & virt-manager are available from the same
download page:
http://virt-manager.et.redhat.com/downloads.html
All historical releases from virtinst 0.95.0 onwards are available there.
In addition, to encourage / facilitate broader community development &
feedback on the virtinst/virt-manager applications, we have decided to
designate et-mgmt-tools@redhat.com as the project's primary mailing list:
http://www.redhat.com/mailman/listinfo/et-mgmt-tools
We welcome feedback on what additional capabilities / features people would
like to see in future virtinst / virt-manager releases. As a taste of things
to come, we have active development plans for:
- Secure authenticated remote management
- Support for QEMU & KVM virtualization
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
[Libvir] Tricky provisioning problem with inactive domains
by Daniel P. Berrange
Adding support for inactive domains was supposed to make everyone's life
easier, but as luck would have it, it's actually made one thing very much
harder. In the virt-inst/virt-manager tools, provisioning works something
like this:
In paravirt case:
- Create a guest using an explicit kernel/initrd from the images/xen
directory on the install CD
- Write a config file to /etc/xen, set up to boot using pygrub
In fullvirt case:
- Create a guest booting directly off a CDROM
- Write a config file to /etc/xen, set up to boot off the hard disk
So in both these cases, the libvirt XML config for the very first boot of
the guest is different from the XML config for subsequent boots. With
the new inactive domain support in libvirt & xend, we can't write out
config files directly; instead there is the virDomainDefine() API, which
calls the appropriate APIs in XenD. And this is where the problem arises:
1. If we call virDomainDefine() to write the long term config, then
virDomainStart() will not be using the correct boot method for the
initial install.
2. If we call virDomainDefine() to write the initial install config,
then virDomainStart() will kick off the install correctly, but on
subsequent boots we'll end up booting the installer again instead
of the just-installed OS.
3. We could just use virDomainCreate() to start the installer, and try to
use virDomainDefine() to write the long term config - the latter
call will fail though because there will already be a running guest
with that name.
4. Conversely, if we use virDomainDefine() to write the config, and then
try to create a one-off guest with virDomainCreate(), the latter
will fail due to duplicate names.
So, thus far the only way out of the trap I can think of is:
1. Use virDomainCreate() to kick off the initial install
2. Poll virDomainLookupByXXX() to watch for this initial guest shutting
down
3. Write out the persistent config using virDomainDefine()
The big problem with this is that if the user were to exit virt-manager
sometime after the guest install starts, but before step 3, the config
for the guest would never be written, even though the guest had
successfully installed.
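For concreteness, a minimal sketch of that three-step workaround against
the 0.1.11-era C API (error handling elided; the guest name and the two
XML documents are assumed to exist):

#include <unistd.h>
#include <libvirt/libvirt.h>

static void provision(virConnectPtr conn,
                      const char *install_xml,  /* one-off installer config */
                      const char *runtime_xml,  /* long-term config */
                      const char *name)         /* guest name used in both */
{
    virDomainPtr dom;

    /* 1. Kick off the initial install as a transient domain. */
    dom = virDomainCreateLinux(conn, install_xml, 0);
    if (!dom)
        return;
    virDomainFree(dom);

    /* 2. Poll until the installer guest has shut down and gone away. */
    while ((dom = virDomainLookupByName(conn, name)) != NULL) {
        virDomainFree(dom);
        sleep(5);
    }

    /* 3. Persist the long-term config. The window between steps 2 and 3
     *    is exactly where an exiting virt-manager loses the config. */
    dom = virDomainDefineXML(conn, runtime_xml);
    if (dom)
        virDomainFree(dom);
}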
There are two further ideas I've had - both requiring additional APIs in
libvirt & probably XenD too.
- Make it possible to change the boot configuration of an existing guest.
This would let us do:
1. Use virDomainDefine() to define a config file suitable for installing
the guest, i.e. using an explicit kernel/initrd
2. Use virDomainStart() to kick off the installer
3. Use a new API to change the guest config to remove the explicit kernel
& initrd config, and add a bootloader entry for pygrub. Or, in the HVM
case, switch the boot order to use the hard disk instead of the CDROM &
detach the CDROM device.
- Make it possible to start an existing inactive guest using an alternative
one-off configuration. This would let us do:
1. Use virDomainDefine() to define a config file suitable for running
the guest during normal operation.
2. Use virDomainStartConfig(xml) to start the guest with a special
config suitable for installing the guest.
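Expressed as a C prototype in libvirt's naming style (purely hypothetical,
just to pin down the shape of the second idea):

#include <libvirt/libvirt.h>

/* Hypothetical: start a defined-but-inactive guest using a one-off XML
 * config that overrides the persistent config for this boot only;
 * subsequent boots would revert to the config given to virDomainDefine(). */
virDomainPtr virDomainStartConfig(virConnectPtr conn, const char *xmlDesc);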
Ultimately I think we do need the means to change arbitrary parts of a guest
configuration, so the first option would be my preferred approach. The
trouble is that implementing it would probably require using the new XenAPI,
or adding a number of new methods to the legacy SXPR API, which is not
really very desirable.
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
Re: [Libvir] Virtual networking
by Richard W.M. Jones
Hugh Brock wrote:
> Daniel P. Berrange wrote:
>
>> 3. The way I think you re suggesting - a libvirt server on every remote
>> host which calls into the regular libvirt internal driver model to
>> proxy remote calls. So even if the hypervisor in question provides a
>> remote network management API, we will always use the local API and
>> do *all* remote networking via the libvirt server
>>
>> http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-2.png
>>
> This strikes me as *much* easier to manage, and the most consistent
> thus far with the idea that libvirt should remain as
> hypervisor-neutral as possible.
I guess the management issue is going to be versioning the protocol. If
the protocol is just a direct mapping of vir* calls and structures, then
you'll quickly end up in a situation where even the smallest change
requires you to upgrade the world, or old versions have to be maintained
indefinitely.
That's not to say I don't like the idea.
Rich.
--
Red Hat UK Ltd.
64 Baker Street, London, W1U 7DF
Mobile: +44 7866 314 421 (will change soon)
Re: [Libvir] Virtual networking
by Hugh Brock
Richard W.M. Jones wrote:
> Hugh Brock wrote:
>> Daniel P. Berrange wrote:
>>> 3. The way I think you re suggesting - a libvirt server on every remote
>>> host which calls into the regular libvirt internal driver model to
>>> proxy remote calls. So even if the hypervisor in question provides a
>>> remote network management API, we will always use the local API and
>>> do *all* remote networking via the libvirt server
>>>
>>> http://people.redhat.com/berrange/libvirt/libvirt-arch-remote-2.png
>>>
>> This strikes me as *much* easier to manage, and the most consistent
>> thus far with the idea that libvirt should remain as
>> hypervisor-neutral as possible.
>
> I guess the management issue is going to be versioning the protocol. If
> the protocol is just a direct mapping of vir* calls and structures then
> you'll quickly end up in a situation where even the smallest change
> requires you to upgrade the world or old versions have to be maintained
> indefinitely.
>
> That's not saying I don't like the idea.
>
True enough... we're guaranteeing that we're going to have backwards
compatibility problems. On the other hand, the libvirt API is supposed
to be held pretty stable. DV, any thoughts?
--H
Re: [Libvir] Certificate management APIs ?
by Richard W.M. Jones
[Apologies also that this is not threaded with the original post]
> $HOME/.libvirt/tls/
> |
> +- ca
> | |
> | +- cert.pem
> | +- ca-crl.pem
Note that there are standard locations for CA certs. On my Debian box
the standard locations appear to be /etc/ca-certificates.conf and
/usr/share/ca-certificates. Not sure yet about Fedora/RHEL.
I suppose you hope that people will be using formal CAs rather than
their own, or at least will have a CA certificate issued by a formal CA
from which they can issue their own client & server certs.
Rich.
--
Red Hat UK Ltd.
64 Baker Street, London, W1U 7DF
Mobile: +44 7866 314 421 (will change soon)