[Libvir] [PATCH] Remote 0/8: Plan

Below is the plan for delivering the remote patch for review in stages.
More details in each email. I only expect to get through the first two
emails today.

Rich.

1 Additional error handling
---------------------------
M include/libvirt/virterror.h
M src/virterror.c

2 Client-server protocol
------------------------
A qemud/remote_protocol.x
A qemud/rpcgen_fix.pl
A qemud/remote_protocol.c
A qemud/remote_protocol.h

3 Client-side
-------------
A src/remote_internal.c
A src/remote_internal.h
M src/driver.h
M src/libvirt.c

4 Server-side call dispatch
---------------------------
A qemud/remote.c
A qemud/remote_generate_stubs.pl
A qemud/remote_dispatch_localvars.h
A qemud/remote_dispatch_proc_switch.h
A qemud/remote_dispatch_prototypes.h

5 Client-side QEMUD modifications
---------------------------------
M src/qemu_internal.c

6 Server-side QEMUD modifications
---------------------------------
A qemud/protocol.x
A qemud/protocol.c
M qemud/protocol.h
M qemud/Makefile.am
M qemud/conf.c
M qemud/dispatch.c
M qemud/dispatch.h
M qemud/internal.h
M qemud/qemud.c
M qemud/uuid.c

7 Documentation updates
-----------------------
M docs/libvir.html
M docs/remote.html

8 Build system
--------------
M .cvsignore
M configure.in
A libvirtd.conf
M src/Makefile.am
A RENAMES

-- 
Emerging Technologies, Red Hat  http://et.redhat.com/~rjones/
64 Baker Street, London, W1U 7DF     Mobile: +44 7866 314 421
Registered Address: Red Hat UK Ltd, Amberley Place, 107-111 Peascod
Street, Windsor, Berkshire, SL4 1TE, United Kingdom.  Registered in
England and Wales under Company Registration No. 3798903  Directors:
Michael Cunningham (USA), Charlie Peters (USA) and David Owens (Ireland)

Richard W.M. Jones wrote:
1 Additional error handling ---------------------------
M include/libvirt/virterror.h M src/virterror.c
Attached is the patch to add the additional types of errors to virterror. VIR_FROM_REMOTE indicates that the error originated in the remote driver. There are two further subclasses for general RPC-related errors and GnuTLS-related errors.

Note that if a regular error (eg. VIR_FROM_XEN) is generated on the server side it is passed through the connection to the client transparently (there is no "wrapping" of errors or anything like that).

Rich.

Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
1 Additional error handling
Ooops - subject line of that email was wrong.

Rich.

On Wed, May 02, 2007 at 07:08:28PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
1 Additional error handling ---------------------------
M include/libvirt/virterror.h M src/virterror.c
Attached is the patch to add the additional types of errors to virterror. VIR_FROM_REMOTE indicates that the error originated in the remote driver. There are two further subclasses for general RPC-related errors and GnuTLS-related errors.
Note that if a regular error (eg. VIR_FROM_XEN) is generated on the server side it is passed through the connection to the client transparently (there is no "wrapping" of errors or anything like that).
Looks fine to me. The only comment I have is that if you have an extra string, you would expect it to be the full error message rather than just extra information, but that's okay. Please apply; I don't see anything controversial there :-)

Daniel

-- 
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard      | virtualization library  http://libvirt.org/
veillard@redhat.com  | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine  http://rpmfind.net/

On Wed, May 02, 2007 at 07:08:28PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
1 Additional error handling ---------------------------
M include/libvirt/virterror.h M src/virterror.c
Attached is the patch to add the additional types of errors to virterror. VIR_FROM_REMOTE indicates that the error originated in the remote driver. There are two further subclasses for general RPC-related errors and GnuTLS-related errors.
Ack, trivial patch makes sense.
Note that if a regular error (eg. VIR_FROM_XEN) is generated on the server side it is passed through the connection to the client transparently (there is no "wrapping" of errors or anything like that).
That sounds very good - all existing error handling should work correctly in that respect.

Regards,
Dan.

-- 
|=- Red Hat, Engineering, Emerging Technologies, Boston.  +1 978 392 2496 -=|
|=-           Perl modules: http://search.cpan.org/~danberr/              -=|
|=-               Projects: http://freshmeat.net/~danielpb/               -=|
|=-  GnuPG: 7D3B9505   F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=|

Richard W.M. Jones wrote:
2 Client-server protocol ------------------------
A qemud/remote_protocol.x A qemud/rpcgen_fix.pl A qemud/remote_protocol.c A qemud/remote_protocol.h
These are the additional files required to describe the XDR-based protocol for client-server communications, with the homebrew RPC mechanism on top.

The first file is the protocol itself. The second file is a Perl script to run after rpcgen which allows the code to compile with -Wall -Werror -fstrict-aliasing, and fixes a 64-bit bug (that fix really should go upstream, BTW).

The last two files are the actual rpcgen-generated, rpcgen_fix-fixed code. I put these two files into EXTRA_DIST because I feel that people shouldn't be required to have rpcgen & Perl just to compile libvirt. (These files are arch-independent, so there is no need to recompile them unless remote_protocol.x itself changes.)

Rich.

/* -*- c -*-
 * remote_protocol.x: private protocol for communicating between
 *   remote_internal driver and libvirtd.  This protocol is
 *   internal and may change at any time.
 *
 * Copyright (C) 2006-2007 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 * Author: Richard Jones <rjones@redhat.com>
 */

/* Notes:
 *
 * (1) The protocol is internal and may change at any time, without
 * notice.  Do not use it.  Instead link to libvirt and use the remote
 * driver.
 *
 * (2) See bottom of this file for a description of the home-brew RPC.
 *
 * (3) Authentication/encryption is done outside this protocol.
 *
 * (4) For namespace reasons, all exported names begin 'remote_' or
 * 'REMOTE_'.  This makes names quite long.
 */

%#include "libvirt/libvirt.h"

/*----- Data types. -----*/

/* Maximum total message size (serialised). */
const REMOTE_MESSAGE_MAX = 262144;

/* Length of long, but not unbounded, strings.
 * This is an arbitrary limit designed to stop the decoder from trying
 * to allocate unbounded amounts of memory when fed with a bad message.
 */
const REMOTE_STRING_MAX = 65536;

/* A long string, which may NOT be NULL. */
typedef string remote_nonnull_string<REMOTE_STRING_MAX>;

/* A long string, which may be NULL. */
typedef remote_nonnull_string *remote_string;

/* This just places an upper limit on the length of lists of
 * domain IDs which may be sent via the protocol.
 */
const REMOTE_DOMAIN_ID_LIST_MAX = 16384;

/* Upper limit on lists of domain names. */
const REMOTE_DOMAIN_NAME_LIST_MAX = 1024;

/* Upper limit on cpumap (bytes) passed to virDomainPinVcpu. */
const REMOTE_CPUMAP_MAX = 256;

/* Upper limit on number of info fields returned by virDomainGetVcpus. */
const REMOTE_VCPUINFO_MAX = 2048;

/* Upper limit on cpumaps (bytes) passed to virDomainGetVcpus. */
const REMOTE_CPUMAPS_MAX = 16384;

/* Upper limit on lists of network names. */
const REMOTE_NETWORK_NAME_LIST_MAX = 256;

/* Domains and networks are represented on the wire by (name, UUID). */
typedef opaque remote_uuid[VIR_UUID_BUFLEN];

struct remote_name {
    remote_nonnull_string name;
    remote_uuid uuid;
};

/* A domain or network which may not be NULL. */
typedef struct remote_name remote_nonnull_domain;
typedef struct remote_name remote_nonnull_network;

/* A domain or network which may be NULL. */
typedef remote_nonnull_domain *remote_domain;
typedef remote_nonnull_network *remote_network;

/* Error message.  See <virterror.h> for explanation of fields. */
/* NB. Fields "code", "domain" and "level" are really enums.  The
 * numeric value should remain compatible between libvirt and
 * libvirtd.  This means, no changing or reordering the enums as
 * defined in <virterror.h> (but we don't do that anyway, for separate
 * ABI reasons).
 */
struct remote_error {
    int code;
    int domain;
    remote_string message;
    int level;
    remote_domain dom;
    remote_string str1;
    remote_string str2;
    remote_string str3;
    int int1;
    int int2;
    remote_network net;
};

/* Wire encoding of virVcpuInfo. */
struct remote_vcpu_info {
    unsigned int number;
    int state;
    unsigned hyper cpu_time;
    int cpu;
};

/*----- Calls. -----*/

/* For each call we may have a 'remote_CALL_args' and 'remote_CALL_ret'
 * type.  These are omitted when they are void.  The virConnectPtr
 * is not passed at all (it is inferred on the remote server from the
 * connection).  Errors are returned implicitly in the RPC protocol.
 *
 * Please follow the naming convention carefully - this file is
 * parsed by 'remote_generate_stubs.pl'.
 */

struct remote_open_args {
    /* NB. "name" might be NULL although in practice you can't
     * yet do that using the remote_internal driver.
     */
    remote_string name;
    int flags;
};

struct remote_get_type_ret { remote_nonnull_string type; };

struct remote_get_version_ret { hyper hv_ver; };

struct remote_get_max_vcpus_args {
    /* The only backend which supports this call is Xen HV, and
     * there the type is ignored so it could be NULL.
     */
    remote_string type;
};

struct remote_get_max_vcpus_ret { int max_vcpus; };

struct remote_node_get_info_ret {
    char model[32];
    hyper memory;
    int cpus;
    int mhz;
    int nodes;
    int sockets;
    int cores;
    int threads;
};

struct remote_get_capabilities_ret { remote_nonnull_string capabilities; };

struct remote_list_domains_args { int maxids; };

struct remote_list_domains_ret { int ids<REMOTE_DOMAIN_ID_LIST_MAX>; };

struct remote_num_of_domains_ret { int num; };

struct remote_domain_create_linux_args {
    remote_nonnull_string xml_desc;
    int flags;
};

struct remote_domain_create_linux_ret { remote_nonnull_domain dom; };

struct remote_domain_lookup_by_id_args { int id; };

struct remote_domain_lookup_by_id_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_uuid_args { remote_uuid uuid; };

struct remote_domain_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_name_args { remote_nonnull_string name; };

struct remote_domain_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_suspend_args { remote_nonnull_domain dom; };

struct remote_domain_resume_args { remote_nonnull_domain dom; };

struct remote_domain_shutdown_args { remote_nonnull_domain dom; };

struct remote_domain_reboot_args {
    remote_nonnull_domain dom;
    int flags;
};

struct remote_domain_destroy_args { remote_nonnull_domain dom; };

struct remote_domain_get_os_type_args { remote_nonnull_domain dom; };

struct remote_domain_get_os_type_ret { remote_nonnull_string type; };

struct remote_domain_get_max_memory_args { remote_nonnull_domain dom; };

struct remote_domain_get_max_memory_ret { unsigned hyper memory; };

struct remote_domain_set_max_memory_args {
    remote_nonnull_domain dom;
    unsigned hyper memory;
};

struct remote_domain_set_memory_args {
    remote_nonnull_domain dom;
    unsigned hyper memory;
};

struct remote_domain_get_info_args { remote_nonnull_domain dom; };

struct remote_domain_get_info_ret {
    unsigned char state;
    unsigned hyper max_mem;
    unsigned hyper memory;
    unsigned short nr_virt_cpu;
    /* XXX cpu_time is declared as unsigned long long */
    unsigned hyper cpu_time;
};

struct remote_domain_save_args {
    remote_nonnull_domain dom;
    remote_nonnull_string to;
};

struct remote_domain_restore_args { remote_nonnull_string from; };

struct remote_domain_core_dump_args {
    remote_nonnull_domain dom;
    remote_nonnull_string to;
    int flags;
};

struct remote_domain_dump_xml_args {
    remote_nonnull_domain dom;
    int flags;
};

struct remote_domain_dump_xml_ret { remote_nonnull_string xml; };

struct remote_domain_list_defined_domains_args { int maxnames; };

struct remote_domain_list_defined_domains_ret {
    remote_nonnull_string names<REMOTE_DOMAIN_NAME_LIST_MAX>;
};

struct remote_domain_num_of_defined_domains_ret { int num; };

struct remote_domain_create_args { remote_nonnull_domain dom; };

struct remote_domain_define_xml_args { remote_nonnull_string xml; };

struct remote_domain_define_xml_ret { remote_nonnull_domain dom; };

struct remote_domain_undefine_args { remote_nonnull_domain dom; };

struct remote_domain_set_vcpus_args { int nvcpus; };

struct remote_domain_pin_vcpu_args {
    int vcpu;
    opaque cpumap<REMOTE_CPUMAP_MAX>;
};

struct remote_domain_get_vcpus_args {
    remote_nonnull_domain dom;
    int maxinfo;
    int maplen;
};

struct remote_domain_get_vcpus_ret {
    remote_vcpu_info info<REMOTE_VCPUINFO_MAX>;
    opaque cpumaps<REMOTE_CPUMAPS_MAX>;
};

struct remote_domain_get_max_vcpus_args { remote_nonnull_domain dom; };

struct remote_domain_get_max_vcpus_ret { int num; };

struct remote_domain_attach_device_args {
    remote_nonnull_domain dom;
    remote_nonnull_string xml;
};

struct remote_domain_detach_device_args {
    remote_nonnull_domain dom;
    remote_nonnull_string xml;
};

struct remote_domain_get_autostart_args { remote_nonnull_domain dom; };

struct remote_domain_get_autostart_ret { int autostart; };

struct remote_domain_set_autostart_args {
    remote_nonnull_domain dom;
    int autostart;
};

/* Network calls: */

struct remote_num_of_networks_ret { int num; };

struct remote_list_networks_args { int maxnames; };

struct remote_list_networks_ret {
    remote_nonnull_string names<REMOTE_NETWORK_NAME_LIST_MAX>;
};

struct remote_num_of_defined_networks_ret { int num; };

struct remote_list_defined_networks_args { int maxnames; };

struct remote_list_defined_networks_ret {
    remote_nonnull_string names<REMOTE_NETWORK_NAME_LIST_MAX>;
};

struct remote_network_lookup_by_uuid_args { remote_uuid uuid; };

struct remote_network_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_network net;
};

struct remote_network_lookup_by_name_args { remote_nonnull_string name; };

struct remote_network_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_network net;
};

struct remote_network_create_xml_args { remote_nonnull_string xml; };

struct remote_network_create_xml_ret { remote_nonnull_network net; };

struct remote_network_define_xml_args { remote_nonnull_string xml; };

struct remote_network_define_xml_ret { remote_nonnull_network net; };

struct remote_network_undefine_args { remote_nonnull_network net; };

struct remote_network_create_args { remote_nonnull_network net; };

struct remote_network_destroy_args { remote_nonnull_network net; };

struct remote_network_dump_xml_args {
    remote_nonnull_network net;
    int flags;
};

struct remote_network_dump_xml_ret { remote_nonnull_string xml; };

struct remote_network_get_bridge_name_args { remote_nonnull_network net; };

struct remote_network_get_bridge_name_ret { remote_nonnull_string name; };

struct remote_network_get_autostart_args { remote_nonnull_network net; };

struct remote_network_get_autostart_ret { int autostart; };

struct remote_network_set_autostart_args {
    remote_nonnull_network net;
    int autostart;
};

/*----- Protocol. -----*/

/* Define the program number, protocol version and procedure numbers here. */
const REMOTE_PROGRAM = 0x20008086;
const REMOTE_PROTOCOL_VERSION = 1;

enum remote_procedure {
    REMOTE_PROC_OPEN = 1,
    REMOTE_PROC_CLOSE = 2,
    REMOTE_PROC_GET_TYPE = 3,
    REMOTE_PROC_GET_VERSION = 4,
    REMOTE_PROC_GET_MAX_VCPUS = 5,
    REMOTE_PROC_NODE_GET_INFO = 6,
    REMOTE_PROC_GET_CAPABILITIES = 7,
    REMOTE_PROC_DOMAIN_ATTACH_DEVICE = 8,
    REMOTE_PROC_DOMAIN_CREATE = 9,
    REMOTE_PROC_DOMAIN_CREATE_LINUX = 10,
    REMOTE_PROC_DOMAIN_DEFINE_XML = 11,
    REMOTE_PROC_DOMAIN_DESTROY = 12,
    REMOTE_PROC_DOMAIN_DETACH_DEVICE = 13,
    REMOTE_PROC_DOMAIN_DUMP_XML = 14,
    REMOTE_PROC_DOMAIN_GET_AUTOSTART = 15,
    REMOTE_PROC_DOMAIN_GET_INFO = 16,
    REMOTE_PROC_DOMAIN_GET_MAX_MEMORY = 17,
    REMOTE_PROC_DOMAIN_GET_MAX_VCPUS = 18,
    REMOTE_PROC_DOMAIN_GET_OS_TYPE = 19,
    REMOTE_PROC_DOMAIN_GET_VCPUS = 20,
    REMOTE_PROC_DOMAIN_LIST_DEFINED_DOMAINS = 21,
    REMOTE_PROC_DOMAIN_LOOKUP_BY_ID = 22,
    REMOTE_PROC_DOMAIN_LOOKUP_BY_NAME = 23,
    REMOTE_PROC_DOMAIN_LOOKUP_BY_UUID = 24,
    REMOTE_PROC_DOMAIN_NUM_OF_DEFINED_DOMAINS = 25,
    REMOTE_PROC_DOMAIN_PIN_VCPU = 26,
    REMOTE_PROC_DOMAIN_REBOOT = 27,
    REMOTE_PROC_DOMAIN_RESUME = 28,
    REMOTE_PROC_DOMAIN_SET_AUTOSTART = 29,
    REMOTE_PROC_DOMAIN_SET_MAX_MEMORY = 30,
    REMOTE_PROC_DOMAIN_SET_MEMORY = 31,
    REMOTE_PROC_DOMAIN_SET_VCPUS = 32,
    REMOTE_PROC_DOMAIN_SHUTDOWN = 33,
    REMOTE_PROC_DOMAIN_SUSPEND = 34,
    REMOTE_PROC_DOMAIN_UNDEFINE = 35,
    REMOTE_PROC_LIST_DEFINED_NETWORKS = 36,
    REMOTE_PROC_LIST_DOMAINS = 37,
    REMOTE_PROC_LIST_NETWORKS = 38,
    REMOTE_PROC_NETWORK_CREATE = 39,
    REMOTE_PROC_NETWORK_CREATE_XML = 40,
    REMOTE_PROC_NETWORK_DEFINE_XML = 41,
    REMOTE_PROC_NETWORK_DESTROY = 42,
    REMOTE_PROC_NETWORK_DUMP_XML = 43,
    REMOTE_PROC_NETWORK_GET_AUTOSTART = 44,
    REMOTE_PROC_NETWORK_GET_BRIDGE_NAME = 45,
    REMOTE_PROC_NETWORK_LOOKUP_BY_NAME = 46,
    REMOTE_PROC_NETWORK_LOOKUP_BY_UUID = 47,
    REMOTE_PROC_NETWORK_SET_AUTOSTART = 48,
    REMOTE_PROC_NETWORK_UNDEFINE = 49,
    REMOTE_PROC_NUM_OF_DEFINED_NETWORKS = 50,
    REMOTE_PROC_NUM_OF_DOMAINS = 51,
    REMOTE_PROC_NUM_OF_NETWORKS = 52,
    REMOTE_PROC_DOMAIN_CORE_DUMP = 53,
    REMOTE_PROC_DOMAIN_RESTORE = 54,
    REMOTE_PROC_DOMAIN_SAVE = 55
};

/* Custom RPC structure. */
/* Each message consists of:
 *    int length               Number of bytes in message _including_ length.
 *    remote_message_header    Header.
 * then either: args           Arguments (for REMOTE_CALL).
 *          or: ret            Return (for REMOTE_REPLY, status = REMOTE_OK)
 *          or: remote_error   Error (for REMOTE_REPLY, status = REMOTE_ERROR)
 *
 * The first two words (length, program number) are meant to be compatible
 * with the qemud protocol (qemud/protocol.x), although the rest of the
 * messages are completely different.
 */

enum remote_message_direction {
    REMOTE_CALL = 0,            /* client -> server */
    REMOTE_REPLY = 1,           /* server -> client */
    REMOTE_MESSAGE = 2          /* server -> client, asynchronous [NYI] */
};

enum remote_message_status {
    /* Status is always REMOTE_OK for calls.
     * For replies, indicates no error.
     */
    REMOTE_OK = 0,

    /* For replies, indicates that an error happened, and a struct
     * remote_error follows.
     */
    REMOTE_ERROR = 1
};

struct remote_message_header {
    unsigned prog;              /* REMOTE_PROGRAM */
    unsigned vers;              /* REMOTE_PROTOCOL_VERSION */
    remote_procedure proc;      /* REMOTE_PROC_x */
    remote_message_direction direction;
    unsigned serial;            /* Serial number of message. */
    remote_message_status status;
};

/*
 * vim: set tabstop=4:
 * vim: set shiftwidth=4:
 * vim: set expandtab:
 */
/*
 * Local variables:
 *  indent-tabs-mode: nil
 *  c-indent-level: 4
 *  c-basic-offset: 4
 *  tab-width: 4
 * End:
 */

On Wed, May 02, 2007 at 07:12:36PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
2 Client-server protocol ------------------------ A qemud/rpcgen_fix.pl
These are the additional files required to describe the XDR-based protocol for client-server communications, with the homebrew RPC mechanism on top. The first file is the protocol itself.
The second file is a Perl script to run after rpcgen which allows the code to compile with -Wall -Werror -fstrict-aliasing, and fixes a 64 bit bug (that fix really should go upstream BTW).
Hum, we don't require Perl to build. Can't that be translated into Python, for coherency w.r.t. the existing requirements? I would do that if I were able to parse and understand Perl, but I really can't!
The last two files are the actual rpcgen-generated rpcgen_fix-fixed code. I put these two files into EXTRA_DIST because I feel that people shouldn't be required to have rpcgen & Perl just to compile libvirt. (These files are arch-independent so there is no need to recompile them unless remote_protocol.x itself changes).
Hum, so far we have tried not to commit generated files...

  paphio:~ -> rpm -qf /usr/bin/rpcgen
  glibc-common-2.5-12
  paphio:~ ->

but I could see how people on other OSes could have trouble with this. W.r.t. remote_protocol.x, is it the version you posted earlier?

Daniel

Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
2 Client-server protocol ------------------------
A qemud/remote_protocol.x A qemud/rpcgen_fix.pl A qemud/remote_protocol.c A qemud/remote_protocol.h
Turns out that the way virDomainPtr & virNetworkPtr are handled in those files is pretty broken. I'm reworking that now - so please ignore those.

Rich.

Richard W.M. Jones wrote:
2 Client-server protocol ------------------------
A qemud/remote_protocol.x A qemud/rpcgen_fix.pl A qemud/remote_protocol.c A qemud/remote_protocol.h
These are the additional files required to describe the XDR-based protocol for client-server communications, with the homebrew RPC mechanism on top.

The first file is the protocol itself. The second file is a Perl script to run after rpcgen which allows the code to compile with -Wall -Werror -fstrict-aliasing, and fixes a 64-bit bug (that fix really should go upstream, BTW). The strict-aliasing issue isn't fully fixed yet - I expect that eventually we should compile remote_protocol.c with -fno-strict-aliasing, for safety. Daniel Veillard also asked that this be rewritten in Python (pah!) which I might do at the same time.

The last two files are the actual rpcgen-generated, rpcgen_fix-fixed code. I put these two files into EXTRA_DIST because I feel that people shouldn't be required to have rpcgen & Perl just to compile libvirt. (These files are arch-independent, so there is no need to recompile them unless remote_protocol.x itself changes.)

Rich.

/* -*- c -*-
 * remote_protocol.x: private protocol for communicating between
 *   remote_internal driver and libvirtd.  This protocol is
 *   internal and may change at any time.
 *
 * Copyright (C) 2006-2007 Red Hat, Inc.
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 * Author: Richard Jones <rjones@redhat.com>
 */

/* Notes:
 *
 * (1) The protocol is internal and may change at any time, without
 * notice.  Do not use it.  Instead link to libvirt and use the remote
 * driver.
 *
 * (2) See bottom of this file for a description of the home-brew RPC.
 *
 * (3) Authentication/encryption is done outside this protocol.
 *
 * (4) For namespace reasons, all exported names begin 'remote_' or
 * 'REMOTE_'.  This makes names quite long.
 */

%#include "libvirt/libvirt.h"

/*----- Data types. -----*/

/* Maximum total message size (serialised). */
const REMOTE_MESSAGE_MAX = 262144;

/* Length of long, but not unbounded, strings.
 * This is an arbitrary limit designed to stop the decoder from trying
 * to allocate unbounded amounts of memory when fed with a bad message.
 */
const REMOTE_STRING_MAX = 65536;

/* A long string, which may NOT be NULL. */
typedef string remote_nonnull_string<REMOTE_STRING_MAX>;

/* A long string, which may be NULL. */
typedef remote_nonnull_string *remote_string;

/* This just places an upper limit on the length of lists of
 * domain IDs which may be sent via the protocol.
 */
const REMOTE_DOMAIN_ID_LIST_MAX = 16384;

/* Upper limit on lists of domain names. */
const REMOTE_DOMAIN_NAME_LIST_MAX = 1024;

/* Upper limit on cpumap (bytes) passed to virDomainPinVcpu. */
const REMOTE_CPUMAP_MAX = 256;

/* Upper limit on number of info fields returned by virDomainGetVcpus. */
const REMOTE_VCPUINFO_MAX = 2048;

/* Upper limit on cpumaps (bytes) passed to virDomainGetVcpus. */
const REMOTE_CPUMAPS_MAX = 16384;

/* Upper limit on lists of network names. */
const REMOTE_NETWORK_NAME_LIST_MAX = 256;

/* UUID.  VIR_UUID_BUFLEN definition comes from libvirt.h */
typedef opaque remote_uuid[VIR_UUID_BUFLEN];

/* A domain which may not be NULL. */
struct remote_nonnull_domain {
    remote_nonnull_string name;
    remote_uuid uuid;
    int id;
};

/* A network which may not be NULL. */
struct remote_nonnull_network {
    remote_nonnull_string name;
    remote_uuid uuid;
};

/* A domain or network which may be NULL. */
typedef remote_nonnull_domain *remote_domain;
typedef remote_nonnull_network *remote_network;

/* Error message.  See <virterror.h> for explanation of fields. */
/* NB. Fields "code", "domain" and "level" are really enums.  The
 * numeric value should remain compatible between libvirt and
 * libvirtd.  This means, no changing or reordering the enums as
 * defined in <virterror.h> (but we don't do that anyway, for separate
 * ABI reasons).
 */
struct remote_error {
    int code;
    int domain;
    remote_string message;
    int level;
    remote_domain dom;
    remote_string str1;
    remote_string str2;
    remote_string str3;
    int int1;
    int int2;
    remote_network net;
};

/* Wire encoding of virVcpuInfo. */
struct remote_vcpu_info {
    unsigned int number;
    int state;
    unsigned hyper cpu_time;
    int cpu;
};

/*----- Calls. -----*/

/* For each call we may have a 'remote_CALL_args' and 'remote_CALL_ret'
 * type.  These are omitted when they are void.  The virConnectPtr
 * is not passed at all (it is inferred on the remote server from the
 * connection).  Errors are returned implicitly in the RPC protocol.
 *
 * Please follow the naming convention carefully - this file is
 * parsed by 'remote_generate_stubs.pl'.
 */

struct remote_open_args {
    /* NB. "name" might be NULL although in practice you can't
     * yet do that using the remote_internal driver.
     */
    remote_string name;
    int flags;
};

struct remote_get_type_ret { remote_nonnull_string type; };

struct remote_get_version_ret { hyper hv_ver; };

struct remote_get_max_vcpus_args {
    /* The only backend which supports this call is Xen HV, and
     * there the type is ignored so it could be NULL.
     */
    remote_string type;
};

struct remote_get_max_vcpus_ret { int max_vcpus; };

struct remote_node_get_info_ret {
    char model[32];
    hyper memory;
    int cpus;
    int mhz;
    int nodes;
    int sockets;
    int cores;
    int threads;
};

struct remote_get_capabilities_ret { remote_nonnull_string capabilities; };

struct remote_list_domains_args { int maxids; };

struct remote_list_domains_ret { int ids<REMOTE_DOMAIN_ID_LIST_MAX>; };

struct remote_num_of_domains_ret { int num; };

struct remote_domain_create_linux_args {
    remote_nonnull_string xml_desc;
    int flags;
};

struct remote_domain_create_linux_ret { remote_nonnull_domain dom; };

struct remote_domain_lookup_by_id_args { int id; };

struct remote_domain_lookup_by_id_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_uuid_args { remote_uuid uuid; };

struct remote_domain_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_name_args { remote_nonnull_string name; };

struct remote_domain_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_suspend_args { remote_nonnull_domain dom; };

struct remote_domain_resume_args { remote_nonnull_domain dom; };

struct remote_domain_shutdown_args { remote_nonnull_domain dom; };

struct remote_domain_reboot_args {
    remote_nonnull_domain dom;
    int flags;
};

struct remote_domain_destroy_args { remote_nonnull_domain dom; };

struct remote_domain_get_os_type_args { remote_nonnull_domain dom; };

struct remote_domain_get_os_type_ret { remote_nonnull_string type; };

struct remote_domain_get_max_memory_args { remote_nonnull_domain dom; };

struct remote_domain_get_max_memory_ret { unsigned hyper memory; };

struct remote_domain_set_max_memory_args {
    remote_nonnull_domain dom;
    unsigned hyper memory;
};

struct remote_domain_set_memory_args {
    remote_nonnull_domain dom;
    unsigned hyper memory;
};

struct remote_domain_get_info_args { remote_nonnull_domain dom; };

struct remote_domain_get_info_ret {
    unsigned char state;
    unsigned hyper max_mem;
    unsigned hyper memory;
    unsigned short nr_virt_cpu;
    /* XXX cpu_time is declared as unsigned long long */
    unsigned hyper cpu_time;
};

struct remote_domain_save_args {
    remote_nonnull_domain dom;
    remote_nonnull_string to;
};

struct remote_domain_restore_args { remote_nonnull_string from; };

struct remote_domain_core_dump_args {
    remote_nonnull_domain dom;
    remote_nonnull_string to;
    int flags;
};

struct remote_domain_dump_xml_args {
    remote_nonnull_domain dom;
    int flags;
};

struct remote_domain_dump_xml_ret { remote_nonnull_string xml; };

struct remote_list_defined_domains_args { int maxnames; };

struct remote_list_defined_domains_ret {
    remote_nonnull_string names<REMOTE_DOMAIN_NAME_LIST_MAX>;
};

struct remote_num_of_defined_domains_ret { int num; };

struct remote_domain_create_args { remote_nonnull_domain dom; };

struct remote_domain_define_xml_args { remote_nonnull_string xml; };

struct remote_domain_define_xml_ret { remote_nonnull_domain dom; };

struct remote_domain_undefine_args { remote_nonnull_domain dom; };

struct remote_domain_set_vcpus_args {
    remote_nonnull_domain dom;
    int nvcpus;
};

struct remote_domain_pin_vcpu_args {
    remote_nonnull_domain dom;
    int vcpu;
    opaque cpumap<REMOTE_CPUMAP_MAX>;
};

struct remote_domain_get_vcpus_args {
    remote_nonnull_domain dom;
    int maxinfo;
    int maplen;
};

struct remote_domain_get_vcpus_ret {
    remote_vcpu_info info<REMOTE_VCPUINFO_MAX>;
    opaque cpumaps<REMOTE_CPUMAPS_MAX>;
};

struct remote_domain_get_max_vcpus_args { remote_nonnull_domain dom; };

struct remote_domain_get_max_vcpus_ret { int num; };

struct remote_domain_attach_device_args {
    remote_nonnull_domain dom;
    remote_nonnull_string xml;
};

struct remote_domain_detach_device_args {
    remote_nonnull_domain dom;
    remote_nonnull_string xml;
};

struct remote_domain_get_autostart_args { remote_nonnull_domain dom; };

struct remote_domain_get_autostart_ret { int autostart; };

struct remote_domain_set_autostart_args {
    remote_nonnull_domain dom;
    int autostart;
};

/* Network calls: */

struct remote_num_of_networks_ret { int num; };

struct remote_list_networks_args { int maxnames; };

struct remote_list_networks_ret {
    remote_nonnull_string names<REMOTE_NETWORK_NAME_LIST_MAX>;
};

struct remote_num_of_defined_networks_ret { int num; };

struct remote_list_defined_networks_args { int maxnames; };

struct remote_list_defined_networks_ret {
    remote_nonnull_string names<REMOTE_NETWORK_NAME_LIST_MAX>;
};

struct remote_network_lookup_by_uuid_args { remote_uuid uuid; };

struct remote_network_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_network net;
};

struct remote_network_lookup_by_name_args { remote_nonnull_string name; };

struct remote_network_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_network net;
};

struct remote_network_create_xml_args { remote_nonnull_string xml; };

struct remote_network_create_xml_ret { remote_nonnull_network net; };

struct remote_network_define_xml_args { remote_nonnull_string xml; };

struct remote_network_define_xml_ret { remote_nonnull_network net; };

struct remote_network_undefine_args { remote_nonnull_network net; };

struct remote_network_create_args { remote_nonnull_network net; };

struct remote_network_destroy_args { remote_nonnull_network net; };

struct remote_network_dump_xml_args {
    remote_nonnull_network net;
    int flags;
};

struct remote_network_dump_xml_ret { remote_nonnull_string xml; };

struct remote_network_get_bridge_name_args { remote_nonnull_network net; };

struct remote_network_get_bridge_name_ret { remote_nonnull_string name; };

struct remote_network_get_autostart_args { remote_nonnull_network net; };

struct remote_network_get_autostart_ret { int autostart; };

struct remote_network_set_autostart_args {
    remote_nonnull_network net;
    int autostart;
};

/*----- Protocol. -----*/

/* Define the program number, protocol version and procedure numbers here.
*/ const REMOTE_PROGRAM = 0x20008086; const REMOTE_PROTOCOL_VERSION = 1; enum remote_procedure { REMOTE_PROC_OPEN = 1, REMOTE_PROC_CLOSE = 2, REMOTE_PROC_GET_TYPE = 3, REMOTE_PROC_GET_VERSION = 4, REMOTE_PROC_GET_MAX_VCPUS = 5, REMOTE_PROC_NODE_GET_INFO = 6, REMOTE_PROC_GET_CAPABILITIES = 7, REMOTE_PROC_DOMAIN_ATTACH_DEVICE = 8, REMOTE_PROC_DOMAIN_CREATE = 9, REMOTE_PROC_DOMAIN_CREATE_LINUX = 10, REMOTE_PROC_DOMAIN_DEFINE_XML = 11, REMOTE_PROC_DOMAIN_DESTROY = 12, REMOTE_PROC_DOMAIN_DETACH_DEVICE = 13, REMOTE_PROC_DOMAIN_DUMP_XML = 14, REMOTE_PROC_DOMAIN_GET_AUTOSTART = 15, REMOTE_PROC_DOMAIN_GET_INFO = 16, REMOTE_PROC_DOMAIN_GET_MAX_MEMORY = 17, REMOTE_PROC_DOMAIN_GET_MAX_VCPUS = 18, REMOTE_PROC_DOMAIN_GET_OS_TYPE = 19, REMOTE_PROC_DOMAIN_GET_VCPUS = 20, REMOTE_PROC_LIST_DEFINED_DOMAINS = 21, REMOTE_PROC_DOMAIN_LOOKUP_BY_ID = 22, REMOTE_PROC_DOMAIN_LOOKUP_BY_NAME = 23, REMOTE_PROC_DOMAIN_LOOKUP_BY_UUID = 24, REMOTE_PROC_NUM_OF_DEFINED_DOMAINS = 25, REMOTE_PROC_DOMAIN_PIN_VCPU = 26, REMOTE_PROC_DOMAIN_REBOOT = 27, REMOTE_PROC_DOMAIN_RESUME = 28, REMOTE_PROC_DOMAIN_SET_AUTOSTART = 29, REMOTE_PROC_DOMAIN_SET_MAX_MEMORY = 30, REMOTE_PROC_DOMAIN_SET_MEMORY = 31, REMOTE_PROC_DOMAIN_SET_VCPUS = 32, REMOTE_PROC_DOMAIN_SHUTDOWN = 33, REMOTE_PROC_DOMAIN_SUSPEND = 34, REMOTE_PROC_DOMAIN_UNDEFINE = 35, REMOTE_PROC_LIST_DEFINED_NETWORKS = 36, REMOTE_PROC_LIST_DOMAINS = 37, REMOTE_PROC_LIST_NETWORKS = 38, REMOTE_PROC_NETWORK_CREATE = 39, REMOTE_PROC_NETWORK_CREATE_XML = 40, REMOTE_PROC_NETWORK_DEFINE_XML = 41, REMOTE_PROC_NETWORK_DESTROY = 42, REMOTE_PROC_NETWORK_DUMP_XML = 43, REMOTE_PROC_NETWORK_GET_AUTOSTART = 44, REMOTE_PROC_NETWORK_GET_BRIDGE_NAME = 45, REMOTE_PROC_NETWORK_LOOKUP_BY_NAME = 46, REMOTE_PROC_NETWORK_LOOKUP_BY_UUID = 47, REMOTE_PROC_NETWORK_SET_AUTOSTART = 48, REMOTE_PROC_NETWORK_UNDEFINE = 49, REMOTE_PROC_NUM_OF_DEFINED_NETWORKS = 50, REMOTE_PROC_NUM_OF_DOMAINS = 51, REMOTE_PROC_NUM_OF_NETWORKS = 52, REMOTE_PROC_DOMAIN_CORE_DUMP = 53, 
REMOTE_PROC_DOMAIN_RESTORE = 54, REMOTE_PROC_DOMAIN_SAVE = 55 }; /* Custom RPC structure. */ /* Each message consists of: * int length Number of bytes in message _including_ length. * remote_message_header Header. * then either: args Arguments (for REMOTE_CALL). * or: ret Return (for REMOTE_REPLY, status = REMOTE_OK) * or: remote_error Error (for REMOTE_REPLY, status = REMOTE_ERROR) * * The first two words (length, program number) are meant to be compatible * with the qemud protocol (qemud/protocol.x), although the rest of the * messages are completely different. */ enum remote_message_direction { REMOTE_CALL = 0, /* client -> server */ REMOTE_REPLY = 1, /* server -> client */ REMOTE_MESSAGE = 2 /* server -> client, asynchronous [NYI] */ }; enum remote_message_status { /* Status is always REMOTE_OK for calls. * For replies, indicates no error. */ REMOTE_OK = 0, /* For replies, indicates that an error happened, and a struct * remote_error follows. */ REMOTE_ERROR = 1 }; struct remote_message_header { unsigned prog; /* REMOTE_PROGRAM */ unsigned vers; /* REMOTE_PROTOCOL_VERSION */ remote_procedure proc; /* REMOTE_PROC_x */ remote_message_direction direction; unsigned serial; /* Serial number of message. */ remote_message_status status; }; /* * vim: set tabstop=4: * vim: set shiftwidth=4: * vim: set expandtab: */ /* * Local variables: * indent-tabs-mode: nil * c-indent-level: 4 * c-basic-offset: 4 * tab-width: 4 * End: */
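The length-prefixed framing described in the comment above can be sketched in plain C. This is an illustration only: `struct hdr`, `put32` and `frame` are hypothetical stand-ins for the XDR-generated header type and serialiser, relying on the fact that XDR encodes unsigned ints as 4-byte big-endian words.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified stand-in for the XDR-encoded remote_message_header. */
struct hdr {
    uint32_t prog, vers, proc, direction, serial, status;
};

/* Write a 32-bit value in network (big-endian) byte order, as XDR does. */
static void put32(unsigned char *p, uint32_t v) {
    p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

/* Frame a message: total length (including the length word itself),
 * then the six header words, then the payload. Returns bytes written. */
static size_t frame(unsigned char *buf, const struct hdr *h,
                    const unsigned char *payload, size_t plen) {
    size_t total = 4 + 6 * 4 + plen;
    put32(buf, (uint32_t) total);
    put32(buf + 4,  h->prog);
    put32(buf + 8,  h->vers);
    put32(buf + 12, h->proc);
    put32(buf + 16, h->direction);
    put32(buf + 20, h->serial);
    put32(buf + 24, h->status);
    memcpy(buf + 28, payload, plen);
    return total;
}
```

Note how the length word counts itself, per the comment in the protocol file; a peer reads 4 bytes, then `length - 4` more.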

On Sat, May 05, 2007 at 11:53:32AM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
2 Client-server protocol ------------------------
A qemud/remote_protocol.x A qemud/rpcgen_fix.pl A qemud/remote_protocol.c A qemud/remote_protocol.h
These are the additional files required to describe the XDR-based protocol for client-server communications, with the homebrew RPC mechanism on top. The first file is the protocol itself.
The second file is a Perl script to run after rpcgen which allows the code to compile with -Wall -Werror -fstrict-aliasing, and fixes a 64 bit bug (that fix really should go upstream BTW). The strict-aliasing issue isn't fully fixed yet - I expect that eventually we should compile remote_protocol.c with -fno-strict-aliasing, for safety. Daniel Veillard also asked that this be rewritten in Python (pah!) which I might do at the same time.
The last two files are the actual rpcgen-generated rpcgen_fix-fixed code. I put these two files into EXTRA_DIST because I feel that people shouldn't be required to have rpcgen & Perl just to compile libvirt. (These files are arch-independent so there is no need to recompile them unless remote_protocol.x itself changes).
While I generally don't like having auto-generated files in CVS, I wonder if this makes us better off for portability to Solaris / BSD. eg, the Perl post-processor probably wouldn't work with someone else's rpcgen output. As long as the generated files are calling 100% public RPC related APIs internally we're probably best off distributing the generated files as part of the release & only manually doing re-generation when we have real changes.
struct remote_open_args {
    /* NB. "name" might be NULL although in practice you can't
     * yet do that using the remote_internal driver. */
    remote_string name;
    int flags;
};
Historically NULL == 'Xen', so should we just force 'name = Xen' in src/libvirt.c and then no internal driver ever has to worry about NULLs in this scenario again.
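Dan's suggestion could be done with a one-line canonicalisation in src/libvirt.c; a minimal sketch, where `normalize_name` is a hypothetical helper, not a real libvirt function:

```c
#include <assert.h>
#include <string.h>

/* Sketch: canonicalise the connection name once at the entry point so
 * that no internal driver ever sees NULL ("Xen" being the historical
 * meaning of a NULL name). */
static const char *normalize_name(const char *name) {
    return name ? name : "Xen";
}
```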
struct remote_get_max_vcpus_args {
    /* The only backend which supports this call is Xen HV, and
     * there the type is ignored so it could be NULL. */
    remote_string type;
};
Good point. I'll implement this API for QEMU real soon now....
struct remote_domain_lookup_by_id_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};
We should remember to formally define the semantics before release :-)
struct remote_domain_get_info_ret {
    unsigned char state;
    unsigned hyper max_mem;
    unsigned hyper memory;
    unsigned short nr_virt_cpu;
    /* XXX cpu_time is declared as unsigned long long */
    unsigned hyper cpu_time;
};
hyper == 64 bits according to the latest XDR spec, isn't it? The only guarantee we need is that cpu_time is at least 64 bits. Given the granularity we should never hit a scenario where cpu_time would overflow 64 bits. So if long long ever happens to be 128, it shouldn't matter that hyper were only 64. So I reckon we can remove the XXX there.
struct remote_network_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_network net;
};

struct remote_network_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_network net;
};
As above.
const REMOTE_PROGRAM = 0x20008086;
Any significance to this number :-) Regards, Dan. -- |=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=| |=- Perl modules: http://search.cpan.org/~danberr/ -=| |=- Projects: http://freshmeat.net/~danielpb/ -=| |=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

Daniel P. Berrange wrote:
Historically NULL == 'Xen', so should we just force 'name = Xen' in src/libvirt.c and then no internal driver ever has to worry about NULLs in this scenario again.
Is the plan not to have NULL meaning "get me the most appropriate connection", sort of in the same way that XOpenDisplay (NULL) works? OTOH I have no idea how to realise such a plan (but that's a completely separate issue).
struct remote_domain_lookup_by_id_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_uuid_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};

struct remote_domain_lookup_by_name_ret {
    /* XXX "Not found" semantic is ill-defined. */
    remote_nonnull_domain dom;
};
We should remember to formally define the semantics before release :-)
So at the moment both "not found" and error conditions turn into virterrors, which was the point of my comment. As far as I can see the way out of this is either to add another call which has better semantics, such as:

int virDomainLookupByID2 (virConnectPtr conn, int id, virDomainPtr *domain_out);

or else to define a special virterror for the "not found" case. The latter is obviously more compatible and doesn't require extra interfaces. In any case, this is a separate issue from the remote patch.
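The semantics of the proposed two-result call can be sketched as follows; this is illustrative only, with `lookup_by_id2` and `fake_domain` standing in for the proposed virDomainLookupByID2 and virDomainPtr, and a static table standing in for the hypervisor:

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the proposed "better semantics": return 0 with *domain_out
 * set on success, 0 with *domain_out == NULL when the domain simply
 * does not exist, and (in a real implementation) -1 on genuine error.
 * This separates "not found" from "error", which virterror currently
 * conflates. */
typedef struct fake_domain { int id; } fake_domain;

static fake_domain table[] = { {1}, {5} };

static int lookup_by_id2(int id, fake_domain **domain_out) {
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++) {
        if (table[i].id == id) {
            *domain_out = &table[i];
            return 0;
        }
    }
    *domain_out = NULL;  /* not found, but no error occurred */
    return 0;
}
```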
struct remote_domain_get_info_ret {
    unsigned char state;
    unsigned hyper max_mem;
    unsigned hyper memory;
    unsigned short nr_virt_cpu;
    /* XXX cpu_time is declared as unsigned long long */
    unsigned hyper cpu_time;
};
hyper == 64 bits according to the latest XDR spec, isn't it? The only guarantee we need is that cpu_time is at least 64 bits. Given the granularity we should never hit a scenario where cpu_time would overflow 64 bits. So if long long ever happens to be 128, it shouldn't matter that hyper were only 64. So I reckon we can remove the XXX there.
Yes hyper is 64 bits (always). I'll remove the XXX. Rich.

OK so this is step 2.5 out of 8 ... it wasn't part of the original plan.

2.5 Export virGetDomain and virGetNetwork
-----------------------------------------
M src/libvirt_sym.version
M src/hash.c
M src/internal.h

This patch exports the virGetDomain and virGetNetwork functions as "internal" functions (__virGetDomain and __virGetNetwork) for use by the remote daemon.

The use is as follows: the client needs to invoke a function such as virDomainSuspend (virDomainPtr dom). In order to do this it has to send a reference to "dom" over the wire to the server, which it does by encoding a remote_nonnull_domain XDR object (basically the name and UUID). On the server side we use virGetDomain to pull out the corresponding virDomainPtr from the hash associated with the connection.

Rich.

On Sat, May 05, 2007 at 12:00:49PM +0100, Richard W.M. Jones wrote:
OK so this is step 2.5 out of 8 ... it wasn't part of the original plan.
2.5 Export virGetDomain and virGetNetwork -----------------------------------------
M src/libvirt_sym.version M src/hash.c M src/internal.h
This patch exports virGetDomain and virGetNetwork functions as "internal" functions (__virGetDomain and __virGetNetwork) for use by the remote daemon.
The use is as follows: client needs to invoke a function such as virDomainSuspend (virDomainPtr dom). In order to do this it has to send a reference to "dom" over the wire to the server, which it does by encoding a remote_nonnull_domain XDR object (basically the name and UUID). On the server side we use virGetDomain to pull out the corresponding virDomainPtr from the hash associated with the connection.
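The (name, UUID) round trip described above can be sketched in plain C; this is an illustration only, with `struct domain_ref` standing in for remote_nonnull_domain and a linear table standing in for the connection's domain hash consulted via __virGetDomain:

```c
#include <assert.h>
#include <string.h>

#define UUID_LEN 16

/* Wire representation of a domain reference: (name, UUID), as carried
 * by remote_nonnull_domain. */
struct domain_ref {
    char name[64];
    unsigned char uuid[UUID_LEN];
};

/* Hypothetical server-side table standing in for the per-connection
 * domain hash. */
struct domain {
    char name[64];
    unsigned char uuid[UUID_LEN];
    int id;
};

/* Resolve a wire reference back to the server-side object by UUID;
 * NULL models the "not found" case whose semantics are debated above. */
static struct domain *lookup(struct domain *table, size_t n,
                             const struct domain_ref *ref) {
    for (size_t i = 0; i < n; i++)
        if (memcmp(table[i].uuid, ref->uuid, UUID_LEN) == 0)
            return &table[i];
    return NULL;
}
```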
Looks reasonable. The only other alternative would be to explicitly look up a domain by UUID on every call, eg.

dom = virDomainLookupByUUID (uuid);
...do the work...
virDomainFree (dom);

The virConnectPtr object has these objects cached so it wouldn't necessarily be any slower this way. It would have the downside of a slight semantic change - LookupByUUID may actually hit the underlying XenD APIs if the cache was empty. This may make for hard-to-debug problems & complicate error reporting. So, I think it is reasonable to export virGetDomain & virGetNetwork to allow direct use of the cache in this scenario. Dan.

Daniel P. Berrange wrote:
On Sat, May 05, 2007 at 12:00:49PM +0100, Richard W.M. Jones wrote:
OK so this is step 2.5 out of 8 ... it wasn't part of the original plan.
2.5 Export virGetDomain and virGetNetwork -----------------------------------------
M src/libvirt_sym.version M src/hash.c M src/internal.h The virConnectPtr object has these objects cached so it wouldn't necessarily be any slower this way. It would have the downside of a slight semantic change - LookupByUUID may actually hit the underlying XenD APIs if the cache was empty. This may make for hard-to-debug problems & complicate error reporting. So, I think it is reasonable to export virGetDomain & virGetNetwork to allow direct use of the cache in this scenario.
Committed to CVS. Rich.

Richard W.M. Jones wrote:
3 Client-side -------------
A src/remote_internal.c A src/remote_internal.h M src/driver.h M src/libvirt.c
This is the code which serialises requests on the client side.

First up is a small patch against driver.h and libvirt.c which just declares and registers the new driver.

Secondly, the driver itself (remote_internal.h / remote_internal.c) which I will describe in more detail. The header file is small and I think non-controversial. It just declares the remoteRegister function and a handful of constants like paths and port numbers.

The driver itself (remote_internal.c) consists of the following parts, arranged here in the same order that they appear in the file:

(1) remoteOpen and associated, GnuTLS initialisation

This function and its associated functions deal with parsing the remote URI, deciding which transport to use, deciding the URI to forward to the remote end, creating the transport (including initialising GnuTLS, forking ssh, ...), and verifying the server's certificate.

(2) The driver functions, remoteClose, remoteType, etc. etc.
(3) The network driver functions, remoteNetworkOpen, etc. etc.

For each function we serialise the parameters as an XDR remote_foo_args object, use the generic "call" function to dispatch the call, then deserialise the XDR remote_foo_ret return value. Most of the heavy lifting for the RPC is done in the "call" function itself ...

(4) The RPC implementation (function "call"), error handling, serialisation of virDomainPtr and virNetworkPtr.

... which also deals with making error handling transparent. There are also some functions at the bottom to handle serialising and deserialising virDomainPtr and virNetworkPtr. We do this by essentially translating them into (name, UUID) pairs.

(5) Table of functions and registration.

And finally comes the table of functions and the declaration of remoteRegister. If you're having difficulty following XDR code, it may help to read this first: http://et.redhat.com/~rjones/xdr_tests/ Rich.
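The registration pattern in part (5) is, in outline, a table of function pointers handed to libvirt.c. The sketch below is illustrative only: `fake_driver`, `remote_open` and `remote_close` are stand-ins, not the real virDriver layout or the real remote driver entry points.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Outline of a driver table: the driver fills in a struct of function
 * pointers, and the front end dispatches every public API call through
 * whichever table's open() accepted the URI. */
typedef struct fake_driver {
    const char *name;
    int (*open)(const char *uri);
    int (*close)(void);
} fake_driver;

static int remote_open(const char *uri) {
    (void) uri;     /* a real driver would parse the URI here */
    return 0;
}

static int remote_close(void) {
    return 0;
}

static const fake_driver remote_driver = {
    "remote",
    remote_open,
    remote_close,
};
```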

On Sat, May 05, 2007 at 12:17:44PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
3 Client-side -------------
A src/remote_internal.c A src/remote_internal.h M src/driver.h M src/libvirt.c
This is the code which serialises requests on the client side. First up is a small patch against driver.h and libvirt.c which just declares and registers the new driver.
Secondly, the driver itself (remote_internal.h / remote_internal.c) which I will describe in more detail.
The header file is small and I think non-controversial. It just declares the remoteRegister function and a handful of constants like paths and port numbers.
What sort of info is currently stored in the $sysconfdir/libvirtd.conf file? It seems to be referred to by both the client & server. Although there will be options common to both the client & server, I think it may be worth having them refer to separate files (cf. ssh_config, sshd_config). AFAICT there's no compelling need to have them refer to the same file since the client & server will be on different machines anyway. The separation could be useful if there turns out to be a reason to make libvirtd.conf mode 0600. On the subject of
/* GnuTLS functions used by remoteOpen. */
#define CAFILE "demoCA/cacert.pem"      /* XXX */
#define KEY_FILE "127001key.pem"        /* XXX */
#define CERT_FILE "127001cert.pem"      /* XXX */
We've got to think about the best way to provide TLS parameters. Bearing in mind that people may want to port to Mozilla NSS in the future, I think it is probably best not to provide formal APIs for these options at this time, and instead pick them up from the client's config file. As previously discussed there's no clear standard for the location of TLS certs, so I reckon we just default to something like:

tls_ca_cert = "/etc/pks/tls/ca-cert.pem"
tls_secret = "/etc/pks/tls/libvirt-key.pem"
tls_cert = "/etc/pks/tls/libvirt-cert.pem"

This actually does suggest we need separate libvirt.conf & libvirtd.conf because obviously the server needs to point to its own key/cert & also keep a whitelist of client keys. The client might actually need a more flexible config - allowing it to present different keys, per host. We could do something based on a placeholder, with a default fallback:

tls_ca_cert = "/etc/pks/tls/ca-cert.pem"
tls_default_secret = "/etc/pks/tls/libvirt-key.pem"
tls_default_cert = "/etc/pks/tls/libvirt-cert.pem"
tls_host_secret = "/etc/pks/tls/libvirt-%h-key.pem"
tls_host_cert = "/etc/pks/tls/libvirt-%h-cert.pem"

Since we're using client certs for authentication to start with, we ought to actually make it possible to keep the certs in per-user home dirs rather than globally readable, eg. $HOME/.pki/tls/libvirt-key.pem with mode 0600.
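The "%h" placeholder proposed above could be expanded along these lines; a minimal sketch, where `expand_host` is a hypothetical helper (the config option names and paths are from the proposal, not an implemented feature):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Expand the first "%h" in a cert-path template to the remote hostname;
 * templates without "%h" are copied through unchanged (the default
 * fallback case). */
static void expand_host(char *out, size_t outlen,
                        const char *tmpl, const char *host) {
    const char *p = strstr(tmpl, "%h");
    if (!p) {
        snprintf(out, outlen, "%s", tmpl);
        return;
    }
    snprintf(out, outlen, "%.*s%s%s",
             (int) (p - tmpl), tmpl,   /* text before %h */
             host,                     /* substituted hostname */
             p + 2);                   /* text after %h */
}
```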
The driver itself (remote_internal.c) consists of the following parts, arranged here in the same order that they appear in the file:
(1) remoteOpen and associated, GnuTLS initialisation
This function and associated functions deals with parsing the remote URI, deciding which transport to use, deciding the URI to forward to the remote end, creating the transport (including initialising GnuTLS, forking ssh, ...), and verifying the server's certificate.
Wrt to this snippet in negotiate_gnutls_on_connection()
const int cert_type_priority[3] = { GNUTLS_CRT_X509, GNUTLS_CRT_OPENPGP, 0 };
Do we have support in the client/server for doing whatever checks are needed for the OpenPGP style 'certs'. If not we should probably comment out the OpenGPG option for now.
(2) The driver functions, remoteClose, remoteType, etc. etc. (3) The network driver functions, remoteNetworkOpen, etc. etc.
For each function we serialise the parameters as an XDR remote_foo_args object, use the generic "call" function to dispatch the call, then deserialise the XDR remote_foo_ret return value. Most of the heavy lifting for the RPC is done in the "call" function itself ...
This all looks good to me. Only thought I had was perhaps that

GET_PRIVATE (conn, -1);

is slightly more readable as

struct private_data *priv = GET_PRIVATE (conn, -1);

ie, to make it clear to the reader what local variable is being provided, while still hiding away all the tedious error checking / assertion logic.
(4) The RPC implementation (function "call"), error handling, serialisation of virDomainPtr and virNetworkPtr.
In the 'call' method:
/* Get a unique serial number for this message. */
/* XXX This is supposed to be atomic over threads, but I don't know
 * if it is. */
static sig_atomic_t counter = 1;
sig_atomic_t serial = counter++;
Serial numbers only need to be unique within the scope of a single client <-> server socket connection. So could we just store the counter in the virConectPtr private data struct to avoid the threading question here
... which also deals with making error handling transparent.
static void
error (virConnectPtr conn, virErrorNumber code, const char *info)
{
    const char *errmsg;

    /* XXX I don't think this is correct use of virErrorMsg. */
    errmsg = __virErrorMsg (code, info);
This looks consistent with the way other drivers are using __virErrorMsg at least. Anything in particular you thought was wrong? Dan.

Daniel P. Berrange wrote:
On Sat, May 05, 2007 at 12:17:44PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
3 Client-side -------------
A src/remote_internal.c A src/remote_internal.h M src/driver.h M src/libvirt.c [...] What sort of info is currently stored in the $sysconfdir/libvirtd.conf file ? It seems to be referred to by both the client & server.
Hopefully only the server is reading this file. It contains things like:

* What ports the server should listen on
* Should the server verify TLS certs?
* ACL (list of IP addresses) of clients
* Location of various PKI files

All of these things have sensible defaults, so you can run the server without the file at all. If you look at qemud/qemud.c:remoteReadConfigFile you'll find the code which reads this file. Yes, this needs to be documented (see step 7 of 8).
On the subject of
/* GnuTLS functions used by remoteOpen. */
#define CAFILE "demoCA/cacert.pem"      /* XXX */
#define KEY_FILE "127001key.pem"        /* XXX */
#define CERT_FILE "127001cert.pem"      /* XXX */
We've got to think about best way to provide TLS parameters. Bearing in mind that people may want to port to Mozilla NSS in the future I think it is probably best to not provide formal APIs for these options at this time, and instead pick them up from the client's config file. As previously discussed there's no clear standard for location of TLS certs, so I reckon we just default to something like
tls_ca_cert = "/etc/pks/tls/ca-cert.pem"
tls_secret = "/etc/pks/tls/libvirt-key.pem"
tls_cert = "/etc/pks/tls/libvirt-cert.pem"
Yes, agreed. That is a to-do action, as well as documenting how to create these files. One thing I need to do is go back over my notes and work out how _I_ created the files ...
Since we're using client certs for authentication to start with, we ought to actually make it possible to keep the certs in per-user home dirs rather than globally readable. eg $HOME/.pki/tls/libvirt-key.pem and mode 0600
Yes, I'll make a note to talk to the Fedora people about this.
The driver itself (remote_internal.c) consists of the following parts, arranged here in the same order that they appear in the file:
(1) remoteOpen and associated, GnuTLS initialisation
This function and associated functions deals with parsing the remote URI, deciding which transport to use, deciding the URI to forward to the remote end, creating the transport (including initialising GnuTLS, forking ssh, ...), and verifying the server's certificate.
Wrt to this snippet in negotiate_gnutls_on_connection()
const int cert_type_priority[3] = { GNUTLS_CRT_X509, GNUTLS_CRT_OPENPGP, 0 };
Do we have support in the client/server for doing whatever checks are needed for the OpenPGP style 'certs'. If not we should probably comment out the OpenGPG option for now.
Actually looking at the code it's worse than that - at the moment it accepts the certificate and then rejects it later on because it's not an X.509 cert. --> To fix.
(2) The driver functions, remoteClose, remoteType, etc. etc. (3) The network driver functions, remoteNetworkOpen, etc. etc.
For each function we serialise the parameters as an XDR remote_foo_args object, use the generic "call" function to dispatch the call, then deserialise the XDR remote_foo_ret return value. Most of the heavy lifting for the RPC is done in the "call" function itself ...
This all looks good to me. Only thought I had was perhaps that
GET_PRIVATE (conn, -1);
is slightly more readable as
struct private_data *priv = GET_PRIVATE (conn, -1);
ie, to make it clear to the reader what local variable is being provided, while still hiding away all the tedious error checking / assertion logic.
The problem is that GET_PRIVATE has side effects, in particular it can return with an error if the connection object or private pointer is corrupt. Would this require a gcc extension to implement? I'll check. BTW, although this business of checking if connections are NULL and checking magics and then returning is done a lot in libvirt, I don't agree with it _at all_. I think that the library should abort / assert fail in such cases, because the earlier that memory corruption is detected the better. (Better still would be to use a language where this can't happen).
(4) The RPC implementation (function "call"), error handling, serialisation of virDomainPtr and virNetworkPtr.
In the 'call' method:
/* Get a unique serial number for this message. */
/* XXX This is supposed to be atomic over threads, but I don't know
 * if it is. */
static sig_atomic_t counter = 1;
sig_atomic_t serial = counter++;
Serial numbers only need to be unique within the scope of a single client <-> server socket connection. So could we just store the counter in the virConectPtr private data struct to avoid the threading question here
Yes, good thinking. I'll fix it.
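The fix agreed here amounts to moving the counter into the per-connection state; a minimal sketch, where `struct private_data` and its `counter` field are illustrative names for the driver's private struct, not the committed code:

```c
#include <assert.h>

/* Sketch: a per-connection serial counter instead of a file-scope
 * static. Since each counter lives inside one connection's private
 * data, no cross-thread atomicity question arises as long as a single
 * connection is not shared between threads. */
struct private_data {
    int counter;
    /* ... socket fd, TLS session, etc. ... */
};

static int next_serial(struct private_data *priv) {
    return priv->counter++;
}
```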
... which also deals with making error handling transparent.
static void
error (virConnectPtr conn, virErrorNumber code, const char *info)
{
    const char *errmsg;

    /* XXX I don't think this is correct use of virErrorMsg. */
    errmsg = __virErrorMsg (code, info);
This looks consistent with the way other drivers are using __virErrorMsg at least. Anything in particular you thought was wrong ?
I looked at what precisely happened inside virterror on Friday and came to the conclusion that this was correct, so I'll remove the comment. Rich.

On Sat, May 05, 2007 at 12:17:44PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
3 Client-side -------------
A src/remote_internal.c A src/remote_internal.h M src/driver.h M src/libvirt.c
This is the code which serialises requests on the client side. First up is a small patch against driver.h and libvirt.c which just declares and registers the new driver.
Secondly, the driver itself (remote_internal.h / remote_internal.c) which I will describe in more detail.
The header file is small and I think non-controversial. It just declares the remoteRegister function and a handful of constants like paths and port numbers.
The driver itself (remote_internal.c) consists of the following parts, arranged here in the same order that they appear in the file:
(1) remoteOpen and associated, GnuTLS initialisation
I've got a question about this comment:

/* XXX This loop contains a subtle problem. In the case
 * where a host is accessible over IPv4 and IPv6, it will
 * try the IPv4 and IPv6 addresses in turn. However it
 * should be able to present different client certificates
 * (because the commonName field in a client cert contains
 * the client IP address, which is different for IPv4 and
 * IPv6). At the moment we only have a single client
 * certificate, and no way to specify what address family
 * that certificate belongs to. */

It is my understanding that the 'commonName' should be the public user-facing name of the server. eg, if the user is accessing

https://foo.example.com/

then commonName in the certificate would be 'foo.example.com'. The commonName should be verified against the user-supplied address, which in this case would also be foo.example.com. So if foo.example.com in turn resolved to both IPv4 & IPv6, eg

$ host foo.example.com
foo.example.com has address 60.64.20.142
foo.example.com has IPv6 address 2001:71c2:1:ee34::2

then it really shouldn't matter whether we ended up connecting over IPv4 or IPv6, because we're still comparing foo.example.com against foo.example.com AFAICT.

Now, if the certificate was configured with an IP address in its commonName, and the user connected to 'foo.example.com', then certificate validation should fail because 60.64.20.142 != foo.example.com and neither is the IPv6 address 2001:71c2:1:ee34::2. It should only succeed if the user was explicitly connecting with https://60.64.20.142/

Basically, DNS lookups should not be involved in any part of the certificate validation process - the user-provided hostname should be validated unchanged against the certificate's embedded commonName.

Regards, Dan.
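The validation rule Dan describes - compare the name the user typed, unchanged, against the certificate's commonName, with no DNS involved - can be sketched like this. Illustrative only: `name_matches_cn` is a hypothetical helper (a real implementation would use the TLS library's own verification, and commonName matching also has wildcard rules not shown here).

```c
#include <assert.h>
#include <ctype.h>

/* Compare the user-supplied server name against the certificate's
 * commonName, case-insensitively as is usual for hostnames. No DNS
 * resolution happens anywhere in this check. */
static int name_matches_cn(const char *user_name, const char *cn) {
    for (; *user_name && *cn; user_name++, cn++)
        if (tolower((unsigned char) *user_name) !=
            tolower((unsigned char) *cn))
            return 0;
    /* Match only if both strings ended together. */
    return *user_name == *cn;
}
```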

Daniel P. Berrange wrote:
On Sat, May 05, 2007 at 12:17:44PM +0100, Richard W.M. Jones wrote:
(1) remoteOpen and associated, GnuTLS initialisation
I've got a question about this comment
/* XXX This loop contains a subtle problem. In the case
 * where a host is accessible over IPv4 and IPv6, it will
 * try the IPv4 and IPv6 addresses in turn. However it
 * should be able to present different client certificates
 * (because the commonName field in a client cert contains
 * the client IP address, which is different for IPv4 and
 * IPv6). At the moment we only have a single client
 * certificate, and no way to specify what address family
 * that certificate belongs to. */
It is my understanding that the 'commonName' should be the public user facing name of the server. eg, if the user is accessing
Then commonName in the certificate would be 'foo.example.com'. The commonName should be verified against the user supplied address, which in this case would also be foo.example.com.
Right, but this comment is about the client's certificate which is presented to and checked by the server. The server knows only the IP address of the client (well, it could do a DNS PTR lookup, but it shouldn't trust the results since they are under the control of the client too!)

But what is the real solution here? Either allow the client to have multiple certificates (of course marked as IPv4 or IPv6 certificates, and perhaps other namespaces too?!), or else do some name-mangling so that IPv4 and IPv6 addresses can be compared, prepending or removing ::ffff: as appropriate?

Rich.
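For the name-mangling option, a minimal sketch of what normalizing RFC 4291 IPv4-mapped IPv6 addresses before comparison could look like. The helper names are hypothetical, not from the actual patch:

```c
#include <assert.h>
#include <string.h>

/* Strip an RFC 4291 "::ffff:" IPv4-mapped prefix, so that a peer
 * connecting over v4-mapped IPv6 compares equal to a plain IPv4
 * address in a cert's commonName.  Hypothetical helper. */
static const char *
normalize_addr (const char *addr)
{
    if (strncmp (addr, "::ffff:", 7) == 0)
        return addr + 7;
    return addr;
}

/* Compare the cert's commonName against the peer address, after
 * normalizing both sides.  Returns nonzero on a match. */
static int
addr_matches (const char *common_name, const char *peer_addr)
{
    return strcmp (normalize_addr (common_name),
                   normalize_addr (peer_addr)) == 0;
}
```

This only papers over the v4-mapped case; a client genuinely reachable at distinct v4 and v6 addresses would still need a certificate per address family.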

On Tue, May 08, 2007 at 12:20:17PM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Sat, May 05, 2007 at 12:17:44PM +0100, Richard W.M. Jones wrote:
(1) remoteOpen and associated, GnuTLS initialisation
I've got a question about this comment
/* XXX This loop contains a subtle problem.  In the case
 * where a host is accessible over IPv4 and IPv6, it will
 * try the IPv4 and IPv6 addresses in turn.  However it
 * should be able to present different client certificates
 * (because the commonName field in a client cert contains
 * the client IP address, which is different for IPv4 and
 * IPv6).  At the moment we only have a single client
 * certificate, and no way to specify what address family
 * that certificate belongs to. */
It is my understanding that the 'commonName' should be the public user facing name of the server. eg, if the user is accessing
Then commonName in the certificate would be 'foo.example.com'. The commonName should be verified against the user supplied address, which in this case would also be foo.example.com.
Right, but this comment is about the client's certificate which is presented to and checked by the server.
Ahh.
The server knows only the IP address of the client (well, it could do a DNS PTR lookup, but it shouldn't trust the results since they are under the control of the client too!)
But what is the real solution here? Either allow the client to have multiple certificates (of course marked as IPv4 or IPv6 certificates, and perhaps other namespaces too?!), or else do some name-mangling so that IPv4 and IPv6 addresses can be compared, prepending or removing ::ffff: as appropriate?
So the question is, is there any meaningful security to be gained by having the server check the commonName field of the client's certificate against the client's incoming IP addr, whether v4 or v6 ? Perhaps the only thing the server should be using the client cert's commonName field for is lookups in its whitelist of allowed clients ? Have you any idea what, say Exim or Apache, do for validation when getting a client cert ? Do they bother to check the commonName against the client's source addr, or do they merely use it for access control lookups ?

Dan.

Daniel P. Berrange wrote:
So the question is, is there any meaningful security to be gained by having the server check the commonName field of the client's certificate against the client's incoming IP addr whether v4 or v6 ? Perhaps the only thing the server should be using the client cert's commonName field for is lookups in its whitelist of allowed clients ? Have you any idea what, say Exim or Apache, do for validation when getting a client cert ? Do they bother to check the commonName against the client's source addr, or do they merely use it for access control lookups ?
I'm sure the extra security afforded must be very marginal indeed. Perhaps protection against IP address spoofing attacks? However those aren't very common since operating systems started to choose decent sequence numbers, and in any case while it might be possible to spoof a three-way TCP handshake, I wouldn't want to try spoofing a TLS handshake... So I don't know, but I'll take a look at the source for exim to see what they do. Rich.

Hey, On Tue, 2007-05-08 at 12:28 +0100, Daniel P. Berrange wrote:
On Tue, May 08, 2007 at 12:20:17PM +0100, Richard W.M. Jones wrote:
The server knows only the IP address of the client (well, it could do a DNS PTR lookup, but it shouldn't trust the results since they are under the control of the client too!)
But what is the real solution here? Either allow the client to have multiple certificates (of course marked as IPv4 or IPv6 certificates, and perhaps other namespaces too?!), or else do some name-mangling so that IPv4 and IPv6 addresses can be compared, prepending or removing ::ffff: as appropriate?
So the question is, is there any meaningful security to be gained by having the server check the commonName field of the client's certificate against the client's incoming IP addr whether v4 or v6 ? Perhaps the only thing the server should be using the client cert's commonName field for is lookups in its whitelist of allowed clients ? Have you any idea what, say Exim or Apache, do for validation when getting a client cert ? Do they bother to check the commonName against the client's source addr, or do they merely use it for access control lookups ?
When we discussed this on irc back in February, I looked this up in Postfix. So, looking back over the irc log:

* Postfix will auth a client if it can validate the cert (i.e. the CA which issued the cert is trusted) and the fingerprint of the client's cert is listed in the list of allowed clients:

  http://www.postfix.org/postconf.5.html#relay_clientcerts

* It would seem reasonable to me that you could list *either* the cert fingerprint of an allowed client or its SubjectName. In the latter case, you'd merely check that the SubjectName field in the (already validated) cert matches an entry in the list of allowed clients - i.e. you don't interpret the contents of SubjectName. I've no clue why Postfix doesn't allow this - if you trust the CA, then you can trust the SubjectName. This scheme would e.g. allow you to issue a new certificate for a given client without updating the list of allowed clients on the server.

* Also, Postfix allows you to trust all clients with certs from trusted CAs:

  http://www.postfix.org/postconf.5.html#permit_tls_all_clientcerts

  It seems like an odd configuration option to me. You'd probably only use this with a single trusted CA which you have direct control over.

Cheers,
Mark.

Mark McLoughlin wrote:
* Also, Postfix allows you to trust all clients with certs from trusted CAs:
http://www.postfix.org/postconf.5.html#permit_tls_all_clientcerts
It seems like an odd configuration option to me. You'd probably only use this with a single trusted CA which you have direct control over.
This is actually a common and useful configuration. You set up your own CA and point the server's CACERT to your own CA's certificate (and no other CA). Then only the clients for which you issue certificates can connect, and this is controlled by distribution of the private keys, not by explicit access control lists. If a private key file goes AWOL then you can revoke it.

Note that libvirtd _doesn't_ quite support this sort of access because it doesn't support wildcards in the commonNames in the client certificates, but that would be a useful and simple addition.

Rich.

On Mon, 2007-05-14 at 09:27 +0100, Richard W.M. Jones wrote:
Mark McLoughlin wrote:
* Also, Postfix allows you to trust all clients with certs from trusted CAs:
http://www.postfix.org/postconf.5.html#permit_tls_all_clientcerts
It seems like an odd configuration option to me. You'd probably only use this with a single trusted CA which you have direct control over.
This is actually a common and useful configuration.
You set up your own CA and point the server's CACERT to your own CA's certificate (and no other CA). Then only the clients for which you issue certificates can connect, and this is controlled by distribution of the private keys, not by explicit access control lists. If a private key file goes AWOL then you can revoke it.
Yes.
Note that libvirtd _doesn't_ quite support this sort of access because it doesn't support wildcards in the commonNames in the client certificates, but that would be a useful and simple addition.
I don't grok this ... why would you want a wildcard in the subjectName of a client certificate? Or do you mean allowing wildcards in the access control list of client subjectNames? Cheers, Mark.

Mark McLoughlin wrote:
Note that libvirtd _doesn't_ quite support this sort of access because it doesn't support wildcards in the commonNames in the client certificates, but that would be a useful and simple addition.
I don't grok this ... why would you want a wildcard in the subjectName of a client certificate?
Or do you mean allowing wildcards in the access control list of client subjectNames?
At the moment:

The server reliably knows only the IP address of the client. It is given a certificate by the client, which it checks for validity against the CA. It also checks the subjectAltName.iPAddress or commonName field is the IP address (just using strcmp). It may also check that the client's IP address is on a whitelist contained in the server configuration file, although by default this check is switched off.

So you can set up a CA and issue certificates to your clients to control access, but the certificates must contain the right IP address for the client (the client cannot be mobile, in other words).

This weekend I was coincidentally looking at how client certification works in browsers, and there authentication is based on all fields in the Distinguished Name. So you can use any CA, and an access control list of clients held on the server. See for example:

http://www.modssl.org/docs/2.8/ssl_howto.html#auth-particular

I'm not sure what is better and I don't plan on implementing this right away. I think we need to talk to some real-world users.

Rich.

On Mon, 2007-05-14 at 10:01 +0100, Richard W.M. Jones wrote:
Mark McLoughlin wrote:
Note that libvirtd _doesn't_ quite support this sort of access because it doesn't support wildcards in the commonNames in the client certificates, but that would be a useful and simple addition.
I don't grok this ... why would you want a wildcard in the subjectName of a client certificate?
Or do you mean allowing wildcards in the access control list of client subjectNames?
At the moment: The server reliably knows only the IP address of the client.
It is given a certificate by the client, which it checks for validity against the CA.
Yep.
It also checks the subjectAltName.iPAddress or commonName field is the IP address (just using strcmp).
I don't think this is useful or good practice. The IP address of the client is irrelevant, ignore it. Think about clients connecting from behind NAT, for example.
It may also check that the client's IP address is on a whitelist contained in the server configuration file, although by default this check is switched off.
And this has nothing to do with TLS or X.509 certificates. It's no different from e.g. libwrap.
So you can set up a CA and issue certificates to your clients to control access, but the certificates must contain the right IP address for the client (the client cannot be mobile in other words).
Again, I don't think this is useful.
This weekend I was coincidentally looking at how client certification works in browsers, and there authentication is based on all fields in the Distinguished Name. So you can use any CA, and an access control list of clients held on the server. See for example:
http://www.modssl.org/docs/2.8/ssl_howto.html#auth-particular
This looks essentially like a list of strings which will be compared against the SubjectName field in certificates presented by clients. It looks like it requires that the contents of the SubjectName is an X.500 DN, which is probably sane, and allows matching against individual fields in the DN, which is similar to wildcards.

I think this is just another confirmation of what I was saying would be a sane way to do this:

1) Validate the cert was issued by a trusted CA, deny if no
2) Ignore the IP address of the client
3) First check whether the cert fingerprint is on the list of allowed client fingerprints, allow if yes
4) Otherwise check whether the contents of the SubjectName field is on the list of allowed client SubjectNames, allow if yes, deny if no

Postfix does (3), but not (4). Apache does (4), in a fairly fancy way, but not (3).

Cheers,
Mark.
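The four-step scheme above can be modelled in miniature. The list-lookup helpers and the acl struct here are hypothetical stand-ins for whatever config parsing the daemon ends up with; only the check order comes from the thread:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical access-control lists, loaded from the daemon config. */
struct acl {
    const char **fingerprints;   /* NULL-terminated */
    const char **subject_names;  /* NULL-terminated */
};

static int
in_list (const char **list, const char *value)
{
    for (; list && *list; list++)
        if (strcmp (*list, value) == 0)
            return 1;
    return 0;
}

/* Steps 1-2 happen before this is called: the cert has already been
 * validated against the trusted CA, and the client's IP address is
 * deliberately ignored.  Step 3: allow a whitelisted fingerprint.
 * Step 4: otherwise allow only a whitelisted SubjectName. */
static int
client_allowed (const struct acl *acl,
                const char *fingerprint, const char *subject)
{
    if (in_list (acl->fingerprints, fingerprint))
        return 1;
    return in_list (acl->subject_names, subject);
}
```

A client can thus be granted access either by pinning one specific certificate (fingerprint) or by identity (SubjectName), which survives certificate reissue.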

Mark McLoughlin wrote:
It may also check that the client's IP address is on a whitelist contained in the server configuration file, although by default this check is switched off.
And this has nothing to do with TLS or X.509 certificates. It's no different from e.g. libwrap.
Sure, separate issue.
1) Validate the cert was issued by a trusted CA, deny if no
2) Ignore the IP address of client
3) First check whether the cert fingerprint is on the list of allowed client fingerprints, allow if yes
4) Otherwise check whether the contents of the SubjectName name field is on the list of allowed client SubjectNames, allow if yes, deny if no
Postfix does (3), but not (4). Apache does (4), in a fairly fancy way, but not (3).
My reading of:

http://www.postfix.org/TLS_README.html#server_access

<quote>
The Postfix list manipulation routines give special treatment to whitespace and some other characters, making the use of certificate names impractical. Instead we use the certificate fingerprints as they are difficult to fake but easy to use for lookup.
</quote>

... is that Postfix would do (4), but does (3) because of a shortcoming in its configuration file format. (I read "certificate name" to mean DN.)

We don't have that problem. Mark, what do you think?

Rich.

Hi Rich, On Mon, 2007-05-14 at 14:04 +0100, Richard W.M. Jones wrote:
Mark McLoughlin wrote:
1) Validate the cert was issued by a trusted CA, deny if no
2) Ignore the IP address of client
3) First check whether the cert fingerprint is on the list of allowed client fingerprints, allow if yes
4) Otherwise check whether the contents of the SubjectName name field is on the list of allowed client SubjectNames, allow if yes, deny if no
Postfix does (3), but not (4). Apache does (4), in a fairly fancy way, but not (3).
My reading of:
http://www.postfix.org/TLS_README.html#server_access
<quote> The Postfix list manipulation routines give special treatment to whitespace and some other characters, making the use of certificate names impractical. Instead we use the certificate fingerprints as they are difficult to fake but easy to use for lookup. </quote>
... is that Postfix would do (4), but does (3) because of a shortcoming in its configuration file format.
Ah, that explains it.
(I read "certificate name" to mean DN).
Just to be pedantic:

+ Subject Name: the field in X.509 certs which details the identity of the holder of the associated private key
+ Distinguished Name: the X.500 format for describing identity which should be used in the Subject Name field
+ Common Name: one of the possible fields in a Distinguished Name
We don't have that problem.
Nope, we don't. A list of allowed subject names would be fine for us. I do like the option of a list of fingerprints too, but it's not that important if we have the subject name list. And we could also have the "allow all clients with valid certs" option. Cheers, Mark.
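To make the terminology concrete: a Subject Name carrying an X.500 DN is a string like "C=GB,O=Red Hat,CN=client1.example.com", and matching an individual field means pulling out, say, the CN component. The real code would use gnutls_x509_crt_get_dn_by_oid() on the certificate itself; this toy string parser is just an illustration:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Extract the CN= component from a comma-separated X.500 DN string.
 * Toy parser for illustration only; it ignores escaping and assumes
 * "CN=" starts a field.  Returns 0 on success, -1 if absent or the
 * output buffer is too small. */
static int
dn_get_cn (const char *dn, char *out, size_t outlen)
{
    const char *p = dn;

    while (p) {
        if (strncmp (p, "CN=", 3) == 0) {
            p += 3;
            size_t n = strcspn (p, ",");   /* field ends at ',' or NUL */
            if (n >= outlen)
                return -1;
            memcpy (out, p, n);
            out[n] = '\0';
            return 0;
        }
        p = strchr (p, ',');               /* advance to next field */
        if (p)
            p++;
    }
    return -1;
}
```

An ACL could then match either on the whole DN string or on one extracted field such as the CN.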

Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
3 Client-side -------------
A src/remote_internal.c A src/remote_internal.h M src/driver.h M src/libvirt.c
I've fixed the default paths that the client and server use to find PKI certificates now. The updated header file remote_internal.h is attached showing these paths. Obviously there are corresponding changes in the implementation files too, but they are trivial and so I won't bore anyone with them.

Rich.

On Sat, May 05, 2007 at 12:17:44PM +0100, Richard W.M. Jones wrote:
Richard W.M. Jones wrote:
3 Client-side -------------
A src/remote_internal.c A src/remote_internal.h M src/driver.h M src/libvirt.c
A small bug in there - if the TLS session fails to init, then we die with SEGV when calling gnutls_bye() on a NULL priv.session. In the remoteOpen() method the goto is in the wrong place:

    if (priv.uses_tls) {
        priv.session =
            negotiate_gnutls_on_connection
              (conn, priv.sock, no_verify, server);
        if (!priv.session) {
            close (priv.sock);
            priv.sock = -1;
            continue;
        }
        goto tcp_connected;
    }

Needs to be:

    if (priv.uses_tls) {
        priv.session =
            negotiate_gnutls_on_connection
              (conn, priv.sock, no_verify, server);
        if (!priv.session) {
            close (priv.sock);
            priv.sock = -1;
            continue;
        }
    }
    goto tcp_connected;

Otherwise TCP connections will never succeed.

Dan.

4 Server-side call dispatch ---------------------------
A qemud/remote.c A qemud/remote_generate_stubs.pl A qemud/remote_dispatch_localvars.h A qemud/remote_dispatch_proc_switch.h A qemud/remote_dispatch_prototypes.h
As with the client side, the server side dispatch consists mainly of a long and rather tedious list of calls which deal with deserialising the args, calling the appropriate function, and serialising the result back to the client.

In remote.c we enter from the QEMU daemon at remoteDispatchClientRequest with a full request which has been read in but not decoded yet. This function deserialises the request header (remote_message_header, defined in remote_protocol.x) and dispatches the request to the correct remoteDispatch* function. This function is arranged so that the remoteDispatch* functions need to do as little as possible - in particular, error handling is handled transparently as far as they are concerned. The function then serialises the reply (or error) and returns it up to the QEMU daemon to be sent out.

You may notice in remoteDispatchClientRequest that parts are generated automatically, eg. the dispatch switch statement:

    switch (req.proc) {
    #include "remote_dispatch_proc_switch.h"

    default:
        remoteDispatchError (client, &req,
                             "unknown procedure: %d", req.proc);
        xdr_destroy (&xdr);
        return;
    }

This is to remove the need to write this tedious and error-prone code by hand. Each remoteDispatch* function can be as small as possible and deals only with deserialising the remote_foo_args (parameters), calling the appropriate libvirt function, then serialising the remote_foo_ret (return value). Error cases are handled entirely by remoteDispatchClientRequest in order to remove duplicated code.

The remoteDispatch* functions may return one of the following values:

     0 = OK
    -1 = The libvirt function returned an error (remoteDispatchClientRequest will pick up and serialise the error)
    -2 = There was an error in the dispatch function itself, something to do with the mechanics of RPC. In this case the function has already set up an error buffer using remoteDispatchError directly.
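The 0 / -1 / -2 convention can be modelled in miniature. Everything here (the handler, the buffer layout, the error text) is invented for illustration and is not the real remote.c code - only the return-value contract comes from the description above:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

#define DISPATCH_OK         0   /* reply serialised normally */
#define DISPATCH_ERROR     -1   /* libvirt call failed; caller serialises */
#define DISPATCH_RPC_ERROR -2   /* handler already wrote its own error */

typedef int (*dispatch_fn) (const char *args, char *reply, size_t len);

/* Toy generic dispatcher: for -1 it fills in the error reply on
 * behalf of the handler; for -2 it trusts the handler's own error
 * buffer; 0 passes through untouched. */
static int
run_dispatch (dispatch_fn fn, const char *args, char *reply, size_t len)
{
    int rv = fn (args, reply, len);
    if (rv == DISPATCH_ERROR)
        snprintf (reply, len, "remote error: operation failed");
    return rv == DISPATCH_OK ? 0 : -1;
}

/* Example handler: "succeeds" only for one known argument. */
static int
handler (const char *args, char *reply, size_t len)
{
    if (strcmp (args, "good") == 0) {
        snprintf (reply, len, "ok");
        return DISPATCH_OK;
    }
    return DISPATCH_ERROR;
}
```

The point of the split is that each generated remoteDispatch* body stays minimal: it never formats error replies itself unless the failure is in the RPC mechanics (-2).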
The four other attached files are the script used to generate the automatically generated code for remoteDispatchClientRequest, and the 3 files of generated code that it creates, which I suggest we put in EXTRA_DIST.

Rich.

On Sat, May 05, 2007 at 12:30:13PM +0100, Richard W.M. Jones wrote:
You may notice in remoteDispatchClientRequest that parts are generated automatically, eg. the dispatch switch statement:
switch (req.proc) { #include "remote_dispatch_proc_switch.h"
default: remoteDispatchError (client, &req, "unknown procedure: %d", req.proc); xdr_destroy (&xdr); return; }
This is to remove the need to write this tedious and error-prone code by hand.
Good idea. Not really many comments for these files.
void
remoteDispatchClientRequest (struct qemud_server *server ATTRIBUTE_UNUSED,
                             struct qemud_client *client)
....
    char *args = NULL, *ret = NULL;
Could those two variables instead be void * - to avoid the need to cast all assignments to them ? Finally, with
CHECK_CONN;
Is it possible to tweak the macro definition so its use appears as

    CHECK_CONN(client);

So it's clear what variable this macro is doing work against.

Regards,
Dan.

Daniel P. Berrange wrote:
void
remoteDispatchClientRequest (struct qemud_server *server ATTRIBUTE_UNUSED,
                             struct qemud_client *client)
....
    char *args = NULL, *ret = NULL;
Could those two variables instead be void * - to avoid the need to cast all assignments to them ?
Swings & roundabouts ... The "convention" in XDR code would be for them to be char *, but of course that's because XDR code is pre-ANSI and void * didn't exist :-)
Finally, with
CHECK_CONN;
Is it possible to tweak the macro definition so its use appears as
CHECK_CONN(client);
So its clear what variable this macro is doing work against.
Will do. Rich.
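One way the parameterised macro could look - the stub struct and the bare -2 return are simplified stand-ins for the real qemud_client / remoteDispatchError machinery, purely to show the shape Dan is asking for:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for struct qemud_client. */
struct qemud_client_stub {
    void *conn;   /* NULL until the libvirt connection is opened */
};

/* CHECK_CONN now names the variable it inspects.  In the real code
 * the failure path would call remoteDispatchError before returning
 * the RPC-level error code -2. */
#define CHECK_CONN(client)                      \
    do {                                        \
        if ((client)->conn == NULL)             \
            return -2; /* RPC-level error */    \
    } while (0)

/* Every dispatch function would then start with CHECK_CONN (client). */
static int
some_dispatch_fn (struct qemud_client_stub *client)
{
    CHECK_CONN (client);
    return 0;
}
```

The do/while(0) wrapper keeps the macro safe to use as a single statement, e.g. in an unbraced if branch.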

Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
Finally, with
CHECK_CONN;
Is it possible to tweak the macro definition so its use appears as
CHECK_CONN(client);
So its clear what variable this macro is doing work against.
Will do.
Attached is the updated remote.c. The only difference is that CHECK_CONN now takes the client connection as an extra argument, as requested above.

Rich.

5 Client-side QEMUD modifications ---------------------------------
M src/qemu_internal.c
This is part of Dan's patch to refactor the QEMU daemon to use XDR instead of the current homebrew protocol:

http://www.redhat.com/archives/libvir-list/2007-March/msg00333.html

I have modified it in two ways:

(1) Get the max message size from remote_protocol.x

(2) Make the RPC format superficially compatible with the remote protocol, so that at least we can dispatch requests to either remote or QEMU based on the first two words. All I did was swap around the "length" and "magic" fields.

Rich.
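The point of the field swap is that once both protocols start with the same two words, the daemon can peek at them and route the message without decoding the rest. A sketch of that idea - the struct name, field names and magic values here are all made up for illustration, not taken from either protocol definition:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical common prefix shared by both wire formats: total
 * length first, then a protocol-identifying magic word. */
struct generic_header {
    uint32_t length;  /* total message length */
    uint32_t magic;   /* identifies which protocol follows */
};

/* Made-up magic values for the two protocols. */
#define QEMUD_MAGIC_STUB   0x51454d55u
#define REMOTE_MAGIC_STUB  0x20008086u

/* Route on the second word alone; the payload after the header can
 * then be handed to the matching decoder. */
static int
is_remote (const struct generic_header *hdr)
{
    return hdr->magic == REMOTE_MAGIC_STUB;
}
```

In the actual patch this dispatch is not yet used; the daemon instead decides based on whether it was started with --remote, as noted in the server-side mail below.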

6 Server-side QEMUD modifications ---------------------------------
A qemud/protocol.x A qemud/protocol.c M qemud/protocol.h M qemud/Makefile.am M qemud/conf.c M qemud/dispatch.c M qemud/dispatch.h M qemud/internal.h M qemud/qemud.c M qemud/uuid.c
Again, this is Dan's patch to make QEMUD use the XDR protocol:

http://www.redhat.com/archives/libvir-list/2007-March/msg00333.html

The modifications I have made:

(1) Use REMOTE_MESSAGE_MAX from remote_protocol.x.

(2) Make the protocol superficially similar to the remote protocol so that we can dispatch on it, although in the current implementation this is actually not used. Instead we make our dispatch decision based on whether the daemon was started with --remote on the command line.

(3) Allow the daemon to listen on TCP sockets.

(4) Allow the daemon to require GnuTLS encryption negotiation on sockets. In addition, the daemon can now check the client's IP address and certificate against a whitelist.

(5) Add support for a configuration file (/etc/libvirt/libvirtd.conf by default, but can be overridden on the command line), which allows the sockets and certificate stuff to be configured.

To do: find a good default place in the filesystem for the CA and client certificates; change the start-up script so that it can start a QEMU/network daemon and a remote daemon.

Rich.

/* -*- c -*-
 * protocol_xdr.x: wire protocol message format & data structures
 *
 * Copyright (C) 2006, 2007 Red Hat, Inc.
 * Copyright (C) 2006 Daniel P. Berrange
 *
 * This library is free software; you can redistribute it and/or
 * modify it under the terms of the GNU Lesser General Public
 * License as published by the Free Software Foundation; either
 * version 2.1 of the License, or (at your option) any later version.
 *
 * This library is distributed in the hope that it will be useful,
 * but WITHOUT ANY WARRANTY; without even the implied warranty of
 * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
 * Lesser General Public License for more details.
 *
 * You should have received a copy of the GNU Lesser General Public
 * License along with this library; if not, write to the Free Software
 * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
 *
 * Author: Daniel P. Berrange <berrange@redhat.com>
 */

const QEMUD_UUID_RAW_LEN = 16;
const QEMUD_MAX_NAME_LEN = 50;
const QEMUD_MAX_XML_LEN = 4096;
/*#define QEMUD_MAX_IFNAME_LEN IF_NAMESIZE */
const QEMUD_MAX_IFNAME_LEN = 50;
const QEMUD_MAX_NUM_DOMAINS = 100;
const QEMUD_MAX_NUM_NETWORKS = 100;

/*
 * Damn, we can't do multiplication when declaring
 * constants with XDR !
 * These two should be QEMUD_MAX_NUM_DOMAIN * QEMUD_MAX_NAME_LEN
 */
const QEMUD_MAX_DOMAINS_NAME_BUF = 5000;
const QEMUD_MAX_NETWORKS_NAME_BUF = 5000;

const QEMUD_MAX_ERROR_LEN = 1024;

/* Possible guest VM states */
enum qemud_domain_runstate {
    QEMUD_STATE_RUNNING = 1,
    QEMUD_STATE_PAUSED,
    QEMUD_STATE_STOPPED
};

/* Message sent by a client */
enum qemud_packet_client_data_type {
    QEMUD_CLIENT_PKT_GET_VERSION,
    QEMUD_CLIENT_PKT_GET_NODEINFO,
    QEMUD_CLIENT_PKT_LIST_DOMAINS,
    QEMUD_CLIENT_PKT_NUM_DOMAINS,
    QEMUD_CLIENT_PKT_DOMAIN_CREATE,
    QEMUD_CLIENT_PKT_DOMAIN_LOOKUP_BY_ID,
    QEMUD_CLIENT_PKT_DOMAIN_LOOKUP_BY_UUID,
    QEMUD_CLIENT_PKT_DOMAIN_LOOKUP_BY_NAME,
    QEMUD_CLIENT_PKT_DOMAIN_SUSPEND,
    QEMUD_CLIENT_PKT_DOMAIN_RESUME,
    QEMUD_CLIENT_PKT_DOMAIN_DESTROY,
    QEMUD_CLIENT_PKT_DOMAIN_GET_INFO,
    QEMUD_CLIENT_PKT_DOMAIN_SAVE,
    QEMUD_CLIENT_PKT_DOMAIN_RESTORE,
    QEMUD_CLIENT_PKT_DUMP_XML,
    QEMUD_CLIENT_PKT_LIST_DEFINED_DOMAINS,
    QEMUD_CLIENT_PKT_NUM_DEFINED_DOMAINS,
    QEMUD_CLIENT_PKT_DOMAIN_START,
    QEMUD_CLIENT_PKT_DOMAIN_DEFINE,
    QEMUD_CLIENT_PKT_DOMAIN_UNDEFINE,
    QEMUD_CLIENT_PKT_NUM_NETWORKS,
    QEMUD_CLIENT_PKT_LIST_NETWORKS,
    QEMUD_CLIENT_PKT_NUM_DEFINED_NETWORKS,
    QEMUD_CLIENT_PKT_LIST_DEFINED_NETWORKS,
    QEMUD_CLIENT_PKT_NETWORK_LOOKUP_BY_UUID,
    QEMUD_CLIENT_PKT_NETWORK_LOOKUP_BY_NAME,
    QEMUD_CLIENT_PKT_NETWORK_CREATE,
    QEMUD_CLIENT_PKT_NETWORK_DEFINE,
    QEMUD_CLIENT_PKT_NETWORK_UNDEFINE,
    QEMUD_CLIENT_PKT_NETWORK_START,
    QEMUD_CLIENT_PKT_NETWORK_DESTROY,
    QEMUD_CLIENT_PKT_NETWORK_DUMP_XML,
    QEMUD_CLIENT_PKT_NETWORK_GET_BRIDGE_NAME,
    QEMUD_CLIENT_PKT_DOMAIN_GET_AUTOSTART,
    QEMUD_CLIENT_PKT_DOMAIN_SET_AUTOSTART,
    QEMUD_CLIENT_PKT_NETWORK_GET_AUTOSTART,
    QEMUD_CLIENT_PKT_NETWORK_SET_AUTOSTART,
    QEMUD_CLIENT_PKT_GET_CAPABILITIES,
    QEMUD_CLIENT_PKT_MAX
};

/* Messages sent by a server */
enum qemud_packet_server_data_type {
    QEMUD_SERVER_PKT_FAILURE = 0,
    QEMUD_SERVER_PKT_GET_VERSION,
    QEMUD_SERVER_PKT_GET_NODEINFO,
    QEMUD_SERVER_PKT_LIST_DOMAINS,
    QEMUD_SERVER_PKT_NUM_DOMAINS,
    QEMUD_SERVER_PKT_DOMAIN_CREATE,
    QEMUD_SERVER_PKT_DOMAIN_LOOKUP_BY_ID,
    QEMUD_SERVER_PKT_DOMAIN_LOOKUP_BY_UUID,
    QEMUD_SERVER_PKT_DOMAIN_LOOKUP_BY_NAME,
    QEMUD_SERVER_PKT_DOMAIN_SUSPEND,
    QEMUD_SERVER_PKT_DOMAIN_RESUME,
    QEMUD_SERVER_PKT_DOMAIN_DESTROY,
    QEMUD_SERVER_PKT_DOMAIN_GET_INFO,
    QEMUD_SERVER_PKT_DOMAIN_SAVE,
    QEMUD_SERVER_PKT_DOMAIN_RESTORE,
    QEMUD_SERVER_PKT_DUMP_XML,
    QEMUD_SERVER_PKT_LIST_DEFINED_DOMAINS,
    QEMUD_SERVER_PKT_NUM_DEFINED_DOMAINS,
    QEMUD_SERVER_PKT_DOMAIN_START,
    QEMUD_SERVER_PKT_DOMAIN_DEFINE,
    QEMUD_SERVER_PKT_DOMAIN_UNDEFINE,
    QEMUD_SERVER_PKT_NUM_NETWORKS,
    QEMUD_SERVER_PKT_LIST_NETWORKS,
    QEMUD_SERVER_PKT_NUM_DEFINED_NETWORKS,
    QEMUD_SERVER_PKT_LIST_DEFINED_NETWORKS,
    QEMUD_SERVER_PKT_NETWORK_LOOKUP_BY_UUID,
    QEMUD_SERVER_PKT_NETWORK_LOOKUP_BY_NAME,
    QEMUD_SERVER_PKT_NETWORK_CREATE,
    QEMUD_SERVER_PKT_NETWORK_DEFINE,
    QEMUD_SERVER_PKT_NETWORK_UNDEFINE,
    QEMUD_SERVER_PKT_NETWORK_START,
    QEMUD_SERVER_PKT_NETWORK_DESTROY,
    QEMUD_SERVER_PKT_NETWORK_DUMP_XML,
    QEMUD_SERVER_PKT_NETWORK_GET_BRIDGE_NAME,
    QEMUD_SERVER_PKT_DOMAIN_GET_AUTOSTART,
    QEMUD_SERVER_PKT_DOMAIN_SET_AUTOSTART,
    QEMUD_SERVER_PKT_NETWORK_GET_AUTOSTART,
    QEMUD_SERVER_PKT_NETWORK_SET_AUTOSTART,
    QEMUD_SERVER_PKT_GET_CAPABILITIES,
    QEMUD_SERVER_PKT_MAX
};

struct qemud_packet_failure_reply { uint32_t code; char message[QEMUD_MAX_ERROR_LEN]; };
struct qemud_packet_get_version_reply { uint32_t versionNum; };
struct qemud_packet_get_node_info_reply {
    char model[32];
    uint32_t memory;
    uint32_t cpus;
    uint32_t mhz;
    uint32_t nodes;
    uint32_t sockets;
    uint32_t cores;
    uint32_t threads;
};
struct qemud_packet_get_capabilities_reply { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_list_domains_reply { int32_t numDomains; int32_t domains[QEMUD_MAX_NUM_DOMAINS]; };
struct qemud_packet_num_domains_reply { int32_t numDomains; };
struct qemud_packet_domain_create_request { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_domain_create_reply { int32_t id; unsigned char uuid[QEMUD_UUID_RAW_LEN]; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_domain_lookup_by_id_request { int32_t id; };
struct qemud_packet_domain_lookup_by_id_reply { unsigned char uuid[QEMUD_UUID_RAW_LEN]; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_domain_lookup_by_name_request { char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_domain_lookup_by_name_reply { int32_t id; unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_domain_lookup_by_uuid_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_domain_lookup_by_uuid_reply { int32_t id; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_domain_suspend_request { int32_t id; };
struct qemud_packet_domain_resume_request { int32_t id; };
struct qemud_packet_domain_destroy_request { int32_t id; };
struct qemud_packet_domain_get_info_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_domain_get_info_reply {
    uint64_t cpuTime;
    uint32_t runstate;
    uint32_t memory;
    uint32_t maxmem;
    uint32_t nrVirtCpu;
};
struct qemud_packet_domain_save_request { int32_t id; char file[PATH_MAX]; };
struct qemud_packet_domain_restore_request { char file[PATH_MAX]; };
struct qemud_packet_domain_restore_reply { int32_t id; };
struct qemud_packet_domain_dump_xml_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_domain_dump_xml_reply { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_list_defined_domains_reply { uint32_t numDomains; char domains[QEMUD_MAX_DOMAINS_NAME_BUF]; };
struct qemud_packet_num_defined_domains_reply { uint32_t numDomains; };
struct qemud_packet_domain_start_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_domain_start_reply { int32_t id; };
struct qemud_packet_domain_define_request { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_domain_define_reply { unsigned char uuid[QEMUD_UUID_RAW_LEN]; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_domain_undefine_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_num_networks_reply { uint32_t numNetworks; };
struct qemud_packet_list_networks_reply { uint32_t numNetworks; char networks[QEMUD_MAX_NETWORKS_NAME_BUF]; };
struct qemud_packet_num_defined_networks_reply { uint32_t numNetworks; };
struct qemud_packet_list_defined_networks_reply { uint32_t numNetworks; char networks[QEMUD_MAX_NETWORKS_NAME_BUF]; };
struct qemud_packet_network_lookup_by_name_request { char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_network_lookup_by_name_reply { int32_t id; unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_lookup_by_uuid_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_lookup_by_uuid_reply { int32_t id; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_network_create_request { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_network_create_reply { unsigned char uuid[QEMUD_UUID_RAW_LEN]; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_network_define_request { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_network_define_reply { unsigned char uuid[QEMUD_UUID_RAW_LEN]; char name[QEMUD_MAX_NAME_LEN]; };
struct qemud_packet_network_undefine_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_start_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_destroy_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_dump_xml_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_dump_xml_reply { char xml[QEMUD_MAX_XML_LEN]; };
struct qemud_packet_network_get_bridge_name_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_get_bridge_name_reply { char ifname[QEMUD_MAX_IFNAME_LEN]; };
struct qemud_packet_domain_get_autostart_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_domain_get_autostart_reply { uint32_t autostart; };
struct qemud_packet_domain_set_autostart_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; uint32_t autostart; };
struct qemud_packet_network_get_autostart_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; };
struct qemud_packet_network_get_autostart_reply { uint32_t autostart; };
struct qemud_packet_network_set_autostart_request { unsigned char uuid[QEMUD_UUID_RAW_LEN]; uint32_t autostart; };

union qemud_packet_client_data switch (qemud_packet_client_data_type type) {
case QEMUD_CLIENT_PKT_GET_VERSION:
    void;
case QEMUD_CLIENT_PKT_GET_NODEINFO:
    void;
case QEMUD_CLIENT_PKT_LIST_DOMAINS:
    void;
case QEMUD_CLIENT_PKT_NUM_DOMAINS:
    void;
case QEMUD_CLIENT_PKT_DOMAIN_CREATE:
    qemud_packet_domain_create_request domainCreateRequest;
case QEMUD_CLIENT_PKT_DOMAIN_LOOKUP_BY_ID:
    qemud_packet_domain_lookup_by_id_request domainLookupByIDRequest;
case QEMUD_CLIENT_PKT_DOMAIN_LOOKUP_BY_UUID:
    qemud_packet_domain_lookup_by_uuid_request domainLookupByUUIDRequest;
case QEMUD_CLIENT_PKT_DOMAIN_LOOKUP_BY_NAME:
    qemud_packet_domain_lookup_by_name_request domainLookupByNameRequest;
case QEMUD_CLIENT_PKT_DOMAIN_SUSPEND:
    qemud_packet_domain_suspend_request domainSuspendRequest;
case QEMUD_CLIENT_PKT_DOMAIN_RESUME:
    qemud_packet_domain_resume_request domainResumeRequest;
case
QEMUD_CLIENT_PKT_DOMAIN_DESTROY: qemud_packet_domain_destroy_request domainDestroyRequest; case QEMUD_CLIENT_PKT_DOMAIN_GET_INFO: qemud_packet_domain_get_info_request domainGetInfoRequest; case QEMUD_CLIENT_PKT_DOMAIN_SAVE: qemud_packet_domain_save_request domainSaveRequest; case QEMUD_CLIENT_PKT_DOMAIN_RESTORE: qemud_packet_domain_restore_request domainRestoreRequest; case QEMUD_CLIENT_PKT_DUMP_XML: qemud_packet_domain_dump_xml_request domainDumpXMLRequest; case QEMUD_CLIENT_PKT_LIST_DEFINED_DOMAINS: void; case QEMUD_CLIENT_PKT_NUM_DEFINED_DOMAINS: void; case QEMUD_CLIENT_PKT_DOMAIN_START: qemud_packet_domain_start_request domainStartRequest; case QEMUD_CLIENT_PKT_DOMAIN_DEFINE: qemud_packet_domain_define_request domainDefineRequest; case QEMUD_CLIENT_PKT_DOMAIN_UNDEFINE: qemud_packet_domain_undefine_request domainUndefineRequest; case QEMUD_CLIENT_PKT_NUM_NETWORKS: void; case QEMUD_CLIENT_PKT_LIST_NETWORKS: void; case QEMUD_CLIENT_PKT_NUM_DEFINED_NETWORKS: void; case QEMUD_CLIENT_PKT_LIST_DEFINED_NETWORKS: void; case QEMUD_CLIENT_PKT_NETWORK_LOOKUP_BY_UUID: qemud_packet_network_lookup_by_uuid_request networkLookupByUUIDRequest; case QEMUD_CLIENT_PKT_NETWORK_LOOKUP_BY_NAME: qemud_packet_network_lookup_by_name_request networkLookupByNameRequest; case QEMUD_CLIENT_PKT_NETWORK_CREATE: qemud_packet_network_create_request networkCreateRequest; case QEMUD_CLIENT_PKT_NETWORK_DEFINE: qemud_packet_network_define_request networkDefineRequest; case QEMUD_CLIENT_PKT_NETWORK_UNDEFINE: qemud_packet_network_undefine_request networkUndefineRequest; case QEMUD_CLIENT_PKT_NETWORK_START: qemud_packet_network_start_request networkStartRequest; case QEMUD_CLIENT_PKT_NETWORK_DESTROY: qemud_packet_network_destroy_request networkDestroyRequest; case QEMUD_CLIENT_PKT_NETWORK_DUMP_XML: qemud_packet_network_dump_xml_request networkDumpXMLRequest; case QEMUD_CLIENT_PKT_NETWORK_GET_BRIDGE_NAME: qemud_packet_network_get_bridge_name_request networkGetBridgeNameRequest; case 
QEMUD_CLIENT_PKT_DOMAIN_GET_AUTOSTART: qemud_packet_domain_get_autostart_request domainGetAutostartRequest; case QEMUD_CLIENT_PKT_DOMAIN_SET_AUTOSTART: qemud_packet_domain_set_autostart_request domainSetAutostartRequest; case QEMUD_CLIENT_PKT_NETWORK_GET_AUTOSTART: qemud_packet_network_get_autostart_request networkGetAutostartRequest; case QEMUD_CLIENT_PKT_NETWORK_SET_AUTOSTART: qemud_packet_network_set_autostart_request networkSetAutostartRequest; case QEMUD_CLIENT_PKT_GET_CAPABILITIES: void; }; union qemud_packet_server_data switch (qemud_packet_server_data_type type) { case QEMUD_SERVER_PKT_FAILURE: qemud_packet_failure_reply failureReply; case QEMUD_SERVER_PKT_GET_VERSION: qemud_packet_get_version_reply getVersionReply; case QEMUD_SERVER_PKT_GET_NODEINFO: qemud_packet_get_node_info_reply getNodeInfoReply; case QEMUD_SERVER_PKT_LIST_DOMAINS: qemud_packet_list_domains_reply listDomainsReply; case QEMUD_SERVER_PKT_NUM_DOMAINS: qemud_packet_num_domains_reply numDomainsReply; case QEMUD_SERVER_PKT_DOMAIN_CREATE: qemud_packet_domain_create_reply domainCreateReply; case QEMUD_SERVER_PKT_DOMAIN_LOOKUP_BY_ID: qemud_packet_domain_lookup_by_id_reply domainLookupByIDReply; case QEMUD_SERVER_PKT_DOMAIN_LOOKUP_BY_UUID: qemud_packet_domain_lookup_by_uuid_reply domainLookupByUUIDReply; case QEMUD_SERVER_PKT_DOMAIN_LOOKUP_BY_NAME: qemud_packet_domain_lookup_by_name_reply domainLookupByNameReply; case QEMUD_SERVER_PKT_DOMAIN_SUSPEND: void; case QEMUD_SERVER_PKT_DOMAIN_RESUME: void; case QEMUD_SERVER_PKT_DOMAIN_DESTROY: void; case QEMUD_SERVER_PKT_DOMAIN_GET_INFO: qemud_packet_domain_get_info_reply domainGetInfoReply; case QEMUD_SERVER_PKT_DOMAIN_SAVE: void; case QEMUD_SERVER_PKT_DOMAIN_RESTORE: qemud_packet_domain_restore_reply domainRestoreReply; case QEMUD_SERVER_PKT_DUMP_XML: qemud_packet_domain_dump_xml_reply domainDumpXMLReply; case QEMUD_SERVER_PKT_LIST_DEFINED_DOMAINS: qemud_packet_list_defined_domains_reply listDefinedDomainsReply; case 
QEMUD_SERVER_PKT_NUM_DEFINED_DOMAINS: qemud_packet_num_defined_domains_reply numDefinedDomainsReply; case QEMUD_SERVER_PKT_DOMAIN_START: qemud_packet_domain_start_reply domainStartReply; case QEMUD_SERVER_PKT_DOMAIN_DEFINE: qemud_packet_domain_define_reply domainDefineReply; case QEMUD_SERVER_PKT_DOMAIN_UNDEFINE: void; case QEMUD_SERVER_PKT_NUM_NETWORKS: qemud_packet_num_networks_reply numNetworksReply; case QEMUD_SERVER_PKT_LIST_NETWORKS: qemud_packet_list_networks_reply listNetworksReply; case QEMUD_SERVER_PKT_NUM_DEFINED_NETWORKS: qemud_packet_num_defined_networks_reply numDefinedNetworksReply; case QEMUD_SERVER_PKT_LIST_DEFINED_NETWORKS: qemud_packet_list_defined_networks_reply listDefinedNetworksReply; case QEMUD_SERVER_PKT_NETWORK_LOOKUP_BY_UUID: qemud_packet_network_lookup_by_uuid_reply networkLookupByUUIDReply; case QEMUD_SERVER_PKT_NETWORK_LOOKUP_BY_NAME: qemud_packet_network_lookup_by_name_reply networkLookupByNameReply; case QEMUD_SERVER_PKT_NETWORK_CREATE: qemud_packet_network_create_reply networkCreateReply; case QEMUD_SERVER_PKT_NETWORK_DEFINE: qemud_packet_network_define_reply networkDefineReply; case QEMUD_SERVER_PKT_NETWORK_UNDEFINE: void; case QEMUD_SERVER_PKT_NETWORK_START: void; case QEMUD_SERVER_PKT_NETWORK_DESTROY: void; case QEMUD_SERVER_PKT_NETWORK_DUMP_XML: qemud_packet_network_dump_xml_reply networkDumpXMLReply; case QEMUD_SERVER_PKT_NETWORK_GET_BRIDGE_NAME: qemud_packet_network_get_bridge_name_reply networkGetBridgeNameReply; case QEMUD_SERVER_PKT_DOMAIN_GET_AUTOSTART: qemud_packet_domain_get_autostart_reply domainGetAutostartReply; case QEMUD_SERVER_PKT_DOMAIN_SET_AUTOSTART: void; case QEMUD_SERVER_PKT_NETWORK_GET_AUTOSTART: qemud_packet_network_get_autostart_reply networkGetAutostartReply; case QEMUD_SERVER_PKT_NETWORK_SET_AUTOSTART: void; case QEMUD_SERVER_PKT_GET_CAPABILITIES: qemud_packet_get_capabilities_reply getCapabilitiesReply; }; struct qemud_packet_client { uint32_t serial; struct qemud_packet_client_data data; }; struct 
qemud_packet_server { uint32_t serial; uint32_t inReplyTo; struct qemud_packet_server_data data; }; /* The first two words in the messages are length and program number * (previously called "magic"). This makes the protocol compatible * with the remote protocol, although beyond the first two words * the protocols are completely different. * * Note the length is the total number of bytes in the message * _including_ the length and program number. */ const QEMUD_PROGRAM = 0x20001A64; const QEMUD_PKT_HEADER_XDR_LEN = 8; struct qemud_packet_header { uint32_t length; uint32_t prog; };

On Sat, May 05, 2007 at 12:45:33PM +0100, Richard W.M. Jones wrote:
6 Server-side QEMUD modifications ---------------------------------
A qemud/protocol.x A qemud/protocol.c M qemud/protocol.h M qemud/Makefile.am M qemud/conf.c M qemud/dispatch.c M qemud/dispatch.h M qemud/internal.h M qemud/qemud.c M qemud/uuid.c
Again, this is Dan's patch to make QEMUD use the XDR protocol:
http://www.redhat.com/archives/libvir-list/2007-March/msg00333.html
Minor typo in qemudClientRead of qemud/qemud.c:

+    if (!client->tls) {
+        if ((ret = read (client->fd, data, len)) == -1) {
+            if (ret == 0 || errno != EAGAIN) {
+                if (ret != 0)
+                    qemudLog (QEMUD_ERR, "read: %s", strerror (errno));
+                qemudDispatchClientFailure(server, client);
+            }
+            return -1;
+        }

The test against the return value of read needs to be <= 0 instead of == -1, otherwise we spin on EOF.

Dan.

--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

8 Build system --------------
M .cvsignore M configure.in A libvirtd.conf M src/Makefile.am A RENAMES
These are the changes to make it build. libvirtd.conf is a suggested configuration file; I'm not really sure if we should include it. RENAMES is my collection of file renames that we might want to carry out to make the naming scheme of things like drivers in the libvirt source more consistent (particularly after xen-unified went in).

Rich.

# Example /etc/libvirt/libvirtd.conf file.
tls_no_verify_certificate = 1

# Suggested list of file renames.
#
# File renames don't normally go into patches because they make
# the patches much harder to read, so list them here instead.
#
# $Id$

# Clearer naming scheme after Xen-unified patch went in.
src/xen_internal.c    src/xen_internal_hv.c
src/xen_internal.h    src/xen_internal_hv.h
src/proxy_internal.c  src/xen_internal_proxy.c
src/proxy_internal.h  src/xen_internal_proxy.h
src/xend_internal.c   src/xen_internal_xend.c
src/xend_internal.h   src/xen_internal_xend.h
src/xm_internal.c     src/xen_internal_inactive.c
src/xm_internal.h     src/xen_internal_inactive.h
src/xs_internal.c     src/xen_internal_xenstore.c
src/xs_internal.h     src/xen_internal_xenstore.h
src/xen_unified.c     src/xen_internal.c
src/xen_unified.h     src/xen_internal.h

# Test driver should really be called test_internal.
src/test.c  src/test_internal.c
src/test.h  src/test_internal.h

# This would be better:
src/*_internal*.c  src/*_driver*.c
src/*_internal*.h  src/*_driver*.h

# Qemud is now the qemud + remote driver.
qemud/protocol.x  qemud/qemud_protocol.x
qemud/*           remote/*

7 Documentation updates -----------------------
M docs/libvir.html M docs/remote.html
Need to write those two extra sections - not going to be today though... Rich.

Richard W.M. Jones wrote:
7 Documentation updates -----------------------
M docs/libvir.html M docs/remote.html
I've just pushed a documentation update which contains sections on how to generate and install TLS certificates and how to create a libvirtd.conf configuration file. The updated docs will appear within the next hour, here: http://libvirt.org/remote.html#Remote_certificates Rich.

On Wed, May 02, 2007 at 07:04:44PM +0100, Richard W.M. Jones wrote:
Below is the plan for delivering the remote patch for review in stages. More details in each email. I only expect to get through the first two emails today.
I've been testing this out for real today:

- IPv6 works correctly.
- Once I generated the client & server certs the TLS stuff was working pretty much without issue, though we could do with printing out something clearer when the user typos a cert/key path name, as the current output is a little obscure.
- I've been testing with the QEMU driver and hit a few problems with the fact that qemu_internal.c will grab URIs containing hostnames in the virConnectOpen call, so qemu was getting the connection before the remote driver had a chance. It should be simple to patch qemu_internal to ignore URIs with a hostname set.

This last point was in turn killing virt-manager; with a hack workaround virt-manager more or less works, with the obvious exception of places where virt-manager uses local state outside the libvirt APIs, but that's another story :-)

Dan.

On Fri, May 11, 2007 at 11:25:00PM +0100, Daniel P. Berrange wrote:
On Wed, May 02, 2007 at 07:04:44PM +0100, Richard W.M. Jones wrote:
Below is the plan for delivering the remote patch for review in stages. More details in each email. I only expect to get through the first two emails today.
I've been testing this out for real today
- IPv6 works correctly.
- Once I generated the client & server certs the TLS stuff was working pretty much without issue, though we could do with printing out something clearer when the user typos a cert/key path name, as the current output is a little obscure.
- I've been testing with the QEMU driver and hit a few problems with the fact that qemu_internal.c will grab URIs containing hostnames in the virConnectOpen call, so qemu was getting the connection before the remote driver had a chance. It should be simple to patch qemu_internal to ignore URIs with a hostname set.
This last point was in turn killing virt-manager; with a hack workaround virt-manager more or less works, with the obvious exception of places where virt-manager uses local state outside the libvirt APIs, but that's another story :-)
I've just spent /hours/ trying to work out why

    c = libvirt.openReadOnly(uri)
    for i in range(100000):
        d = c.listDefinedDomains()
        for dom in d:
            f = c.lookupByName(dom)

was at least 100x slower with qemu+tcp://localhost/system than with qemu:///system. Neither the server nor the client showed *any* CPU time. The box was completely idle. No sleep states anywhere in the code either. After much confusion I finally realized that we were being hit by Nagle. Since each RPC operation requires < 100 bytes to be read & written, every write was being queued up for some tens of ms before being sent out. Pointless, since the RPC ops are synchronous, so we'd never fill a big enough packet to cause the data to be sent before the Nagle timeout. When I do

    int no_slow_start = 1;
    setsockopt(sock, IPPROTO_TCP, TCP_NODELAY,
               (void *)&no_slow_start, sizeof(no_slow_start));

on every socket at both the client & server ends, performance using qemu+tcp://localhost/system is basically identical to qemu:///system. Now, going across the LAN/WAN the delay caused by Nagle will be a smaller proportion of the RPC call time, due to the extra round-trip time on the real network. It is still wasteful to leave it enabled though, because it's inserting arbitrary delays, and due to the sync call-reply nature of our RPC it'll never get enough data to fill a packet. So I say disable Nagle all the time.

Dan.

On Sat, May 12, 2007 at 12:39:57AM +0100, Daniel P. Berrange wrote:
On every socket at both the client & server ends, performance using qemu+tcp://localhost/system was then basically identical to qemu:///system. Now, going across the LAN/WAN the delay caused by Nagle will be a smaller proportion of the RPC call time, due to the extra round-trip time on the real network. It is still wasteful to leave it enabled though, because it's inserting arbitrary delays, and due to the sync call-reply nature of our RPC it'll never get enough data to fill a packet. So I say disable Nagle all the time.
+1 For this kind of usage Nagle just can't provide benefits to anyone. Daniel -- Red Hat Virtualization group http://redhat.com/virtualization/ Daniel Veillard | virtualization library http://libvirt.org/ veillard@redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/ http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/

Daniel P. Berrange wrote:
On Wed, May 02, 2007 at 07:04:44PM +0100, Richard W.M. Jones wrote:
Below is the plan for delivering the remote patch for review in stages. More details in each email. I only expect to get through the first two emails today.
I've been testing this out for real today
- IPv6 works correctly.
- Once I generated the client & server certs the TLS stuff was working pretty much without issue, though we could do with printing out something clearer when the user typos a cert/key path name, as the current output is a little obscure.
Yes, I've fixed this in the latest version so it prints out meaningful & helpful errors.
- I've been testing with the QEMU driver and hit a few problems with the fact that qemu_internal.c will grab URIs containing hostnames in the virConnectOpen call, so qemu was getting the connection before the remote driver had a chance. It should be simple to patch qemu_internal to ignore URIs with a hostname set.
I don't know why this is necessary, since the remote driver is always supposed to be called first. But anyway I added:

     if (!uri->scheme ||
         strcmp(uri->scheme, "qemu") ||
+        uri->server ||  /* remote driver should handle these */
         !uri->path) {
         xmlFreeURI(uri);
         return VIR_DRV_OPEN_DECLINED;
     }

Rich.

On Mon, May 14, 2007 at 09:45:58AM +0100, Richard W.M. Jones wrote:
Daniel P. Berrange wrote:
On Wed, May 02, 2007 at 07:04:44PM +0100, Richard W.M. Jones wrote:
Below is the plan for delivering the remote patch for review in stages. More details in each email. I only expect to get through the first two emails today.
I've been testing this out for real today
- IPv6 works correctly.
- Once I generated the client & server certs the TLS stuff was working pretty much without issue, though we could do with printing out something clearer when the user typos a cert/key path name, as the current output is a little obscure.
Yes, I've fixed this in the latest version so it prints out meaningful & helpful errors.
- I've been testing with the QEMU driver and hit a few problems with the fact that qemu_internal.c will grab URIs containing hostnames in the virConnectOpen call, so qemu was getting the connection before the remote driver had a chance. It should be simple to patch qemu_internal to ignore URIs with a hostname set.
I don't know why this is necessary, since the remote driver is always supposed to be called first. But anyway I added:
Ahhhhh. That would be my dodgy merging of patches. The patch which registered the remote driver didn't apply to my checkout, so I just put in a manual call to registerRemote(), but had it after all the other drivers.

Dan.