
John and Xu,

On 11/19/2013 10:49 PM, John Ferlan wrote:
On 11/18/2013 09:59 AM, Boris Fiuczynski wrote:
John and Xu Wang, here are a few general observations from my side:
First off - I tend to find myself agreeing with Boris here. I think the concept is important and necessary; however, I'm not convinced the implementation goes far enough.
1) I agree that it makes sense to preserve the unknown xml "entities", even though it can create complex scenarios and even new kinds of errors if unknown entities depend on known entities which get modified, making them unusable for the unknown entities. This error would probably be the reverse of the current problem, where unknown entities simply disappear.
Is there a more concrete example of "new kinds of errors if unknown entities depend on known entities which get modified making them unusable for the unknown entities" that can be given? Just for clarity. I've read that line a few times and I'm still not sure :-)

OK, let's take a look at the disk device type. In 1.0.2 the property sgio was added; let's assume this is the unknown entity. sgio is only valid when the device property is "lun". If one now changes the device property to "disk", then the preserved unknown entity sgio would cause an error when the XML is written back.
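To make this concrete, a made-up domain XML fragment (device paths are only for illustration) could look like this:

  <disk type='block' device='lun' sgio='unfiltered'>
    <source dev='/dev/sda'/>
    <target dev='sda' bus='scsi'/>
  </disk>

If libvirt-cim kept sgio as an unknown entity but the known device property were changed from "lun" to "disk", the regenerated XML would combine sgio with device="disk" and libvirt would reject it.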
2) The implementation of putting hierarchical data (xml elements and properties) into a single "flat" list and storing the hierarchical dependencies with strings worries me. Wouldn't it be easier to recreate the hierarchy in the others structure (that would also remove the need for the ids, I guess)?
The primary reason why I went through the trouble of adjusting the commit was to learn a bit more about the code, but I was also trying to figure out a way to split the submission up into more manageable chunks... While going through the changes I found myself wondering what purpose the parent values were serving and why we kept passing -1 for ID values.
When I went to integrate the "device address" changes into this new model I found myself beginning to think that perhaps the model used there to grab the property key/value pairs from the node generically was more suited for what you were trying to do.
Since what you're trying to protect against is someone changing/extending the XML "node" to add more "properties" which are then not restored when libvirt-cim goes to write out the XML, why not take the next step and just handle the whole hierarchy?
The others structure is stored on the internal data object that represents a defined known xml entity. This defined known xml entity can itself be the parent of unknown properties (xml properties) and/or unknown sub entities (xml elements); at the end of the "data parsing" the known entities would be flagged ACTIVE and the unknown ones INACTIVE.
/* This could actually be just other_node */
struct others {
    struct other_node *known_node; /* this is the root representing the defined known xml element */
};

enum other_status { ACTIVE, INACTIVE };

struct other_node {
    char *name;
    char *value;
    struct other_prop *properties;  /* list of the node's properties */
    struct other_node *children;    /* list with the direct sets of children data */
    enum other_status status;
};

struct other_prop {
    char *name;
    char *value;
    struct other_prop *properties;
    enum other_status status;
};
Probably the above data structures could be streamlined even more once one starts writing code that uses them. The structures are very close to the xml, and recursive methods could be used for writing the xml back out.
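Just to illustrate the recursion (again email-written code, not tested, and assuming the structures gain a next pointer so that siblings of nodes and properties form singly linked lists):

#include <libxml/tree.h>

static void write_others(xmlNodePtr parent, const struct other_node *node)
{
        const struct other_prop *prop;

        for ( ; node != NULL; node = node->next) {
                /* Create the element itself, optionally with text content */
                xmlNodePtr cur = xmlNewChild(parent, NULL,
                                             BAD_CAST node->name,
                                             BAD_CAST node->value);

                /* Write out all properties, known (ACTIVE) and unknown
                   (INACTIVE) alike, so nothing gets lost */
                for (prop = node->properties; prop != NULL; prop = prop->next)
                        xmlNewProp(cur, BAD_CAST prop->name,
                                   BAD_CAST prop->value);

                /* Recurse into the list of child elements */
                write_others(cur, node->children);
        }
}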
See the "device_address" structure and subsequent processing by parse_device_address() and add_device_address_property() for more opaque view on how to process the xml.
If you take the concept and apply it to, say, "<domain>" - I think you'll be able to create one structure for dominfo that contains one instance of the read XML. Then using the active/inactive tagging you should be able to tell what's been read/used by the libvirt-cim code. I'd even go as far as to say why not add a "CHANGED" status, but I'm not as clear on the lifecycle with respect to how the code that ends up writing things out works. My one concern with a single instance would be what happens if it changes without libvirt-cim's knowledge? Not sure it's possible or supported.
The idea to use one tree is fine with me. It would actually mean that the XML data would end up in an XML-like libvirt-cim data structure, which would be easier to convert back into the required xml document. If I understand you correctly, John, you would also like to keep the existing internal libvirt-cim data structures as they are today. If we don't keep these data structures, it would cause code changes in all rasd read and write methods; not nice, and more headaches. In addition I would like to suggest that, for easier access into the new tree structure, a reference to the corresponding tree node element should be stored as "other" (maybe rename it to node_ref) in the existing libvirt-cim data structure. A question comes to my mind about the usage of one tree: does it create a problem for the helper methods (seek..., fetch..., ...) on instances that are not leaves (e.g. domain)? I guess they would now need to know the hierarchy depth at which to stop. That is something Xu is best placed to answer.
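Purely as an illustration of the node_ref idea (field names are approximate, not the actual libvirt-cim definition):

struct disk_device {
        char *type;
        char *device;
        char *source;
        /* ... the existing members stay as they are today ... */
        struct other_node *node_ref; /* reference into the new tree structure */
};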
I think the way the code is written now there's perhaps more than one traversal of the tree, although without seeing it "in action" I cannot be sure.
References to the other_node and other_prop structures could be made available for writing and reading data via methods, e.g. writing back the console (just a snippet from xmlgen.c):
-        console = xmlNewChild(root, NULL, BAD_CAST "console", NULL);
-        if (console == NULL)
+        other_node *console = NULL;
+        console = add_node_to_others(cdev->others->known_node,
+                                     "console",
+                                     NULL, /* text value of the xml element */
+                                     "devices");
+
+        if (console == NULL) {
+                CU_DEBUG("Add tag <console> failed.");
                 return XML_ERROR;
+        }
-        xmlNewProp(console, BAD_CAST "type",
-                   BAD_CAST
-                   chardev_source_type_IDToStr(cdev->source_type));
+        other_prop *cprop = NULL;
+        cprop = add_prop_to_others(console,
+                                   "type",
+                                   chardev_source_type_IDToStr(
+                                           cdev->source_type));
As you can see it is much closer to the familiar plumbing that used to work directly with xml, and it also hides the list internals (names & ids) used for referencing.
I know that this would cause a rather large rework but I think that the usability would be much enhanced for everyone writing/fixing the provider code and overall would improve code stability in the long run. Please use this as an idea for improvement.
I think as part of the rewrite, creating macros to replace commonly cut-n-pasted code is a must. Currently there are numerous calls like:
+        ddev->others = parse_data_to_others(ddev->others,
+                                            dnode,
+                                            0,
+                                            BAD_CAST "devices");
+        if (ddev->others == NULL) {
+                CU_DEBUG("parse xml failed.");
+                goto err;
+        }
+
or
+        if (seek_in_others(&ddev->others,
+                           -1,
+                           "source",
+                           TYPE_NODE,
+                           -1,
+                           (char *)dnode->name)) {
+                ddev->source = fetch_from_others(&ddev->others,
+                                                 -1,
+                                                 "dir",
+                                                 TYPE_PROP,
+                                                 -1,
+                                                 "source");
+
+                if (ddev->source == NULL) {
+                        CU_DEBUG("no source dir");
+                        goto err;
+                }
+        }
There could be macros that hide and handle the error processing:
GET_BASE_NODE(ddev, dnode, "devices", err);
or
ddev->source = GET_NODE_NODE(ddev, dnode, "source", err);
if (ddev->source) {
        ddev->dir = GET_NODE_PROP(ddev->source, "dir", err);
        ddev->startupPolicy = GET_NODE_PROP(ddev, "startupPolicy", err);
        GET_NODE_NODE(ddev, ddev->source, "address", err);
        if (ddev->address) {
                /* ... */
        }
}
The various macros would handle the CU_DEBUG (and any other error) processing. Similarly there would be PUT_BASE_NODE, PUT_NODE_NODE, PUT_NODE_PROP.
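For illustration only (not thought through either), GET_BASE_NODE could expand to roughly the same code as the first snippet above:

#define GET_BASE_NODE(obj, node, parent, err_label)                       \
        do {                                                              \
                (obj)->others = parse_data_to_others((obj)->others,       \
                                                     (node),              \
                                                     0,                   \
                                                     BAD_CAST (parent));  \
                if ((obj)->others == NULL) {                              \
                        CU_DEBUG("parse xml failed.");                    \
                        goto err_label;                                   \
                }                                                         \
        } while (0)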
The macros aren't completely thought out, but I would hope you see their value...
I think it is a good idea, but I suggest waiting to see how complex the helper methods end up being. Instead of a macro it might also make more sense, regarding performance, to write a single method, e.g. for seek & fetch.
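Something along these lines, perhaps (just a sketch reusing the seek/fetch signatures from the earlier snippet):

static char *seek_and_fetch(struct others **others,
                            const char *node_name,
                            const char *prop_name,
                            const char *parent_name)
{
        /* Only fetch the property if the node actually exists */
        if (!seek_in_others(others, -1, (char *)node_name, TYPE_NODE,
                            -1, (char *)parent_name))
                return NULL;

        return fetch_from_others(others, -1, (char *)prop_name, TYPE_PROP,
                                 -1, (char *)node_name);
}

Usage would then shrink to something like:
        ddev->source = seek_and_fetch(&ddev->others, "source", "dir", (char *)dnode->name);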
Also note: the above is just email-written code for reference and not meant to be bug free. :-)
Hah - bug free code... If it were all bug free we'd be out of a job :-)
I think we need to come up with an agreement on the architecture first (currently patches 1-3 and then what was the original 21/48).
Those should be submitted first without any other patches and without necessarily being called, except perhaps through xml_parse_test, which should be used to prove that the "new" parsing and generation does no worse than the old (a 5th patch perhaps).
Good point! The xml_parse_test is a good validation tool for the first part.
Once those patches are in, then submit the patches that read/write domain level data... Then submit the patches that read/write domain/mem data... then domain/vcpu, etc. Also, rather than separating the xml parsing & generating, keep them together for each type.
This seems like a valid and doable approach. Do you agree as well, Xu?
Doing things in smaller bunches will be easier to review and less prone to "other" changes causing issues.
As an aside, I'm personally not a fan of the name 'others', but I don't have a better suggestion :-)
I do ask that you make the testing component of this a priority. Being able to use xml_parse_test to validate is key. You may also want to add code to it in order to print out elements that are "inactive" - e.g. all the unknown stuff. Having visibility into what is not processed would be a good thing.
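For example (same caveats as above: untested, assumes the sibling next pointers), a small recursive dump of everything the parser did not consume could look like this, plus <stdio.h>:

static void dump_inactive(const struct other_node *node, int depth)
{
        const struct other_prop *prop;

        for ( ; node != NULL; node = node->next) {
                if (node->status == INACTIVE)
                        printf("%*s<%s> not processed\n",
                               depth * 2, "", node->name);

                for (prop = node->properties; prop != NULL; prop = prop->next)
                        if (prop->status == INACTIVE)
                                printf("%*s%s='%s' not processed\n",
                                       (depth + 1) * 2, "",
                                       prop->name, prop->value);

                dump_inactive(node->children, depth + 1);
        }
}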
A smooth transition is key. Testing and comparison of old and new results seems crucial.
John
--
Mit freundlichen Grüßen/Kind regards
Boris Fiuczynski

IBM Deutschland Research & Development GmbH
Chair of the Supervisory Board: Martina Köderitz
Management: Dirk Wittkopp
Registered office: Böblingen
Commercial register: Amtsgericht Stuttgart, HRB 243294