On 2013/11/20 5:49, John Ferlan wrote:
On 11/18/2013 09:59 AM, Boris Fiuczynski wrote:
> John and Xu Wang,
> here are a few general observations from my side:
First off - I tend to find myself agreeing with Boris here. I think the
concept is important and necessary; however, I'm not convinced the
implementation goes far enough.
> 1) I agree that it makes sense to preserve the unknown xml "entities",
> even though it can create complex scenarios and even new kinds of errors
> if unknown entities depend on known entities which get modified, making
> them unusable for the unknown entities. This error would probably be the
> reversal of the current problem, where unknown entities simply
> disappear.
Is there a more concrete example of "new kinds of errors if unknown
entities depend on known entities which get modified making them
unusable for the unknown entities" that can be given? Just for clarity.
I've read that line a few times and I'm still not sure :-)
> 2) The implementation of putting hierarchical data (xml elements and
> properties) into a single "flat" list and storing the hierarchical
> dependencies as strings worries me. Wouldn't it make things easier to
> create the hierarchy in the others structure (it would also remove the
> need for the ids, I guess)?
The primary reason why I went through the trouble of adjusting the
commit was to learn a bit more about the code, but I also was trying to
figure out a way to split up the submit into more manageable chunks...
While going through the changes I found myself wondering what purpose
the parent values were serving and why we kept passing -1 for ID values.
When I went to integrate the "device address" changes into this new
model I found myself beginning to think that perhaps the model used
there to grab the property key/value pairs from the node generically was
more suited for what you were trying to do.
Since what you're trying to protect against is someone changing/extending
the XML "node" to add more "properties" which are then not restored when
libvirt-cim goes to write out the XML, why not take the next step
and just handle the whole hierarchy?
> The others structure is stored on the internal data object that
> represents a defined known xml entity. This defined known xml entity can
> itself be the parent of unknown properties (xml properties) and/or the
> parent of unknown sub-entities (xml elements), where all these entities
> at the end of the "data parsing" would be flagged ACTIVE and the unknown
> ones INACTIVE.
>
> /* This could actually be just other_node */
> struct others {
>         struct other_node *known_node; /* root representing the
>                                           defined known xml element */
> };
>
> struct other_node {
>         char *name;
>         char *value;
>         struct other_prop *properties; /* list of the node's properties */
>         struct other_node *children;   /* list with the direct sets of
>                                           children data */
>         enum {
>                 ACTIVE,
>                 INACTIVE
>         } status;
> };
>
> struct other_prop {
>         char *name;
>         char *value;
>         struct other_prop *properties;
>         enum {
>                 ACTIVE,
>                 INACTIVE
>         } status;
> };
>
> Probably the above data structures could be streamlined even more when
> one starts to use these structures writing the code. The structures are
> very close to the xml and recursive methods could be used for writing
> the xml out.
See the "device_address" structure and the subsequent processing by
parse_device_address() and add_device_address_property() for a more
opaque view of how to process the xml.
If you take the concept and apply it to, say, "<domain>" - I think you'll
be able to create one structure for dominfo that contains one instance
of the read XML. Then, using the active/inactive tagging, you should be
able to tell what's been read/used by the libvirt-cim code. I'd even go
as far as to say why not add a "CHANGED" status, but I'm not as clear on
the lifecycle with respect to how the code that ends up writing things
out works. My one concern with a single instance would be what happens
if it changes without libvirt-cim's knowledge? Not sure if that's
possible or supported.
I think the way the code is written now there's perhaps more than one
traversal of the tree, although without seeing it "in action" I cannot
be sure.
Dear John and Boris,
I am considering the pros and cons of Boris's suggestion. If the parsing
process goes like this:
1. Traverse the whole XML and save all data into the data structure
Boris describes above;
2. The ***_parse() functions in device_parsing.c fetch data from the
structure generated in step 1;
3. All kinds of resource operations.
(Here I want to enhance the power of libvirt-cim after these patches
are merged. Some new name/value pairs could be passed in from the API
and management could be done with them. That next step may be more
complex, and I have started to think about it.)
4. After the operations, ***_xml() restores all the data in the
virt_devices fetched from the data structure;
5. Based on the restored data structure, the new XML is regenerated.
I think the code could be smaller and easier to maintain.
Do you have any suggestions, or do you think the original approach is
better? I'll start updating all the patches once we have a better
plan :-)
Other points:
1. Macros are a good idea. They could make the code clearer.
2. I am not a fan of 'others' either. It's just a 'historical issue'
from my early design.
Thanks,
Xu Wang
> References to the other_node and other_prop structures could be made
> available for writing and reading data from them via methods.
> e.g. writing back the console (just a snippet from xmlgen.c)
>
> -        console = xmlNewChild(root, NULL, BAD_CAST "console", NULL);
> -        if (console == NULL)
> +        other_node *console = NULL;
> +        console = add_node_to_others(cdev->others->known_node,
> +                                     "console",
> +                                     NULL, /* text value of the xml element */
> +                                     "devices");
> +
> +        if (console == NULL) {
> +                CU_DEBUG("Add tag <console> failed.");
>                  return XML_ERROR;
> +        }
>
> -        xmlNewProp(console, BAD_CAST "type",
> -                   BAD_CAST chardev_source_type_IDToStr(cdev->source_type));
> +        other_prop *cprop = NULL;
> +        cprop = add_prop_to_others(console,
> +                                   "type",
> +                                   chardev_source_type_IDToStr(
> +                                           cdev->source_type));
>
> As you can see, it is much closer to the known plumbing as it used to be
> directly with xml, and it also hides the list internals (names & ids)
> used for referencing.
>
> I know that this would cause a rather large rework but I think that the
> usability would be much enhanced for everyone writing/fixing the
> provider code and overall would improve code stability in the long run.
> Please use this as an idea for improvement.
I think that as part of the rewrite, creating macros to replace commonly
cut-n-pasted code is a must. Currently there are numerous calls like:
+ ddev->others = parse_data_to_others(ddev->others,
+ dnode,
+ 0,
+ BAD_CAST "devices");
+ if (ddev->others == NULL) {
+ CU_DEBUG("parse xml failed.");
+ goto err;
+ }
+
or
+ if (seek_in_others(&ddev->others,
+ -1,
+ "source",
+ TYPE_NODE,
+ -1,
+ (char *)dnode->name)) {
+ ddev->source = fetch_from_others(&ddev->others,
+ -1,
+ "dir",
+ TYPE_PROP,
+ -1,
+ "source");
+
+
+ if (ddev->source == NULL) {
+ CU_DEBUG("no source dir");
+ goto err;
+ }
+ }
Macros could hide and handle the error processing:
GET_BASE_NODE(ddev, dnode, "devices", err);
or
ddev->source = GET_NODE_NODE(ddev, dnode, "source", err);
if (ddev->source) {
ddev->dir = GET_NODE_PROP(ddev->source, "dir", err);
ddev->startupPolicy = GET_NODE_PROP(ddev, "startupPolicy", err);
GET_NODE_NODE(ddev, ddev->source, "address", err);
if (ddev->address) {
}
}
The various macros would handle the CU_DEBUG (and any other error)
processing. Similarly there would be PUT_BASE_NODE, PUT_NODE_NODE,
PUT_NODE_PROP.
The macros aren't completely thought out, but I would hope you see their
value...
> Also note: The above is just email written code for reference and not
> thought to be bug free. :-)
Hah - bug free code... If it were all bug free we'd be out of a job :-)
I think we need to come up with an agreement on the architecture first
(currently patches 1-3 and then what was the original 21/48).
Those should be submitted first without any other patches and without
necessarily being called except perhaps through xml_parse_test which
should be used to prove that the "new" parsing and generation does no
worse than the old (a 5th patch perhaps).
Once those patches are in, then submit the patches that read/write
domain level data... Then submit the patches that read/write domain/mem
data... then domain/vcpu, etc. Also, rather than separate the xml
parsing & generating, keep it together for each type.
Doing things in smaller bunches will be easier to review and less prone
to "other" changes causing issues.
As an aside, I'm personally not a fan of the name 'others', but I don't
have a better suggestion :-)
I do ask that you make the testing component of this a priority. Being
able to use xml_parse_test to validate is key. You may also want to add
code to it in order to print out elements that are "inactive" - e.g. all
the unknown stuff. Having visibility into what is not processed would be
a good thing.
John
> Mit freundlichen Grüßen/Kind regards
> Boris Fiuczynski
>
> IBM Deutschland Research & Development GmbH
> Vorsitzender des Aufsichtsrats: Martina Köderitz
> Geschäftsführung: Dirk Wittkopp
> Sitz der Gesellschaft: Böblingen
> Registergericht: Amtsgericht Stuttgart, HRB 243294
>
> On 11/15/2013 01:23 AM, John Ferlan wrote: