On Fri, Mar 19, 2010 at 12:45:28PM +0100, Jiri Denemark wrote:
> > +static const vshCmdOptDef opts_migrate_setmaxdowntime[] = {
> > +    {"domain", VSH_OT_DATA, VSH_OFLAG_REQ, N_("domain name, id or uuid")},
> > +    {"downtime", VSH_OT_DATA, VSH_OFLAG_REQ, N_("maximum tolerable downtime (in nanoseconds) for migration")},
> > +    {NULL, 0, 0, NULL}
> > +};
>
> Maybe for virsh command line we could use milliseconds, so that
> a "setmaxdowntime foo 1" can still have a chance to actually work,
> instead of blocking forever.
...
> On one hand, from a usability POV it looks more reasonable to use
> millisecs here; on the other hand, each time we make the virsh CLI and
> libvirt API diverge in some way it leads to confusion.
>
> So I'm still undecided :-)
Or we can do what Daniel suggested and use milliseconds in the API as well.
Okay, let's use milliseconds all the way through!
Daniel
--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel(a)veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/