On Monday, August 13, 2018 at 10:02 AM, Daniel P. Berrangé wrote:
> On Mon, Aug 13, 2018 at 02:32:17PM +0800, Shi Lei wrote:
> > Hi, everyone!
> > It's possible that the running time of a command is longer than its caller
> > expected. Perhaps it's useful to provide a timeout mode for virCommandRun or
> > virCommandRunAsync + virCommandWait, so the caller can restrict the
> > running time of the command if necessary. I introduce a new function
> > virCommandSetTimeout for that.
> >
> > virKModLoad will then get a default timeout of 3 seconds. I hope it
> > could solve the problem of the "strange delay" that John mentioned.
>
> I don't really see what this is hoping to solve. If something in libvirt
> is running virKModLoad, I cannot see how it will work better by not waiting
> for it to complete.
Thanks for your quick reply! I hope I can explain my idea more clearly. :)
virKModLoad is used to load external kernel modules, which are not under
libvirt's control. We cannot be sure they are always solid and robust
(especially the various PCI drivers). If a module's init process blocks
for some reason, the caller of virKModLoad blocks with it, cannot know
what happened internally, and has no chance to handle it.
Most modules need only dozens of milliseconds to finish loading.
On my slow PC, the 'nbd' module needs about 2 milliseconds to load and
'8021q' needs about 30 milliseconds. I will try to test other PCI drivers
as well.
If loading takes longer than a few seconds, we can presume something has gone
wrong in the module's loading process. With timeout mode, virKModLoad can
report that to the caller, and the caller can decide how to handle it.
The timeout for virKModLoad is just insurance against blocking
indefinitely.
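To make that concrete, here is a rough sketch of how the modprobe helper
behind virKModLoad could arm the proposed timeout. Treat it as an
illustration only: the helper name doModprobeWithTimeout and the
VIR_KMOD_LOAD_TIMEOUT_MS constant are mine, not the patch itself.

    /* Illustration only: doModprobeWithTimeout and
     * VIR_KMOD_LOAD_TIMEOUT_MS are made up for this sketch; the real
     * helper in virkmod.c may look different. */
    #define VIR_KMOD_LOAD_TIMEOUT_MS 3000  /* proposed 3 second default */

    static int
    doModprobeWithTimeout(const char *module, char **errbuf)
    {
        int ret = -1;
        virCommandPtr cmd = virCommandNew(MODPROBE);

        virCommandAddArg(cmd, module);
        if (errbuf)
            virCommandSetErrorBuffer(cmd, errbuf);

        /* Proposed API: abort the child if loading takes too long */
        virCommandSetTimeout(cmd, VIR_KMOD_LOAD_TIMEOUT_MS);

        if (virCommandRun(cmd, NULL) < 0)
            goto cleanup;  /* virCommandGetErr(cmd) == ETIME on timeout */

        ret = 0;

     cleanup:
        virCommandFree(cmd);
        return ret;
    }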
Calls to virCommandRun/Async are more varied, so the timeout is just an
optional mode, and developers can decide whether to use it for a
particular command.
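For reference, the new entry point would presumably be declared along
these lines (my sketch; the real declaration is whatever the patch adds
to vircommand.h):

    /* Sketch of the assumed declaration, not the patch itself:
     * a positive timeout, in milliseconds, arms the limit. */
    void virCommandSetTimeout(virCommandPtr cmd, int timeout);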
With virCommandSetTimeout, a command can be given a timeout in milliseconds.
It can be used under one of these conditions:
1) In synchronous mode, using virCommandRun NOT in daemon mode.
By default, virCommandRun waits for the command to complete and exit, then
checks its exit status. If the caller uses virCommandSetTimeout beforehand,
the command is aborted internally when the timeout expires, and there is no
need to call virCommandAbort to reap the child process. For example:
    int timeout = 3000; /* in milliseconds */

    virCommandSetTimeout(cmd, timeout);
    if (virCommandRun(cmd, NULL) < 0) {
        if (virCommandGetErr(cmd) == ETIME)
            ... do something for timeout, e.g. report an error ...
        return -1;
    }
The timeout argument of virCommandSetTimeout accepts a positive value. On
timeout, virCommandRun returns -1 and the caller can use virCommandGetErr
to check the error code; ETIME means the command timed out.
Note: virCommandSetTimeout cannot be mixed with virCommandDaemonize;
otherwise virCommandRun fails immediately.
2) In asynchronous mode, the caller can likewise call virCommandSetTimeout
before virCommandRunAsync to limit the running time of the command, then use
virCommandWait to wait for the child process to complete and exit.
virCommandWait returns -1 on timeout, and the caller can use virCommandGetErr
to check the reason. For example:
    char *outbuf = NULL;
    char *errbuf = NULL;
    int timeout = 3000; /* in milliseconds */

    virCommandSetTimeout(cmd, timeout);
    virCommandSetOutputBuffer(cmd, &outbuf);
    virCommandSetErrorBuffer(cmd, &errbuf);
    virCommandDoAsyncIO(cmd);

    if (virCommandRunAsync(cmd, NULL) < 0)
        return -1;

    ... do something ...

    if (virCommandWait(cmd, NULL) < 0) {
        if (virCommandGetErr(cmd) == ETIME)
            ... do something for timeout, e.g. report an error ...
        return -1;
    }
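For what it's worth, the abort-on-timeout semantics above could be
implemented with an ordinary polling wait around waitpid(2). The sketch
below is mine, written as standalone C rather than the patch's
vircommand.c code, just to show the intended behaviour:

    #include <errno.h>
    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    /* Hypothetical helper: wait up to timeout_ms for pid to exit; on
     * expiry, kill and reap the child and report ETIME.  This mirrors
     * the semantics described above, not the actual patch code. */
    static int
    waitWithTimeout(pid_t pid, int timeout_ms, int *status)
    {
        struct timespec start, now;
        clock_gettime(CLOCK_MONOTONIC, &start);

        for (;;) {
            pid_t ret = waitpid(pid, status, WNOHANG);
            if (ret == pid)
                return 0;                    /* child exited in time */
            if (ret < 0 && errno != EINTR)
                return -1;                   /* real waitpid failure */

            clock_gettime(CLOCK_MONOTONIC, &now);
            long elapsed_ms = (now.tv_sec - start.tv_sec) * 1000 +
                              (now.tv_nsec - start.tv_nsec) / 1000000;
            if (elapsed_ms >= timeout_ms) {
                kill(pid, SIGKILL);          /* abort the child */
                waitpid(pid, status, 0);     /* reap it */
                errno = ETIME;
                return -1;
            }

            usleep(10 * 1000);               /* poll every 10 ms */
        }
    }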
Regards,
Shilei