Hi
On Tue, Mar 22, 2022 at 8:02 PM Michal Privoznik <mprivozn@redhat.com>
wrote:
When virCommandSetSendBuffer() is used on a virCommand that is
(or will be) daemonized, the command must have the
VIR_EXEC_ASYNC_IO flag set no later than at the
virCommandRunAsync() phase, so that the thread doing the IO is
spawned and the buffers can be sent to the process.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/util/vircommand.c | 3 +++
1 file changed, 3 insertions(+)
diff --git a/src/util/vircommand.c b/src/util/vircommand.c
index 41cf552d7b..5f22bd0ac3 100644
--- a/src/util/vircommand.c
+++ b/src/util/vircommand.c
@@ -1719,6 +1719,9 @@ virCommandFreeSendBuffers(virCommand *cmd)
* @buffer is always stolen regardless of the return value. This function
* doesn't raise a libvirt error, but rather propagates the error via virCommand.
* Thus callers don't need to take a special action if -1 is returned.
+ *
+ * When the @cmd is daemonized via virCommandDaemonize() remember to request
+ * asynchronous IO via virCommandDoAsyncIO().
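For my own understanding, the caller-side ordering the new comment asks
for would look roughly like this (a minimal, untested sketch that only
builds inside the libvirt tree; the pipe plumbing, the payload and
"/usr/bin/foo" are made up for illustration, and the exact
virCommandSetSendBuffer() signature may differ):

    #include <string.h>
    #include "internal.h"
    #include "vircommand.h"
    #include "virfile.h"

    static int
    runDaemonizedWithBuffer(void)
    {
        const char *payload = "payload";  /* hypothetical data */
        unsigned char *buf = (unsigned char *) g_strdup(payload);
        int pipefd[2] = { -1, -1 };
        g_autoptr(virCommand) cmd = virCommandNewArgList("/usr/bin/foo", NULL);

        if (virPipeQuiet(pipefd) < 0)
            return -1;

        /* hand the read end to the child; the parent's IO thread
         * writes the buffer into the write end */
        virCommandPassFD(cmd, pipefd[0], VIR_COMMAND_PASS_FD_CLOSE_PARENT);

        /* @buf is stolen regardless of the return value, so no
         * special error handling is needed here */
        virCommandSetSendBuffer(cmd, pipefd[1], buf, strlen(payload));

        virCommandDaemonize(cmd);

        /* the point of the new comment: without this, the daemonized
         * child never receives the buffer */
        virCommandDoAsyncIO(cmd);

        if (virCommandRunAsync(cmd, NULL) < 0)
            return -1;

        return 0;
    }
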
Otherwise, shouldn't RunAsync() return an error in that case?
Or why not call DoAsyncIO() implicitly in RunAsync() then?
(sorry if I'm repeating my earlier question, just trying to be more
precise :)
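Concretely, I mean something like this near the top of
virCommandRunAsync() in vircommand.c (purely illustrative; the field
names cmd->flags and cmd->numSendBuffers, and the choice of error code,
are my assumptions about the internals):

    /* hypothetical guard in virCommandRunAsync() */
    if ((cmd->flags & VIR_EXEC_DAEMON) &&
        cmd->numSendBuffers > 0 &&
        !(cmd->flags & VIR_EXEC_ASYNC_IO)) {
        /* either reject the combination ... */
        virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                       _("daemonized command with send buffers requires async IO"));
        return -1;
        /* ... or silently enable it: cmd->flags |= VIR_EXEC_ASYNC_IO; */
    }
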
thanks
--
Marc-André Lureau