On 08/20/2012 02:10 AM, MATSUDA, Daiki wrote:
> (2012/08/16 21:58), Martin Kletzander wrote:
>> On 08/15/2012 03:36 AM, MATSUDA Daiki wrote:
>>>
>>> Add @seconds variable to qemuAgentSend().
>>> When @timeout is true, @seconds controls how long to wait for a response
>>> (if @seconds is VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT, default to
>>> QEMU_AGENT_WAIT_TIME).
>>> If @timeout is false, @seconds is ignored.
>>>
>>>
>>> Signed-off-by: MATSUDA Daiki <matsudadik@intellilink.co.jp>
>>> ---
>>>  include/libvirt/libvirt-qemu.h |  6 ++++++
>>>  src/qemu/qemu_agent.c          | 30 ++++++++++++++++++++----------
>>>  2 files changed, 26 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/include/libvirt/libvirt-qemu.h b/include/libvirt/libvirt-qemu.h
>>> index a37f897..013ed5a 100644
>>> --- a/include/libvirt/libvirt-qemu.h
>>> +++ b/include/libvirt/libvirt-qemu.h
>>> @@ -44,6 +44,12 @@ virDomainPtr virDomainQemuAttach(virConnectPtr domain,
>>>                                   unsigned int pid_value,
>>>                                   unsigned int flags);
>>>
>>> +typedef enum {
>>> +    VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK = -2,
>> Correct me if I'm wrong, but isn't this the same as setting timeout to
>> false?
> Yes, it is the same as passing @timeout = false to qemuAgentSend() from
> qemuAgentCommand().
>>> +    VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT = -1,
>>> +    VIR_DOMAIN_QEMU_AGENT_COMMAND_NOWAIT = 0,
>>> +} virDomainQemuAgentCommandTimeoutValues;
>>> +
>>>  # ifdef __cplusplus
>>>  }
>>>  # endif
>>> diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
>>> index 15af758..26c2726 100644
>>> --- a/src/qemu/qemu_agent.c
>>> +++ b/src/qemu/qemu_agent.c
>>> @@ -837,6 +837,8 @@ void qemuAgentClose(qemuAgentPtr mon)
>>>   * @mon: Monitor
>>>   * @msg: Message
>>>   * @timeout: use timeout?
>>> + * @seconds: timeout seconds. if VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT and
>>> + * @timeout is true, use default value.
>>
>> If my previous assumption is correct, then the timeout parameter can be
>> completely thrown away.
> @seconds is only effective when @timeout = true.
> In that case @seconds must be > 0; @seconds = 0 would be dangerous.
> I am not sure whether it is needed, but other people discussed it:
> https://www.redhat.com/archives/libvir-list/2012-July/msg00603.html
I'm not suggesting changing the behavior; I just see that if the
'timeout' parameter is omitted, we can still get all the options to run
the command with (nonblock/timeout/block), so it's just a simplification.
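
A minimal sketch of that simplification (illustrative only, not a
tested patch; it keeps the enum values and helpers from the patch above
and elides the unchanged rest of the function):

static int qemuAgentSend(qemuAgentPtr mon,
                         qemuAgentMessagePtr msg,
                         int seconds)
{
    int ret = -1;
    unsigned long long now, then = 0;
    /* Derive all three modes from @seconds alone:
     *   VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK   (-2): wait forever
     *   VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT (-1): wait QEMU_AGENT_WAIT_TIME
     *   VIR_DOMAIN_QEMU_AGENT_COMMAND_NOWAIT   (0): don't wait at all
     *   > 0: wait that many seconds */
    bool timeout = (seconds != VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK);

    if (timeout) {
        if (virTimeMillisNow(&now) < 0)
            return -1;
        then = now + (seconds == VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT ?
                      QEMU_AGENT_WAIT_TIME : seconds * 1000ull);
    }

    /* ... the rest of qemuAgentSend() would stay as it is now ... */
    return ret;
}

Every caller would then pass a single value, and the bool could never
contradict @seconds.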
>>>   *
>>>   * Send @msg to agent @mon.
>>>   * Wait max QEMU_AGENT_WAIT_TIME for agent
>>> @@ -848,7 +850,8 @@ void qemuAgentClose(qemuAgentPtr mon)
>>>   */
>>>  static int qemuAgentSend(qemuAgentPtr mon,
>>>                           qemuAgentMessagePtr msg,
>>> -                         bool timeout)
>>> +                         bool timeout,
>>> +                         int seconds)
>>>  {
>>>      int ret = -1;
>>>      unsigned long long now, then = 0;
>>> @@ -864,7 +867,8 @@ static int qemuAgentSend(qemuAgentPtr mon,
>>>      if (timeout) {
>>>          if (virTimeMillisNow(&now) < 0)
>>>              return -1;
>>> -        then = now + QEMU_AGENT_WAIT_TIME;
>>> +        then = now + (seconds == VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT ?
>>> +                      QEMU_AGENT_WAIT_TIME : seconds * 1000ull);
>>>      }
>>>
>> Also if seconds == VIR_DOMAIN_QEMU_AGENT_COMMAND_BLOCK, then this causes
>> 'then' to be smaller than now. I'm not sure what that would do with
>> pthread_cond_timedwait, but that's definitely not wanted behavior.
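
To illustrate: with @timeout = true and @seconds = -2 the line above
computes then = now - 2000, i.e. a deadline that has already passed,
and POSIX specifies that pthread_cond_timedwait() returns ETIMEDOUT
immediately when the absolute deadline is already in the past, so the
send would fail instantly instead of blocking. A defensive check could
look like this (a sketch only, not part of the posted patch):

if (timeout) {
    /* Only DEFAULT (-1) or a positive number of seconds is valid here;
     * BLOCK (-2) combined with timeout == true would put the deadline
     * in the past, so reject it up front. */
    if (seconds < VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT)
        return -1;
    if (virTimeMillisNow(&now) < 0)
        return -1;
    then = now + (seconds == VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT ?
                  QEMU_AGENT_WAIT_TIME : seconds * 1000ull);
}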
> That situation is blocking when qemuAgentSend() is called from
> qemuAgentCommand(). But qemuAgentSend() may also need to contain the
> blocking code.
OK, you're right. It is blocked when qemuAgentSend is called. It
still looks unclean to me, though, because of the redundant option. It'd
be great to have another opinion from someone else :)
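
For reference, a standalone demonstration (plain C with pthreads,
independent of the libvirt code) that an absolute deadline in the past
makes pthread_cond_timedwait() return ETIMEDOUT immediately rather than
block:

#include <errno.h>
#include <pthread.h>
#include <stdio.h>
#include <time.h>

int main(void)
{
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    struct timespec then;
    int rc;

    /* Build an absolute deadline two seconds in the past. */
    clock_gettime(CLOCK_REALTIME, &then);
    then.tv_sec -= 2;

    pthread_mutex_lock(&lock);
    rc = pthread_cond_timedwait(&cond, &lock, &then);
    pthread_mutex_unlock(&lock);

    /* Prints "timed out: yes" with no delay. */
    printf("timed out: %s\n", rc == ETIMEDOUT ? "yes" : "no");
    return 0;
}

(Compile with: cc demo.c -o demo -lpthread, adding -lrt for
clock_gettime on older glibc.)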
Martin