[libvirt-users] using virDomainMigrateSetMaxDowntime

I would like to use virDomainMigrateSetMaxDowntime, but I'm a bit confused about how to use it. If I try to set the downtime before I call domain.migrate(), I get the error "domain is not being migrated". But I cannot call it afterwards, because domain.migrate() does not return until the migration has completed. Am I meant to put the call to domain.migrate() into a separate thread of control, and then call SetMaxDowntime from my main thread after the migration has begun? How do I even know whether the migration has begun? Thanks! --Igor
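For what it's worth, the threading pattern Igor describes can be sketched like this. The `FakeDomain` class below is a stand-in (an assumption, not part of libvirt) that mimics a blocking `migrate()` and the "domain is not being migrated" error, so the pattern itself is runnable without a hypervisor; with the real Python bindings the corresponding call would be `dom.migrateSetMaxDowntime(downtime)`.

```python
import threading
import time

class FakeDomain:
    """Stand-in for a libvirt domain: migrate() blocks until the 'migration' ends."""
    def __init__(self):
        self._migrating = threading.Event()
        self.max_downtime_ms = None

    def migrate(self):
        self._migrating.set()            # migration has begun
        time.sleep(0.2)                  # simulate a long, blocking migration
        self._migrating.clear()

    def migrateSetMaxDowntime(self, downtime_ms):
        if not self._migrating.is_set():
            raise RuntimeError("domain is not being migrated")
        self.max_downtime_ms = downtime_ms

dom = FakeDomain()
worker = threading.Thread(target=dom.migrate)
worker.start()
dom._migrating.wait()                    # wait until the migration has actually begun
dom.migrateSetMaxDowntime(50)            # now the call succeeds
worker.join()
print(dom.max_downtime_ms)               # -> 50
```

In real code you would not have a convenient event to wait on; polling the domain's job status from the main thread is one way to detect that the migration is under way.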

On 2010-12-17 16:34, Igor Serebryany wrote:
I would like to use the virDomainMigrateSetMaxDowntime but I'm a bit confused about how to use it.
If I try to set the downtime before I call domain.migrate(), I get the error "domain is not being migrated". But I cannot call it afterwards because domain.migrate() does not return until the migration has completed.
The API is intended to prevent the migration from lasting forever when the source domain's memory is being dirtied so quickly that QEMU never has time to do the final completion switchover; it is not meant to guarantee that the migration finishes more quickly. So yes, domain.migrate() will not return until it's finished. - Osier

On Mon, Dec 20, 2010 at 09:48:21AM +0800, Osier Yang wrote:
so yes, domain.migrate() will not return until it's finished.
The problem is not that domain.migrate() will not return until it's finished; that makes perfect sense and is totally fine. The problem is that I can't seem to do ANYTHING else with libvirt while domain.migrate() is running, including getting a domain object I can call setMaxDowntime on, or even managing other, totally unrelated VMs. I am trying to figure out whether this is just me doing something wrong, or a problem/design decision inside libvirt. --Igor
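One possible explanation for the blocking Igor sees is that calls on a single libvirt connection may be serialized, so a common workaround is to open a dedicated connection for the blocking migration and a second connection for everything else. The sketch below simulates that with a per-connection lock (the `FakeConnection` class is a stand-in, not libvirt's API); with the real bindings this would correspond to calling `libvirt.open()` twice.

```python
import threading
import time

class FakeConnection:
    """Stand-in for a libvirt connection whose calls are serialized per connection."""
    def __init__(self):
        self._lock = threading.Lock()    # one outstanding call at a time

    def migrate(self, duration=0.2):
        with self._lock:                 # migrate() holds the connection the whole time
            time.sleep(duration)

    def list_domains(self):
        with self._lock:                 # would block if migrate() held this same lock
            return ["vm1", "vm2"]

# One connection for the blocking migration, a second for everything else.
migrate_conn = FakeConnection()
control_conn = FakeConnection()

worker = threading.Thread(target=migrate_conn.migrate)
worker.start()
time.sleep(0.05)                         # let the migration get under way
print(control_conn.list_domains())       # succeeds immediately on its own connection
worker.join()
```

Whether a second connection actually unblocks things depends on where the serialization happens (client library vs. driver vs. daemon), so this is a pattern to try, not a guaranteed fix.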
participants (2):
- Igor Serebryany
- Osier Yang