On 30.03.2020 13:41, Daniel P. Berrangé wrote:
> On Sun, Mar 29, 2020 at 02:33:41PM +0300, nshirokovskiy wrote:
>>
>>
>> On 26.03.2020 20:50, Daniel P. Berrangé wrote:
>>> On Fri, Feb 28, 2020 at 10:09:41AM +0300, Nikolay Shirokovskiy wrote:
>>>> On 27.02.2020 16:48, Daniel P. Berrangé wrote:
>>>>> On Thu, Feb 27, 2020 at 03:57:04PM +0300, Nikolay Shirokovskiy wrote:
>>>>>> Hi, everyone.
>>>>>>
>>>>>> I'm working on supporting domain renaming when it has snapshots, which is
>>>>>> not supported now. And it strikes me that things will be much simpler to
>>>>>> manage on renaming if we use the uuid in filenames instead of domain names.
>>>>>>
>>>
>>>
>>>
>>>>>> 4. No issues with long domain names and the filename length limit
>>>>>>
>>>>>> If the above conversion makes sense, I guess the good time to apply it is
>>>>>> on domain start (and on rename, to support renaming with snapshots).
>>>>>
>>>>> The above has not considered the benefit that using the VM name
>>>>> has. Essentially the UUID is good for machines, the VM name is
>>>>> good for humans. Seeing the guest XML files, or VM log files
>>>>> using a filename based on UUID instead of name is a *really*
>>>>> unappealing idea to me.
>>>>
>>>> I agree. But we can also keep symlinks with domain names for configs/logs
>>>> etc. This can be done by a separate tool, as I suggested in the mail, or we
>>>> can maintain the symlinks always. The idea is that a failure in this
>>>> symlinking won't affect daemon functionality, as the symlinks are for humans)
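
For illustration, a minimal sketch of that best-effort symlinking; the
paths and the helper name here are made up, not libvirt's actual layout:

  #include <errno.h>
  #include <limits.h>
  #include <stdio.h>
  #include <unistd.h>

  /* Config is stored under the UUID; a name-based symlink is created
   * next to it purely for humans.  Failure is logged and ignored so
   * it can never affect daemon functionality. */
  static void
  makeNameSymlink(const char *cfgDir, const char *uuid, const char *name)
  {
      char target[PATH_MAX];
      char link[PATH_MAX];

      snprintf(target, sizeof(target), "%s/%s.xml", cfgDir, uuid);
      snprintf(link, sizeof(link), "%s/by-name/%s.xml", cfgDir, name);

      unlink(link);                  /* drop a stale link, if any */
      if (symlink(target, link) < 0)
          fprintf(stderr, "cannot symlink %s: %d\n", link, errno);
  }
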
>>>
>>> I've just realized that there is potential overlap between what we're
>>> discussing in this thread, and in the thread about localhost migration:
>>>
>>> https://www.redhat.com/archives/libvir-list/2020-February/msg00061.html
>>>
>>> In the localhost migration case, we need to be able to start up a new
>>> guest with the same name as an existing guest. The way we can achieve
>>> that is by thinking of localhost migration as being a pair of domain
>>> rename operations.
>>>
>>> ie, consider guest "foo" we want to localhost-migrate:
>>>
>>> - Start target guest "foo-incoming"
>>> - Run live migration from "foo" -> "foo-incoming"
>>> - Migration completes, CPUs stop
>>> - Rename "foo" to "foo-outgoing"
>>> - Rename "foo-incoming" to "foo"
>>> - Tidy up migration state
>>> - Destroy source guest "foo-outgoing"
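
(For illustration, the above sequence in pseudo-C, just to make the
ordering explicit; every helper here is hypothetical, none of them is
an existing libvirt API:)

  #include <stdio.h>

  int startGuest(const char *name);
  int runLiveMigration(const char *src, const char *dst);
  int renameGuest(const char *from, const char *to);
  int tidyUpMigrationState(const char *name);
  int destroyGuest(const char *name);

  static int
  localhostMigrate(const char *name)
  {
      char incoming[256], outgoing[256];

      snprintf(incoming, sizeof(incoming), "%s-incoming", name);
      snprintf(outgoing, sizeof(outgoing), "%s-outgoing", name);

      startGuest(incoming);              /* target comes up, waits for data */
      runLiveMigration(name, incoming);  /* on completion the CPUs stop */

      renameGuest(name, outgoing);       /* "foo"          -> "foo-outgoing" */
      renameGuest(incoming, name);       /* "foo-incoming" -> "foo" */

      tidyUpMigrationState(name);
      return destroyGuest(outgoing);     /* drop the source guest */
  }
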
>>
>> I think local migration does not fit really nicely into this scheme:
>>
>> - one cannot treat the outgoing and incoming VMs as just regular VMs,
>>   as one cannot put them into the same list since they have the same UUID
> Yes, that is a tricky issue, but one that we need to solve, as the need
> to have a completely separate list of VMs is the thing I dislike the
> most about the local migration patches.
>
> One option is to make the target VM have a different UUID while it is
> migrating. eg have a migration UUID generated on daemon startup, such as
> 0466e1ae-a71a-4e75-89ca-c3591a4cf220. Then XOR this migration UUID
> with the source VM's UUID. So during live migration the target VM
> will appear with this XOR'd UUID, and once completed, it will get
> the real UUID again.
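
The XOR trick is nicely self-inverse, as a quick sketch shows (libvirt's
VIR_UUID_BUFLEN is 16 bytes; the function name here is made up):

  #include <stddef.h>

  #define UUID_BUFLEN 16   /* raw UUID size, as in VIR_UUID_BUFLEN */

  /* Derive the temporary target UUID: XOR the source VM's UUID with
   * the per-daemon migration UUID.  Applying the same XOR again
   * restores the original, so the real UUID comes back on completion. */
  static void
  migrationUUIDXor(const unsigned char *src, const unsigned char *mig,
                   unsigned char *out)
  {
      for (size_t i = 0; i < UUID_BUFLEN; i++)
          out[i] = src[i] ^ mig[i];
  }
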
>
> A different option is to not keep the target VM in the domain list
> at all. Instead virDomainObjPtr could have a pointer to a second
> virDomainObjPtr which stores the target VM temporarily.

Both choices have their issues and advantages.

With the first approach the incoming VM is visible as a regular one.
This can be beneficial in that one can inspect the VM for debugging
purposes just like a regular one. On the other hand the appearance of
the VM can be unexpected to mgmt, and some mgmt may even try to destroy
it. So the second approach looks more transparent from the mgmt POV.

I should say that in Virtuozzo we have a patch series for local
migration. I decided not to send it to the list, as the earlier
decision was that the complexity/utility trade-off is not on the
feature's side. In that series I used the latter approach, keeping a
link to the peer object. The link is bidirectional, thus it is very
simple to get the peer object from both sides.

I want to mention another decision that turned out to be successful:
using the same mutex for both domain objects. This way we don't need
to care about "locking order"/"re-locking to keep the locking order"
to avoid deadlocks. Accessing the peer object is as simple as
vm->peer->... And this plays nicely with the current migration code,
where we drop the lock at the end of a migration phase so that the
next phase can take the same lock.
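
A minimal sketch of how the peer link and the shared mutex fit
together; the struct is loosely modelled on virDomainObj and all the
names are simplified for illustration:

  #include <pthread.h>

  typedef struct _DomainObj DomainObj;
  struct _DomainObj {
      pthread_mutex_t *lock;  /* both peers point at the SAME mutex */
      DomainObj *peer;        /* incoming <-> outgoing, bidirectional */
      /* ... the usual domain state ... */
  };

  /* Link the incoming VM to its outgoing peer.  Since both objects
   * share one mutex, locking either of them protects the pair, so
   * vm->peer->... can be dereferenced with no lock-ordering worries. */
  static void
  domainObjSetPeer(DomainObj *out, DomainObj *in)
  {
      in->lock = out->lock;   /* reuse the source VM's mutex */
      out->peer = in;
      in->peer = out;
  }
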
As to handling clashing resources, we support the following cases:

- tap interfaces
- tcp/unix chardev backends
- vnc

For taps we use multiqueue mode, so that we can have multiple fds for a
tap during local migration. The traffic is split somehow between the
fds, so effectively traffic to the domain is degraded during the
migration, but this should not go on for long and TCP is supposed to
handle it.
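
The extra fds come from the standard Linux tuntap API: opening
/dev/net/tun again with IFF_MULTI_QUEUE and the same ifname attaches
one more queue to the existing device. A sketch, with error handling
trimmed:

  #include <fcntl.h>
  #include <string.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/if.h>
  #include <linux/if_tun.h>

  /* Return one more queue fd for an existing multiqueue tap. */
  static int
  tapOpenQueue(const char *ifname)
  {
      struct ifreq ifr;
      int fd = open("/dev/net/tun", O_RDWR);

      if (fd < 0)
          return -1;

      memset(&ifr, 0, sizeof(ifr));
      ifr.ifr_flags = IFF_TAP | IFF_NO_PI | IFF_MULTI_QUEUE;
      strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);

      if (ioctl(fd, TUNSETIFF, &ifr) < 0) {
          close(fd);
          return -1;
      }
      return fd;
  }
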
For chardevs we decided to use an unplug/plug approach for the
backends. The incoming domain is started with a null backend; later,
right after the CPUs of the outgoing domain are stopped, the backend is
unplugged from the outgoing domain and plugged into the incoming
domain. I guess this is a bit worse than the approach with symlinks, as
there is a gap in time when no backend is available to the client in
server mode. At the same time this is more suitable for client mode, as
there is always just a single client connection.
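
The hand-off itself maps to QEMU's chardev-remove/chardev-add QMP
commands; the monitor wrappers in this sketch are hypothetical:

  typedef struct Monitor Monitor;  /* opaque, stands in for the QMP monitor */
  int monitorChardevRemove(Monitor *mon, const char *id);
  int monitorChardevAdd(Monitor *mon, const char *id, const char *backend);

  /* Move a chardev backend from the outgoing to the incoming domain.
   * The outgoing CPUs are already stopped here, so no guest I/O can
   * race with the hand-off. */
  static int
  chardevHandOff(Monitor *srcMon, Monitor *dstMon,
                 const char *chrId, const char *backendSpec)
  {
      if (monitorChardevRemove(srcMon, chrId) < 0)  /* QMP: chardev-remove */
          return -1;
      return monitorChardevAdd(dstMon, chrId, backendSpec); /* QMP: chardev-add */
  }
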
For vnc the approach is similar to chardevs: unplug from the outgoing
domain after the CPUs are stopped and plug into the incoming domain.
This way we can do the migration even without autoport.

Nikolay