On 09/22/14 18:37, Cole Robinson wrote:
On 09/21/2014 10:58 AM, Laszlo Ersek wrote:
>> (2) Unfortunately, even libvirtd needs to be modified, in addition.
>>
>> My patch for (1) *does* pass VIR_DOMAIN_UNDEFINE_NVRAM to libvirtd (I
>> verified that with gdb), but libvirtd doesn't act upon it correctly.
>> Namely, in the libvirtd source code, in qemuDomainUndefineFlags(),
>> commit 273b6581 added:
>>
>>     if (!virDomainObjIsActive(vm) &&
>>         vm->def->os.loader && vm->def->os.loader->nvram &&
>>         virFileExists(vm->def->os.loader->nvram)) {
>>         if (!(flags & VIR_DOMAIN_UNDEFINE_NVRAM)) {
>>             virReportError(VIR_ERR_OPERATION_INVALID, "%s",
>>                            _("cannot delete inactive domain with nvram"));
>>             goto cleanup;
>>         }
>>
>>         if (unlink(vm->def->os.loader->nvram) < 0) {
>>             virReportSystemError(errno,
>>                                  _("failed to remove nvram: %s"),
>>                                  vm->def->os.loader->nvram);
>>             goto cleanup;
>>         }
>>     }
>
>> Here "vm->def->os.loader->nvram" evaluates to NULL:
>>
>> 6468         if (!virDomainObjIsActive(vm) &&
>> 6469             vm->def->os.loader && vm->def->os.loader->nvram &&
>> 6470             virFileExists(vm->def->os.loader->nvram)) {
>> 6471             if (!(flags & VIR_DOMAIN_UNDEFINE_NVRAM)) {
>> 6472                 virReportError(VIR_ERR_OPERATION_INVALID, "%s",
>>
>> (gdb) print vm->def->os.loader
>> $1 = (virDomainLoaderDefPtr) 0x7ff59c00bf20
>>
>> (gdb) print vm->def->os.loader->nvram
>> $2 = 0x0
>>
>> I thought that the auto-generation of the nvram pathname (internal to
>> libvirtd, ie. not visible in the XML file) would occur on the undefine
>> path as well. Apparently this is not the case.
>>
>> Indeed, the only call to qemuPrepareNVRAM() is from qemuProcessStart().
>>
>> Michal, would it be possible to generate the *pathname* of the separate
>> varstore in qemuDomainUndefineFlags() too?
>>
>> Please see the second attached patch; it works for me. (This patch is
>> best looked at with "git diff -b" (or "git show -b").)
>>
> Thanks for the analysis, Laszlo.
>
> Things worked fine for me originally... but now that I've played with
> this some more, there is definitely some funkiness here at the libvirt
> level that explains it.
>
> When the VM is defined, its XML looks like this:
>
>     <os>
>       <type arch='x86_64' machine='pc-i440fx-2.1'>hvm</type>
>       <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
>       <boot dev='network'/>
>     </os>
>
> Which is fine. After the first time the VM starts, the XML looks like:
>
>     <os>
>       <type arch='x86_64' machine='pc-i440fx-2.1'>hvm</type>
>       <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
>       <nvram>/var/lib/libvirt/qemu/nvram/ovmf_VARS.fd</nvram>
>       <boot dev='network'/>
>     </os>
>
> The generated nvram path is in the XML regardless of VM state from that
> point forward. Seems fine.
>
> However, when libvirtd is restarted, the generated nvram path is gone.
> So some part of the process is not actually saving the path to disk.
> Yes, the pathname is generated and/or the contents are copied in
> qemuProcessStart() -> qemuPrepareNVRAM(), which I gather are called
> when you start the VM, not when you define it. That seems problematic.
>
> I say that at define time, if there's a template, we should do the
> copying then, and hardcode the nvram path before even initially saving
> the XML to disk. Then it's safely hardcoded forever. That's similar to
> the qemu-ga socket path auto-generation, where libvirt will generate a
> path for us in the proper location; it seems to be hardcoded at define
> time (though it doesn't create the socket).
(2) I think that minimally the copying from the varstore template must
happen at startup. Otherwise, if you lose your varstore file, you won't
be able to start the VM at all, until you recreate the varstore manually
from the template (which is exactly what libvirtd should save you from).

A particular, valid use case for this functionality is when you want to
"reset" your variable store. You just delete the old one (while the VM
is shut down) and libvirt will recreate it for you, from the pristine
template, at the next VM startup.
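To make the point concrete, here is a rough, self-contained sketch of the startup-time behavior I'm arguing for (this is not libvirt code; the function name and the plain buffer-copy are my own invention -- the real logic lives in qemuPrepareNVRAM()): if the per-VM varstore is missing, recreate it from the pristine template before boot.

```c
#include <stdio.h>
#include <unistd.h>

/* Hypothetical sketch, not libvirt code: recreate a missing per-VM
 * varstore from the pristine template at VM startup. Returns 0 on
 * success (including "varstore already present"), -1 on failure. */
static int
prepare_nvram(const char *nvram_path, const char *template_path)
{
    char buf[4096];
    size_t n;
    FILE *src, *dst;
    int ret = 0;

    if (access(nvram_path, F_OK) == 0)
        return 0;                    /* varstore already exists; keep it */

    if (!(src = fopen(template_path, "rb")))
        return -1;                   /* no template to copy from */
    if (!(dst = fopen(nvram_path, "wb"))) {
        fclose(src);
        return -1;
    }

    while ((n = fread(buf, 1, sizeof(buf), src)) > 0)
        if (fwrite(buf, 1, n, dst) != n) {
            ret = -1;
            break;
        }
    if (ferror(src))
        ret = -1;

    fclose(src);
    if (fclose(dst) != 0)
        ret = -1;
    return ret;
}
```

With something like this running at every startup, deleting the varstore while the VM is shut down simply causes it to be repopulated from the template at the next start -- exactly the "reset" use case above.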
(1) Moving the generation of the varstore pathname to define time is
also somewhat brittle, similarly to the above. If you accidentally
deleted the nvram element while in "virsh edit", then the proposed
change (== don't generate an nvram pathname at startup, if it's missing)
would prevent the VM from starting, and you'd have to restore the
element manually.
BTW I could not find the qemu-ga socket generation code in libvirt. (I
grepped for "org.qemu.guest_agent".) I did find something relevant in
"virtinst/devicechar.py" though, and CHANNEL_NAME_QEMUGA leads further.
Also I think if you accidentally remove the guest agent channel from the
domain XML, it won't break the VM completely, and (I think!) you can
even re-add it in virt-manager.
I think this can be made robust; we "just" need to start thinking about
the nvram element's text contents as "procedural" -- probably a getter
function should be factored out. If the text() child exists, then that's
the returned value (as a human-provided override); if it doesn't, then
the name should be auto-generated, based on the VM's UUID.
I agree that this is complex, but the domain XML should accommodate all
of the following:
- simple loader element, or type='rom': usual BIOS thing

- type='pflash' readonly='no': unified OVMF.fd file containing both
  binary and varstore. Consequences:
  - no concept of a varstore template
  - no concept of a varstore
  - hence no pathname generation and no contents copying
  - this is a "fringe" case for completeness only

- type='pflash' readonly='yes': split OVMF binary and varstore.
  Consequences:
  a. loader specifies the binary only
  b. nvram text() child enables a user-provided, VM-specific varstore
     pathname, if present
  c. lack of nvram text() child requests auto-generation based on VM
     name ^W UUID (see my earlier patch about the UUID)
  d. in nvram's template attribute, if present, the user provides the
     pathname of a varstore template file compatible with the loader
     binary
  e. if nvram's template attribute is missing, then libvirtd insists on
     being able to resolve the loader binary to a varstore template
     using qemu.conf's "nvram" setting
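For (e), the resolution could look roughly like this -- a sketch against qemu.conf's nvram list of "loader:template" strings; the function name and calling convention are invented here:

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical sketch of (e): map a loader binary to its varstore
 * template using qemu.conf-style "loader:template" pairs. Returns a
 * pointer into the matching entry, or NULL if there is no mapping
 * (in which case libvirtd would report an error). */
static const char *
resolve_template(const char *loader, const char *const *pairs, size_t npairs)
{
    size_t i, loader_len = strlen(loader);

    for (i = 0; i < npairs; i++) {
        const char *colon = strchr(pairs[i], ':');

        if (!colon)
            continue;                      /* malformed entry; skip */
        if ((size_t)(colon - pairs[i]) == loader_len &&
            memcmp(pairs[i], loader, loader_len) == 0)
            return colon + 1;              /* template path follows ':' */
    }
    return NULL;
}
```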
If you restrict (c) or (d)/(e) to define time only, then things will
work in the "optimal" case, but as soon as the user deletes or loses the
VM-specific varstore, and/or loses the <nvram> element, things will
break much less gracefully.
In any case I don't insist either way; I'll defer to Michal and you.
> (also, another issue I just noticed... libvirt tries to use the
> /var/lib/libvirt...nvram path if connected to qemu:///session; it
> probably wants whatever the typical path under $HOME is)
I don't know what the "typical path under $HOME" is, but this case is
certainly addressable with (b) -- as long as the tools expose it. :)
Virt-install does:

  --boot loader=/usr/share/OVMF/OVMF_CODE.fd,loader_ro=yes,loader_type=pflash,nvram=$HOME/.../guest-VARS.fd

This is (b)+(e).

  --boot loader=/custom/OVMF_CODE.fd,loader_ro=yes,loader_type=pflash,nvram_template=/custom/OVMF_VARS.fd,nvram=$HOME/.../guest-VARS.fd

This is (b)+(d).
>> (3) I just realized that a domain's name can change during its
>> lifetime. Renaming a domain becomes a problem when the varstore's
>> pathname is deduced from the domain's name. For this reason, the
>> auto-generation should use the domain's UUID (which never changes).
>> Michal, what do you think of the 3rd attached patch?
>>
>> I can resubmit these as standalone patches / series as well (the first
>> to virt-tools-list, and the last two to libvir-list), if that's
>> necessary.
>>
> Thanks for the virt-manager patch, I've applied it now.
Thanks!
Laszlo