On Tue, 2020-10-13 at 16:36 -0400, harry harry wrote:
> Hi Paolo and Sean,
> Thanks much for your prompt replies and clear explanations.
> On Tue, Oct 13, 2020 at 2:43 AM Paolo Bonzini <pbonzini(a)redhat.com> wrote:
> > No, the logic to find the HPA with a given HVA is the same as the
> > hardware logic to translate HVA -> HPA. That is it uses the host
> > "regular" page tables, not the nested page tables.
> >
> > In order to translate GPA to HPA, instead, KVM does not use the nested
> > page tables.
> I am curious why KVM does not directly use GPAs as HVAs and leverage
> the nested page tables to translate HVAs (i.e., GPAs) to HPAs. Is that
> because 1) the hardware logic of ``GPA -> [extended/nested page
> tables] -> HPA''[*] is different[**] from the hardware logic of ``HVA
> -> [host regular page tables] -> HPA'', and 2) if 1) is true, it is
> natural to reuse Linux's existing functionality to translate HVAs to
> HPAs through the regular page tables?
I would like to emphasize this again: the HVA space is not fully free
when a guest starts, since it already contains QEMU's heap, code, data,
and whatever else QEMU needs. The guest's GPA space, however, must be
available in full. E.g. if QEMU's heap starts at 0x40000, then with your
suggestion the guest could not have physical memory at 0x40000, which is
wrong. In theory this could be worked around by blacklisting these areas
via the ACPI/BIOS-provided memory map, but that would be very difficult
to maintain and is not worth it.
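
To illustrate the point, here is a minimal userspace sketch of how guest
RAM is registered with KVM via KVM_SET_USER_MEMORY_REGION (the 2 MiB
size and GPA base 0x0 are arbitrary example values, and error handling
is omitted). Note that guest_phys_addr (the GPA base) and userspace_addr
(the HVA base that mmap() happens to return) are completely independent,
which is exactly why GPAs cannot simply be reused as HVAs:

#include <fcntl.h>
#include <linux/kvm.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

int main(void)
{
    int kvm = open("/dev/kvm", O_RDWR);
    int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

    /* Back 2 MiB of guest RAM with anonymous host memory; the kernel
     * chooses the HVA, which will not equal the GPA below because low
     * HVAs are already occupied by the process itself. */
    size_t size = 2 << 20;
    void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

    struct kvm_userspace_memory_region region = {
        .slot            = 0,
        .guest_phys_addr = 0x0,       /* GPA base seen by the guest */
        .memory_size     = size,
        .userspace_addr  = (__u64)(unsigned long)hva, /* HVA base */
    };
    ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

    /* KVM translates GPA -> HVA through this slot (GPA -
     * guest_phys_addr + userspace_addr), resolves HVA -> HPA via the
     * host's regular page tables, and only then installs the
     * GPA -> HPA mapping in the nested (EPT/NPT) page tables. */
    return 0;
}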
Best regards,
Maxim Levitsky
> [*]: Here, the translation means the last step, in which the MMU
> translates a GVA's corresponding GPA to an HPA through the
> extended/nested page tables.
> [**]: To my knowledge, the hardware logic of ``GPA -> [extended/nested
> page tables] -> HPA'' seems to be the same as the hardware logic of
> ``HVA -> [host regular page tables] -> HPA''. I would appreciate it if
> you could point out any differences I have overlooked. Thanks!
> Best,
> Harry