Hello!
> I know Pavel Fedin was trying to revive kernel_irqchip=off once,
> but I don't know if that effort was abandoned or not.
It should work with the latest kernel; at least I posted patches and all of them
were applied, assuming nothing got broken during later rewrites.
The only missing part is generic timer support. There were problems with it, but
after the rewrite they can be addressed cleanly, without the need for any hacks.
The following patchset implements this on the kernel side, but it has never been
reviewed: http://www.spinics.net/lists/kvm/msg124539.html. I also have QEMU
support in my experimental tree and it works great: I can run a "virt" guest on
a Samsung proprietary board with a FrankenGIC, but since there was no interest,
I never polished it up and published it.
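If anyone wants to try it, the invocation would look roughly like the sketch
below. This is only a rough example, not a line taken from my tree: the image
paths, memory size and the rest of the command line are placeholders; the
relevant switch is kernel_irqchip=off (newer QEMU also accepts the
kernel-irqchip spelling):

  qemu-system-aarch64 \
      -machine virt,accel=kvm,kernel_irqchip=off \
      -cpu host -m 1024 \
      -kernel Image -append "console=ttyAMA0 root=/dev/vda" \
      -drive file=rootfs.img,format=raw,if=virtio \
      -nographic

With kernel_irqchip=off, KVM still runs the vCPUs, but the GIC is emulated by
QEMU in userspace, which is exactly why the timer support mentioned above is
needed.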
> I think it could be a nice-to-have, in order to help isolate bugs with KVM,
> but I agree running that way wouldn't be the norm.
IMHO it depends on what you want to achieve. If you strive for performance, then
yes, of course. But if you want to emulate some particular hardware on other
hardware, this can be the only way to do it, for example if you have GICv3-only
hardware. KVM without an in-kernel irqchip is still much better than TCG.
But yes, I never included it in libvirt. At one point I was thinking about
something like <gic version=off>, but perhaps it's not a good idea, because this
option is not ARM-specific: it's architecture-agnostic and applies to in-kernel
IRQ controller acceleration under KVM on any architecture. Whether it actually
works on a given platform is, IMHO, a different story.
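Just to illustrate the idea: such an element would sit in the domain <features>
block, next to the existing <gic version='2'/> style of element. The 'off' value
is purely hypothetical, libvirt does not actually accept it:

  <features>
    <!-- hypothetical value: would translate to kernel_irqchip=off for QEMU -->
    <gic version='off'/>
  </features>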
Kind regards,
Pavel Fedin
Senior Engineer
Samsung Electronics Research center Russia