On Thu, Apr 13, 2006 at 05:57:13PM -0400, Daniel Veillard wrote:
> On Thu, Apr 13, 2006 at 10:37:17PM +0100, Daniel P. Berrange wrote:
> >
> > I've not really got any formal data on it at this time - it was just a random
> > afternoon thought. I'll see if there's any useful way to get some data on
> > the effects.
>   If running as root locally with Xen, then getting the data is a simple
> hypercall; I would expect that to be nearly as fast as a gettimeofday(),
> and this won't increase precision. In the case of a non-root local process
> with an HTTP request to xend, the time spent could potentially be quite large
> (actually not bounded at all, due to potential I/O), and the quality of the
> data extracted will then be poor due to the time of acquisition - would
> that be worth it? The last corner case is remote monitoring, and there the
> time spent is most likely due to the network round trip, which in general
> is approximated by taking the midpoint between emission and reception;
> the time to do the 2 gettimeofday() calls is probably negligible.
>   So in those 3 kinds of extreme scenarios it's a bit unclear how adding the
> timestamp to the data would really help, except maybe as a convenience to
> the user layer.
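For reference, the midpoint approximation Daniel describes can be sketched in a
few lines of Python (the `timed_fetch` helper and the stand-in callable are
hypothetical names, purely illustrative; in practice `fn` would be the remote
RPC that fetches the domain stats):

```python
import time

def timed_fetch(fn):
    """Fetch remote data and estimate when it was sampled as the
    midpoint of the request/response round trip."""
    t_send = time.time()
    result = fn()          # stand-in for the remote stats RPC
    t_recv = time.time()
    timestamp = (t_send + t_recv) / 2.0
    return result, timestamp
```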
>   Actually getting some data about the costs of doing the call as root
> through the hypervisor versus the xend HTTP RPC would be an interesting
> datapoint in itself; I initially wanted to hack virsh to extract
> statistics about this but never took the time to do it :-)
So I wrote a crude micro-benchmark to just analyse the cost of calling
virDomainGetInfo under different circumstances. Basically the loop
does 10,000 calls to the method & reports the min, max, and avg times. Like
I said, the test is crude, but the results give a picture which is consistent
with what I'm seeing in practice (ie the applet consumes 5-10% CPU just
updating domain stats once a second).
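The shape of that loop, sketched in Python (the original benchmark was Perl;
`benchmark` is an illustrative name, and the libvirt call in the comment is
just the obvious candidate for `fn`, not part of this sketch):

```python
import time

def benchmark(fn, iterations=10000):
    """Call fn repeatedly; return (total, avg, min, max) in milliseconds."""
    times = []
    for _ in range(iterations):
        start = time.time()
        fn()
        times.append((time.time() - start) * 1000.0)
    total = sum(times)
    return total, total / iterations, min(times), max(times)

# In the real test the timed call would be the libvirt binding's
# equivalent of virDomainGetInfo, e.g. (hypothetical setup):
#   conn = libvirt.open(None); dom = conn.lookupByID(0)
#   total, avg, tmin, tmax = benchmark(dom.info)
total, avg, tmin, tmax = benchmark(lambda: None, 1000)
print("Total: %f  Avg: %f  Min: %f  Max: %f" % (total, avg, tmin, tmax))
```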
All times are milliseconds in the following results.
1. Running the test as root, so virDomainGetInfo does a hypercall:
Total: 239.397094726562
Avg: 0.0239397094726563
Min: 0.021484375
Max: 0.548974609375
2. Running the test unprivileged, so calls go via XenD/XenStoreD:
Total: 71546.1286621094
Avg: 7.15461286621094
Min: 6.1657958984375
Max: 45.3959228515625
So, as to be expected, the XenD/XenStoreD approach has significantly higher
overhead than direct HV calls. The question is whether a roughly 300x overhead
for unprivileged users is acceptable, or whether it can be improved to just one
order of magnitude worse.
As a proof of concept, I wrote a daemon which exposes the APIs from libvirt
as a DBus service, then adapted the test case to call the DBus service
rather than libvirt directly.
3. Running the DBus service as root, so libvirt can make HV calls
Total: 11280.2186035156
Avg: 1.12802186035156
Min: 1.0397216796875
Max: 6.5512939453125
So this basic DBus service (written in Perl BTW) has approx 50x overhead
compared to HV calls - significantly better than the existing HTTP/SExpr
RPC method. It'll be interesting to see how the new XML-RPC method compares
in performance.
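The overhead factors fall straight out of the per-call averages above; a quick
sanity check (variable names here are just labels for the three cases):

```python
hv_avg   = 0.0239397094726563   # case 1: direct hypercall, as root
xend_avg = 7.15461286621094     # case 2: via XenD/XenStoreD, unprivileged
dbus_avg = 1.12802186035156     # case 3: via the DBus service, as root

print("XenD vs HV: %.0fx" % (xend_avg / hv_avg))   # ~299x
print("DBus vs HV: %.0fx" % (dbus_avg / hv_avg))   # ~47x
```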
Getting back to the original point of my first mail: while there is definitely
a difference between calls via the HV and those via XenD/XenStore, even the worst
case is only ~45 milliseconds - with the applet taking measurements once per
second it looks like CPU utilization calculations will be accurate enough. So
there is no pressing need to add a timestamp to virDomainInfo.
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|