[libvirt] [PATCH v2] lib: Add note that bulk stats API queries may overrun RPC buffers

Hint that the users should limit the number of VMs queried in the bulk
stats API.
---
v2:
- added a suggestion of the number of queried VMs (valid after bump to 32M message)

 src/libvirt-domain.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 310b91b37..aef061943 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -11315,6 +11315,10 @@ virConnectGetDomainCapabilities(virConnectPtr conn,
  * VIR_CONNECT_GET_ALL_DOMAINS_STATS_SHUTOFF and/or
  * VIR_CONNECT_GET_ALL_DOMAINS_STATS_OTHER for all other states.
  *
+ * Note that this API is prone to exceeding the maximum RPC message size on
+ * hosts with lots of VMs, so it's suggested to use virDomainListGetStats
+ * with a reasonable list of VMs as the argument.
+ *
  * Returns the count of returned statistics structures on success, -1 on error.
  * The requested data are returned in the @retStats parameter. The returned
  * array should be freed by the caller. See virDomainStatsRecordListFree.
@@ -11386,6 +11390,10 @@ virConnectGetAllDomainStats(virConnectPtr conn,
  * Note that any of the domain list filtering flags in @flags may be rejected
  * by this function.
  *
+ * Note that this API is prone to exceeding the maximum RPC message size if
+ * querying too many VMs with lots of statistics. It's suggested to query in
+ * batches of 10 VMs, which should be good enough for VMs with 3000 disks + networks.
+ *
  * Returns the count of returned statistics structures on success, -1 on error.
  * The requested data are returned in the @retStats parameter. The returned
  * array should be freed by the caller. See virDomainStatsRecordListFree.
-- 
2.12.2
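[Editor's note: the client-side batching the patch suggests can be sketched as below. This is a minimal, self-contained illustration of the batching arithmetic only; batch_count() and for_each_batch() are hypothetical names, and the collect callback stands in for the real virDomainListGetStats() call, which is not reproduced here.]

```c
#include <assert.h>
#include <stddef.h>

/* Number of batches needed to cover ndomains in groups of batch_size. */
static size_t
batch_count(size_t ndomains, size_t batch_size)
{
    if (batch_size == 0)
        return 0;
    return (ndomains + batch_size - 1) / batch_size;
}

/* Walk a domain list in slices of batch_size (e.g. the 10 VMs the
 * patch suggests).  collect() stands in for virDomainListGetStats()
 * and may be NULL in this sketch.  Returns the number of batches. */
static size_t
for_each_batch(size_t ndomains, size_t batch_size,
               void (*collect)(size_t first, size_t count))
{
    size_t nbatches = 0;
    size_t first;

    if (batch_size == 0)
        return 0;
    for (first = 0; first < ndomains; first += batch_size) {
        size_t count = ndomains - first;
        if (count > batch_size)
            count = batch_size;
        if (collect)
            collect(first, count);
        nbatches++;
    }
    return nbatches;
}
```

A real client would obtain the list via virConnectListAllDomains() and pass each slice to virDomainListGetStats().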

On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs
+ * with lots of statistics. It's suggested to query in batches of 10VMs, which
+ * should be good enough for VMs with 3000 disks + networks.
+ *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
-- 
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list

On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs
+ * with lots of statistics. It's suggested to query in batches of 10VMs, which
+ * should be good enough for VMs with 3000 disks + networks.
+ *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:

https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html

Michal

On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs
+ * with lots of statistics. It's suggested to query in batches of 10VMs, which
+ * should be good enough for VMs with 3000 disks + networks.
+ *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level; the API would just be syntax sugar for virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL).

On 05/23/2017 05:19 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs
+ * with lots of statistics. It's suggested to query in batches of 10VMs, which
+ * should be good enough for VMs with 3000 disks + networks.
+ *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level, the API would just be syntax sugar to virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL)
We can't guarantee atomicity of such call, so I don't see a reason for
us to implement it. On the other hand, it's us who knows the limits for
RPC, not users. So they might have some troubles writing it. But then
again, we would have some troubles too because we might be talking to an
older server with lower limits thus we'd need some guessing too.

Also, the flag would expose what we want to hide - RPC layer. It should
be no concern for users how we pass data around.

Michal

On Tue, May 23, 2017 at 05:29:28PM +0200, Michal Privoznik wrote:
On 05/23/2017 05:19 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs
+ * with lots of statistics. It's suggested to query in batches of 10VMs, which
+ * should be good enough for VMs with 3000 disks + networks.
+ *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level, the API would just be syntax sugar to virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL)
We can't guarantee atomicity of such call, so I don't see a reason for
Yes, that's why it would *only* happen if that flag was present. That's precisely why I added info about the flag.
us to implement it. On the other hand, it's us who knows the limits for RPC, not users. So they might have some troubles writing it. But then again, we would have some troubles too because we might be talking to an older server with lower limits thus we'd need some guessing too.
We would definitely need guessing. You cannot predict the size very nicely.
Also, the flag would expose what we want to hide - RPC layer. It should be no concern for users how we pass data around.
No, this has nothing to do with RPC. Let me "pseudocode" this:

int virConnectGetAllDomainStats(virConnectPtr conn, ..., unsigned int flags)
{
    bool legacy = flags & VIR_CONNECT_GET_ALL_DOMAIN_STATS_OK_IF_NOT_ATOMIC;
    ...
    ret = conn->driver->connectGetAllDomainStats(conn, ..., flags);
    if (ret < 0 && lastError == RPC_SIZE_ERROR) {
        if (legacy || !conn->driver->domainListGetStats)
            goto cleanup;

        virResetLastError();

        ndomains = conn->driver->connectListAllDomains(conn, ..., flags);
        if (ndomains < 0)
            goto cleanup;

        /* halve the number of domains requested until the RPC error is
         * avoided and next batches start from the number that worked
         * and for each batch call domainListGetStats() and then merge
         * the results.
         *
         * I was trying to implement it, but since it looks like nobody
         * else likes the idea, it would be wasting my time. */
    }
    ...
}

Or we can expose it as a new helper function. I see no RPC exposition here.
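[Editor's note: the "halve until it fits" strategy described in the pseudocode's comment can be sketched standalone. This is a hypothetical illustration only: the fits-check is reduced to a pure size comparison against an assumed per-message limit rather than an actual failed RPC call.]

```c
#include <assert.h>
#include <stddef.h>

/* "Halve the number of domains requested until the RPC error is
 * avoided" - modelled here as a size estimate against a hypothetical
 * message-size limit instead of retrying a real call. */
static size_t
largest_fitting_batch(size_t ndomains, size_t per_domain_size,
                      size_t rpc_limit)
{
    size_t batch = ndomains;

    if (batch == 0)
        return 0;
    while (batch > 1 && batch * per_domain_size > rpc_limit)
        batch /= 2;            /* retry with half as many domains */
    return batch;
}
```

In a real implementation the loop body would issue the batched call and inspect the error, then later batches would start from the size that worked.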

On 05/24/2017 12:55 PM, Martin Kletzander wrote:
<snip/>
No, this has nothing to do with RPC. Let me "pseudocode" this:
int virConnectGetAllDomainStats(virConnectPtr conn, ..., unsigned int flags)
{
    bool legacy = flags & VIR_CONNECT_GET_ALL_DOMAIN_STATS_OK_IF_NOT_ATOMIC;
The mere presence of the flag says we have a limit on the RPC layer that
users need to pass it in order to avoid. And if we ever introduce the
flag, all users will use it in order to not have to care about it
anymore.

Michal

On Tue, May 23, 2017 at 17:19:53 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs + * with lots of statistics. It's suggested to query in batches of 10VMs, which + * should be good enough for VMs with 3000 disks + networks. + *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level, the API would just be syntax sugar to virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL)
Also, compared to a full fragmentation of the returned data, this would result in a worst-case-scenario memory usage of MAX_SIZE * NVMS_QUERIED_IN_ORIGINAL_CALL, compared to the unbounded memory use of the full fragmentation approach.

On Tue, May 23, 2017 at 05:29:52PM +0200, Peter Krempa wrote:
On Tue, May 23, 2017 at 17:19:53 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs + * with lots of statistics. It's suggested to query in batches of 10VMs, which + * should be good enough for VMs with 3000 disks + networks. + *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level, the API would just be syntax sugar to virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL)
Also compared to a full fragmentation of the returned data, this would result into a worst-case-scenario memory usage of MAX_SIZE * NVMS_QUERIED_IN_ORIGINAL_CALL, when compared to an unbounded memory use of the full fragmentation approach.
If I get what you are saying, then the same would happen if the mgmt app (or client) implemented it themselves. We would basically just provide the guessing logic.

On Wed, May 24, 2017 at 12:32:05 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:29:52PM +0200, Peter Krempa wrote:
On Tue, May 23, 2017 at 17:19:53 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs + * with lots of statistics. It's suggested to query in batches of 10VMs, which + * should be good enough for VMs with 3000 disks + networks. + *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level, the API would just be syntax sugar to virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL)
Also compared to a full fragmentation of the returned data, this would result into a worst-case-scenario memory usage of MAX_SIZE * NVMS_QUERIED_IN_ORIGINAL_CALL, when compared to an unbounded memory use of the full fragmentation approach.
If I get what you are saying, then the same would happen if the mgmt app (or client) implemented it themselves. We would basically just provide the guessing logic.
Yes. I don't really think it's worth doing this. Besides, the "logic" would be that we'd call it one by one, to get the full buffer size for every single VM.

On 05/24/2017 12:32 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:29:52PM +0200, Peter Krempa wrote:
On Tue, May 23, 2017 at 17:19:53 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
On 05/23/2017 04:35 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 04:23:30PM +0200, Peter Krempa wrote:
[...]
+ * Note that this API is prone to exceeding maximum RPC if querying too many VMs + * with lots of statistics. It's suggested to query in batches of 10VMs, which + * should be good enough for VMs with 3000 disks + networks. + *
Coming to think about it... Why don't we just batch this ourselves under the hood and just return the merged result?
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
Not on the RPC level, the API would just be syntax sugar to virDomainListGetStats() if a flag was passed (e.g. VIR_DOMAIN_GET_ALL_STATS_I_DONT_REALLY_CARE_IF_THIS_IS_DONE_IN_ONE_LIBVIRT_CALL)
Also compared to a full fragmentation of the returned data, this would result into a worst-case-scenario memory usage of MAX_SIZE * NVMS_QUERIED_IN_ORIGINAL_CALL, when compared to an unbounded memory use of the full fragmentation approach.
If I get what you are saying, then the same would happen if the mgmt app (or client) implemented it themselves. We would basically just provide the guessing logic.
That's quite exact. I mean the word 'guessing'. We can't really provide
reliable way of dealing with what you're suggesting (unless we cut the
limit really small) nor we can guarantee atomicity. Therefore I think it
would be a waste of time to work on this. Yes, it can be done, but the
benefits are pretty small IMO.

Michal

On Wed, May 24, 2017 at 12:49:58 +0200, Michal Privoznik wrote:
On 05/24/2017 12:32 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:29:52PM +0200, Peter Krempa wrote:
On Tue, May 23, 2017 at 17:19:53 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
[...]
Also compared to a full fragmentation of the returned data, this would result into a worst-case-scenario memory usage of MAX_SIZE * NVMS_QUERIED_IN_ORIGINAL_CALL, when compared to an unbounded memory use of the full fragmentation approach.
If I get what you are saying, then the same would happen if the mgmt app (or client) implemented it themselves. We would basically just provide the guessing logic.
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Yes, I agree. Therefore I opted to add the notice for this API.

On Wed, May 24, 2017 at 12:54:26PM +0200, Peter Krempa wrote:
On Wed, May 24, 2017 at 12:49:58 +0200, Michal Privoznik wrote:
On 05/24/2017 12:32 PM, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:29:52PM +0200, Peter Krempa wrote:
On Tue, May 23, 2017 at 17:19:53 +0200, Martin Kletzander wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
[...]
Also compared to a full fragmentation of the returned data, this would result into a worst-case-scenario memory usage of MAX_SIZE * NVMS_QUERIED_IN_ORIGINAL_CALL, when compared to an unbounded memory use of the full fragmentation approach.
If I get what you are saying, then the same would happen if the mgmt app (or client) implemented it themselves. We would basically just provide the guessing logic.
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Yes, I agree. Therefore I opted to add the notice for this API.
I was just discussing an idea on a side. I'm still OK with the notice. I'm not sure if that's exactly what Dan had in mind, though.

On Wed, May 24, 2017 at 12:49:58PM +0200, Michal Privoznik wrote:
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Why is atomicity a problem? Just structure the libvirtd messages so that
you have:

COLLECT_THE_STATS - saves the stats into an internal buffer in libvirtd
                    and returns a handle and a number of stat items
RETURN_THE_STATS - returns partial subset of previously collected stats,
                   called multiple times to transfer the data back to
                   libvirt
FREE_THE_STATS - free the internal buffer

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-builder quickly builds VMs from scratch
http://libguestfs.org/virt-builder.1.html
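[Editor's note: the three-call scheme sketched above can be modelled as a toy, in-memory exercise. All names and structures below are hypothetical; none of this is a real libvirt RPC procedure. The point is the shape: collect into a server-side buffer once, drain it in bounded chunks, then release it.]

```c
#include <assert.h>
#include <stdlib.h>

/* Hypothetical server-side handle holding collected stats. */
struct stats_buf {
    int *items;
    size_t nitems;
};

/* COLLECT_THE_STATS: build the buffer, return the item count.
 * Dummy payload: item i just holds the value i. */
static size_t
collect_the_stats(struct stats_buf *h, size_t nitems)
{
    h->items = malloc(nitems * sizeof(*h->items));
    if (!h->items) {
        h->nitems = 0;
        return 0;
    }
    h->nitems = nitems;
    for (size_t i = 0; i < nitems; i++)
        h->items[i] = (int)i;
    return nitems;
}

/* RETURN_THE_STATS: copy at most max items starting at offset;
 * called repeatedly until the whole buffer has been transferred. */
static size_t
return_the_stats(const struct stats_buf *h, size_t offset,
                 size_t max, int *out)
{
    size_t n = 0;

    while (n < max && offset + n < h->nitems) {
        out[n] = h->items[offset + n];
        n++;
    }
    return n;
}

/* FREE_THE_STATS: drop the server-side buffer. */
static void
free_the_stats(struct stats_buf *h)
{
    free(h->items);
    h->items = NULL;
    h->nitems = 0;
}
```

Each RETURN_THE_STATS chunk stays under a fixed size, which is what keeps any single RPC message bounded in this scheme.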

On 05/24/2017 02:47 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 12:49:58PM +0200, Michal Privoznik wrote:
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Why is atomicity a problem?
The atomicity steps in depending on what level we are talking about serialization. If we do it the way Martin suggested (= on public API level) then the list of domains may change between two iterations of the suggested loop. Thus the result obtained would not reflect the reality. However, if we are talking on RPC level, then there's no atomicity problem as RPC is 'blind' to the data (it doesn't care what data is sent).
Just structure the libvirtd messages so that you have:
COLLECT_THE_STATS - saves the stats into an internal buffer in libvirtd
                    and returns a handle and a number of stat items
RETURN_THE_STATS - returns partial subset of previously collected stats,
                   called multiple times to transfer the data back to
                   libvirt
FREE_THE_STATS - free the internal buffer
This is exactly what I was proposing in my e-mail that I linked in the
other thread. You just wrote it more cute. Yet again, what's the
gain/advantage of this over one big message?

Michal

On Wed, May 24, 2017 at 04:16:45PM +0200, Michal Privoznik wrote:
On 05/24/2017 02:47 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 12:49:58PM +0200, Michal Privoznik wrote:
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Why is atomicity a problem?
The atomicity steps in depending on what level we are talking about serialization. If we do it the way Martin suggested (= on public API level) then the list of domains may change between two iterations of the suggested loop. Thus the result obtained would not reflect the reality. However, if we are talking on RPC level, then there's no atomicity problem as RPC is 'blind' to the data (it doesn't care what data is sent).
Just structure the libvirtd messages so that you have:
COLLECT_THE_STATS - saves the stats into an internal buffer in libvirtd
                    and returns a handle and a number of stat items
RETURN_THE_STATS - returns partial subset of previously collected stats,
                   called multiple times to transfer the data back to
                   libvirt
FREE_THE_STATS - free the internal buffer
This is exactly what I was proposing in my e-mail that I linked in the other thread. You just wrote it more cute. Yet again, what's the gain/advantage of this over one big message?
In the libguestfs case there is an actual security concern. The daemon
runs in the untrusted context of the guest, where a malicious guest
filesystem or program could (in some cases quite easily) send back
arbitrarily large messages to the library, tying up infinite resources
on the host.

In libvirt the danger is possibly more on the other side (a modified
library sends an infinitely large message to libvirtd). There's also an
issue of libvirt connecting to a compromised remote host.

Whether having a maximum message size prevents all such attacks I'm less
clear about, but it probably helps against simple ones.

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
virt-p2v converts physical machines to virtual machines. Boot with a
live CD or over the network (PXE) and turn machines into KVM guests.
http://libguestfs.org/virt-v2v

On 05/24/2017 04:58 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 04:16:45PM +0200, Michal Privoznik wrote:
On 05/24/2017 02:47 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 12:49:58PM +0200, Michal Privoznik wrote:
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Why is atomicity a problem?
The atomicity steps in depending on what level we are talking about serialization. If we do it the way Martin suggested (= on public API level) then the list of domains may change between two iterations of the suggested loop. Thus the result obtained would not reflect the reality. However, if we are talking on RPC level, then there's no atomicity problem as RPC is 'blind' to the data (it doesn't care what data is sent).
Just structure the libvirtd messages so that you have:
COLLECT_THE_STATS - saves the stats into an internal buffer in libvirtd
                    and returns a handle and a number of stat items
RETURN_THE_STATS - returns partial subset of previously collected stats,
                   called multiple times to transfer the data back to
                   libvirt
FREE_THE_STATS - free the internal buffer
This is exactly what I was proposing in my e-mail that I linked in the other thread. You just wrote it more cute. Yet again, what's the gain/advantage of this over one big message?
In the libguestfs case there is an actual security concern. The daemon runs in the untrusted context of the guest, where a malicious guest filesystem or program could (in some cases quite easily) send back arbitrarily large messages to the library, tieing up infinite resources on the host.
In libvirt the danger is possibly more on the other side (modified library sends infinitely large message to libvirtd). There's also an issue of libvirt connecting to a compromised remote host.
Whether having a maximum message size prevents all such attacks I'm less clear about, but it probably helps against simple ones.
I'm not a security expert but I view both approaches the same from a
security POV. I mean, I'm all up for limits. Don't get me wrong there.
It's just that we have two options:

a) one API call = one RPC call
b) one API call split across multiple RPC messages

So, it's obvious that in case a) we need a bigger limit for the RPC
message to fit all the data. In case b) the limit for the RPC message
can be smaller, but in turn we need a limit on the maximum number of
messages per one API call. Both of these approaches consume about the
same memory on src & dst (I think option b) consumes slightly more
because of all the extra headers sent with each message, but that's not
the point). Now, if I were an attacker I could very well send one big
message as well as N small messages to fill up the buffers. Therefore I
think the attack surface is the same for both of these approaches.

a) is easier to implement IMO.

Michal

On Wed, May 24, 2017 at 05:12:06PM +0200, Michal Privoznik wrote:
On 05/24/2017 04:58 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 04:16:45PM +0200, Michal Privoznik wrote:
On 05/24/2017 02:47 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 12:49:58PM +0200, Michal Privoznik wrote:
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Why is atomicity a problem?
The atomicity steps in depending on what level we are talking about serialization. If we do it the way Martin suggested (= on public API level) then the list of domains may change between two iterations of the suggested loop. Thus the result obtained would not reflect the reality. However, if we are talking on RPC level, then there's no atomicity problem as RPC is 'blind' to the data (it doesn't care what data is sent).
Just structure the libvirtd messages so that you have:
COLLECT_THE_STATS - saves the stats into an internal buffer in libvirtd
                    and returns a handle and a number of stat items
RETURN_THE_STATS - returns partial subset of previously collected stats,
                   called multiple times to transfer the data back to
                   libvirt
FREE_THE_STATS - free the internal buffer
This is exactly what I was proposing in my e-mail that I linked in the other thread. You just wrote it more cute. Yet again, what's the gain/advantage of this over one big message?
In the libguestfs case there is an actual security concern. The daemon runs in the untrusted context of the guest, where a malicious guest filesystem or program could (in some cases quite easily) send back arbitrarily large messages to the library, tieing up infinite resources on the host.
In libvirt the danger is possibly more on the other side (modified library sends infinitely large message to libvirtd). There's also an issue of libvirt connecting to a compromised remote host.
Whether having a maximum message size prevents all such attacks I'm less clear about, but it probably helps against simple ones.
I'm not a security expert but I view both approach the same from security POV. I mean, I'm all up for limits. Don't get me wrong there. It's just that we have two options:
a) one API call = one RPC call b) one API call split across multiple RPC messages
So, it's obvious that in case a) we need bigger limit for the RPC message to fit all the data. In case b) the limit for the RPC message can be smaller, but in turn we need limit for the maximum messages per one API call. Both of these approaches consume about the same memory on src & dst (I think option b) does consume slightly more because of all the extra headers sent with each message, but that's not the point). Now, if I were an attacker I can very well send one big message as well as N small messages to fill up the buffers. Therefore I think the attack surface is the same for both of these approaches.
a) is easier to implement IMO.
You are forgetting one thing. If b) is only utilized on the way from the daemon to the client, then you can keep the number of messages per RPC call unlimited without any security concern. The only thing that would have to get compromised is the daemon and if that happens, there are bigger issues than DoS'ing the client. It would actually solve all of our current issues (not talking about client starting a domain with 10,000 disks, of course). Martin

On 05/24/2017 09:30 PM, Martin Kletzander wrote:
On Wed, May 24, 2017 at 05:12:06PM +0200, Michal Privoznik wrote:
On 05/24/2017 04:58 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 04:16:45PM +0200, Michal Privoznik wrote:
On 05/24/2017 02:47 PM, Richard W.M. Jones wrote:
On Wed, May 24, 2017 at 12:49:58PM +0200, Michal Privoznik wrote:
That's quite exact. I mean the word 'guessing'. We can't really provide reliable way of dealing with what you're suggesting (unless we cut the limit really small) nor we can guarantee atomicity. Therefore I think it would be a waste of time to work on this. Yes, it can be done, but the benefits are pretty small IMO.
Why is atomicity a problem?
The atomicity steps in depending on what level we are talking about serialization. If we do it the way Martin suggested (= on public API level) then the list of domains may change between two iterations of the suggested loop. Thus the result obtained would not reflect the reality. However, if we are talking on RPC level, then there's no atomicity problem as RPC is 'blind' to the data (it doesn't care what data is sent).
Just structure the libvirtd messages so that you have:
COLLECT_THE_STATS - saves the stats into an internal buffer in libvirtd and returns a handle and a number of stat items RETURN_THE_STATS - returns partial subset of previously collected stats, called multiple times to transfer the data back to libvirt FREE_THE_STATS - free the internal buffer
This is exactly what I was proposing in my e-mail that I linked in the other thread. You just wrote it more cute. Yet again, what's the gain/advantage of this over one big message?
In the libguestfs case there is an actual security concern. The daemon runs in the untrusted context of the guest, where a malicious guest filesystem or program could (in some cases quite easily) send back arbitrarily large messages to the library, tieing up infinite resources on the host.
In libvirt the danger is possibly more on the other side (modified library sends infinitely large message to libvirtd). There's also an issue of libvirt connecting to a compromised remote host.
Whether having a maximum message size prevents all such attacks I'm less clear about, but it probably helps against simple ones.
I'm not a security expert but I view both approach the same from security POV. I mean, I'm all up for limits. Don't get me wrong there. It's just that we have two options:
a) one API call = one RPC call b) one API call split across multiple RPC messages
So, it's obvious that in case a) we need bigger limit for the RPC message to fit all the data. In case b) the limit for the RPC message can be smaller, but in turn we need limit for the maximum messages per one API call. Both of these approaches consume about the same memory on src & dst (I think option b) does consume slightly more because of all the extra headers sent with each message, but that's not the point). Now, if I were an attacker I can very well send one big message as well as N small messages to fill up the buffers. Therefore I think the attack surface is the same for both of these approaches.
a) is easier to implement IMO.
You are forgetting one thing. If b) is only utilized on the way from the daemon to the client, then you can keep the number of messages per RPC call unlimited without any security concern. The only thing that would have to get compromised is the daemon and if that happens, there are bigger issues than DoS'ing the client. It would actually solve all of our current issues (not talking about client starting a domain with 10,000 disks, of course).
That's the same as dropping the limit for one message if it's sent from server to client. But I don't think that's a good idea. The limits are there not just to protect us from attackers (that's actually the transport layer's job) but to protect us from bugs too, i.e. if we have a bug and one side starts sending garbage, the other side is protected from consuming all available memory. Therefore I don't think dropping limits in whatever direction is a good idea.

But I don't want to stop any attempts in this area. Feel free to work on and propose patches that split large messages into smaller ones. Just a couple of hints: since RPC is invisible to users, I think the solution should be transparent to users. That means no flag passed to an API. No such flag as SPLIT_LARGE_MESSAGES. Also, the API fixed here is probably the most likely to hit the limit, but others are good candidates for hitting the limit too. Therefore, if implementing this, it should be generic enough that any RPC call can actually use it. Just like I'm describing in the linked thread. In fact, what I am suggesting is extending our RPC protocol so that potentially every API uses it whenever needed. Then, in order to keep compatibility with older clients/servers we can have a connection feature, say VIR_DRV_FEATURE_SPLIT_MESSAGES, to signal that a side supports this feature.

Then again, I don't see a difference to one huge message, but let's not make that stop you from working on it. Maybe I am forgetting something and we'll find out when we have the code written.

Michal

On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
I don't think this is a reason at all. Libguestfs uses an RPC system which was modelled on the libvirt one, and has exactly the same problem with message size limits, except smaller -- 4MB -- and we've never had to increase it.

We get around this by batching operations over messages as necessary (eg [1]). This adds a little complexity in the implementation of the API, but the point is that the complexity is entirely hidden from users of the APIs.

Rich.

[1] https://github.com/libguestfs/libguestfs/blob/master/lib/file.c#L375-L415

In this code, ‘guestfs_impl_lstatnslist’ is the publicly visible API (it's the implementation of the public API guestfs_lstatnslist). ‘guestfs_internal_lstatnslist’ is the message which travels over the RPC. The batching of 1000 requests per message was chosen based on pathname limits on Linux so that each request will always fit into a single message.

--
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
Read my programming and virtualization blog: http://rwmj.wordpress.com
libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org
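The same batching idea can also be driven by a byte budget rather than a fixed item count: pack variable-size records into messages until the next record would exceed the limit, then start a new message. A small sketch of just that packing arithmetic (nothing libguestfs- or libvirt-specific; `count_messages` is a hypothetical helper):

```c
#include <assert.h>
#include <stddef.h>

/* Count how many messages are needed to carry `n` records of the given
 * sizes without any message exceeding `limit` bytes. A record larger
 * than the limit gets a message of its own (and would be rejected by
 * the peer in a real transport). */
static size_t count_messages(const size_t *sizes, size_t n, size_t limit)
{
    size_t messages = 0;
    size_t used = 0;
    size_t i;

    for (i = 0; i < n; i++) {
        if (messages == 0 || used + sizes[i] > limit) {
            messages++;          /* start a new message */
            used = 0;
        }
        used += sizes[i];
    }
    return messages;
}
```

Choosing a fixed count (1000 requests per message, as above) is the degenerate case of this: a worst-case record size times the count is known to fit under the limit, so no per-record accounting is needed at runtime.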

On Wed, May 24, 2017 at 01:42:44PM +0100, Richard W.M. Jones wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
I don't think this is a reason at all.
Libguestfs uses an RPC system which was modelled on the libvirt one, and has exactly the same problem with message size limits, except smaller -- 4MB and we've never had to increase it.
We get around this by batching operations over messages as necessary (eg [1]). This adds a little complexity in the implementation of the API, but the point is that the complexity is entirely hidden to users of the APIs.
Rich.
[1] https://github.com/libguestfs/libguestfs/blob/master/lib/file.c#L375-L415
In this code, ‘guestfs_impl_lstatnslist’ is the publicly visible API (it's the implementation of the public API guestfs_lstatnslist). ‘guestfs_internal_lstatnslist’ is the message which travels over the RPC. The batching of 1000 requests per message was chosen based on pathname limits on Linux so that each request will always fit into a single message.
That's basically my point #2 from the RFC/v1 that we could just pass the data through a virStream (it would have to be as text, which is almost as bad as virTypedParam). What you suggest would not be that difficult, but it sounds to me like something others dismissed, IIRC.
-- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com libguestfs lets you edit virtual machines. Supports shell scripting, bindings from many languages. http://libguestfs.org
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list

On 05/24/2017 02:42 PM, Richard W.M. Jones wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
I don't think this is a reason at all.
Libguestfs uses an RPC system which was modelled on the libvirt one, and has exactly the same problem with message size limits, except smaller -- 4MB and we've never had to increase it.
So you're basically doing what I'm describing in point a): transforming the problem into another one -- the maximum number of 4MB messages.
We get around this by batching operations over messages as necessary (eg [1]). This adds a little complexity in the implementation of the API, but the point is that the complexity is entirely hidden to users of the APIs.
Exactly. A little complexity. That's in your case; in our case it would be slightly more complex IMO (although I've never tried to write the code, so I can't really say). BUT, more importantly, why even bother when we can just raise the limit on the message?

The limits are there so that if one side starts sending malicious packets it won't eat all the memory on the other side. Well, what if the attacker is slightly more ingenious and sends N messages that each fit the size limit for one message? I don't really see a difference between raising the limit for one message and splitting the data into multiple messages.

Michal

On Wed, May 24, 2017 at 04:09:35PM +0200, Michal Privoznik wrote:
On 05/24/2017 02:42 PM, Richard W.M. Jones wrote:
On Tue, May 23, 2017 at 05:07:40PM +0200, Michal Privoznik wrote:
Because:
https://www.redhat.com/archives/libvir-list/2017-May/msg00088.html
I don't think this is a reason at all.
Libguestfs uses an RPC system which was modelled on the libvirt one, and has exactly the same problem with message size limits, except smaller -- 4MB and we've never had to increase it.
So you're basically doing what I'm describing in point a). Transforming problem to another one. The maximum number of 4MB messages.
We get around this by batching operations over messages as necessary (eg [1]). This adds a little complexity in the implementation of the API, but the point is that the complexity is entirely hidden to users of the APIs.
Exactly. A little complexity. That's in your case. In our case it would be slightly more complex IMO (although I've never tried to write the code, so I cannot say really). BUT, more importantly why even bother when we can just raise the limit of the message? The limits are there so that if one side starts sending malicious packets it won't eat all the memory on the other side. Well, what if the attacker is slightly more ingenious and sends N messages that fit size limit for one message? I don't really see a difference between raising limit for one message and splitting the data into multiple messages.
I understand this as: the only direction this would go is daemon -> client. And the daemon cannot transfer more messages than it has data for. The only thing we would need to make sure doesn't happen is the daemon keeping the allocated data while the client requests yet more data to be allocated. Basically, error out if the client calls yet another API that uses this mechanism *and* there is still some data allocated and not read.
Michal
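The guard described above (refuse a new chunked reply while a previous one is still allocated and undrained) can be sketched as a tiny per-client state check. All names here are hypothetical; this is not libvirt daemon code, just the shape of the bookkeeping:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical per-client state: at most one chunked reply may be
 * outstanding at a time. */
struct client {
    bool reply_pending;   /* data allocated server-side, not yet read out */
};

/* Begin a new chunked reply; error out if the previous one
 * hasn't been fully drained yet, bounding server-side memory
 * to one buffered reply per client. */
static int begin_chunked_reply(struct client *c)
{
    if (c->reply_pending)
        return -1;
    c->reply_pending = true;
    return 0;
}

/* Client finished reading the last chunk: release the slot. */
static void finish_chunked_reply(struct client *c)
{
    c->reply_pending = false;
}
```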
participants (4)
- Martin Kletzander
- Michal Privoznik
- Peter Krempa
- Richard W.M. Jones