On Mon, Jun 29, 2015 at 16:22:54 +0800, Luyao Huang wrote:
When we use iothreadinfo to get IOThread information, we get an
error like this:
# virsh iothreadinfo rhel7.0 --live
error: Unable to get domain IOThreads information
error: Unable to encode message payload
This is because virProcessGetAffinity() returns a bitmap whose map_len
is too big to send via RPC:
(gdb) p *map
$7 = {max_bit = 262144, map_len = 4096, map = 0x7f9b6c2c0a20}
To fix this issue, add a new parameter, maxcpu, to virProcessGetAffinity()
to let callers specify the maximum CPU number (on most machines there is
also no need to loop 262144 times checking whether each cpu is set). If
maxcpu is set to zero, virProcessGetAffinity() uses the default value
(262144 or 1024) to create the bitmap. This issue was introduced in
commit 825df8c.
Signed-off-by: Luyao Huang <lhuang(a)redhat.com>
---
src/qemu/qemu_driver.c | 4 ++--
src/util/virprocess.c | 7 ++++---
src/util/virprocess.h | 2 +-
3 files changed, 7 insertions(+), 6 deletions(-)
I think a better fix would be to make virBitmapToData smarter when
formatting a bitmap, by finding the last set bit rather than formatting
the full bitmap.
This would avoid re-introducing the maxcpu argument.
The diff for such a change follows:
diff --git a/src/util/virbitmap.c b/src/util/virbitmap.c
index 9abc807..7234f7e 100644
--- a/src/util/virbitmap.c
+++ b/src/util/virbitmap.c
@@ -498,9 +498,12 @@ virBitmapPtr virBitmapNewData(void *data, int len)
  */
 int virBitmapToData(virBitmapPtr bitmap, unsigned char **data, int *dataLen)
 {
-    int len;
+    ssize_t len;
 
-    len = (bitmap->max_bit + CHAR_BIT - 1) / CHAR_BIT;
+    if ((len = virBitmapLastSetBit(bitmap)) < 0)
+        len = 1;
+    else
+        len = (len + CHAR_BIT) / CHAR_BIT;
 
     if (VIR_ALLOC_N(*data, len) < 0)
         return -1;
Peter