[Libvir] python libvirt methods to virDomainBlockStats and virDomainInterfaceStats
by Marco Sinhoreli
Hello all,
I'm using libvirt from CVS, and in libvirt.h there are
virDomainBlockStats and virDomainInterfaceStats calls. I believe they
are compiled into libvirtmod, so I'm trying to write two methods,
blockStats and interfaceStats, in the virDomain class in the
libvirt.py library file. I'm not a specialist in Python programming
and it's not working.
Below is the code I have written:
def blockStats(self, path):
    """Block device stats for virDomainBlockStats."""
    # The underlying C call needs the target device name from the
    # domain XML, e.g. 'hda'.
    ret = libvirtmod.virDomainBlockStats(self._o, path)
    if ret is None:
        raise libvirtError('virDomainBlockStats() failed', dom=self)
    return ret

def interfaceStats(self, path):
    """Network interface stats for virDomainInterfaceStats."""
    # The underlying C call needs the interface name, e.g. 'vif1.0'.
    ret = libvirtmod.virDomainInterfaceStats(self._o, path)
    if ret is None:
        raise libvirtError('virDomainInterfaceStats() failed', dom=self)
    return ret
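For what it's worth, the C entry points take a device/interface name as an extra argument, which may be why the calls fail when invoked with only the domain handle. The wrapper pattern can be exercised in isolation with a stand-in for the libvirtmod C extension (the stand-in and its return values are illustrative, not the real module):

```python
# Stand-in for the libvirtmod C extension, so the wrapper pattern can
# be checked without a hypervisor. The real virDomainBlockStats
# returns a 5-tuple: (rd_req, rd_bytes, wr_req, wr_bytes, errs).
class _FakeLibvirtmod:
    @staticmethod
    def virDomainBlockStats(dom, path):
        return (1, 512, 2, 1024, 0) if path else None

libvirtmod = _FakeLibvirtmod()

class virDomain:
    def __init__(self, handle=None):
        self._o = handle

    def blockStats(self, path):
        """Block device stats for virDomainBlockStats."""
        # 'path' is the target device name from the domain XML.
        ret = libvirtmod.virDomainBlockStats(self._o, path)
        if ret is None:
            raise RuntimeError('virDomainBlockStats() failed')
        return ret

stats = virDomain().blockStats('hda')
```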
Any help will be welcome.
Best regards
--
Marco Sinhoreli
17 years, 2 months
[Libvir] [RFC][PATCH 1/2] NUMA memory and topology patches
by beth kon
[PATCH 1/2] - add capability to access free memory information on each
NUMA cell.
Signed-off-by: Beth Kon (eak(a)us.ibm.com)
--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak(a)us.ibm.com
[Libvir] [PATCH] virsh NUMA command freecell
by Daniel Veillard
Useful for testing the virNodeGetCellsFreeMemory() call; it requires
the 3 preceding NUMA patches. The patch lacks the documentation update.
The function takes an optional argument, the cell number.
If no cell is provided, it prints the total free memory available
on the node. A more useful function would find the cell with the
most available memory, but this one should still be useful, and not
just for testing.
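The "cell with the most available memory" idea mentioned above is easy to sketch over the per-cell results that virNodeGetCellsFreeMemory() would return (a hypothetical helper, not part of the patch):

```python
def cell_with_most_free(free_mems):
    """Given per-cell free memory in bytes (index = cell number),
    return (cell, bytes) for the cell with the most free memory."""
    best = max(range(len(free_mems)), key=lambda i: free_mems[i])
    return best, free_mems[best]
```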
Daniel
--
Red Hat Virtualization group http://redhat.com/virtualization/
Daniel Veillard | virtualization library http://libvirt.org/
veillard(a)redhat.com | libxml GNOME XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine http://rpmfind.net/
[Libvir] [RFC][PATCH 2/2] NUMA memory and topology patches
by beth kon
[PATCH 2/2] - add capability to access topology information (cell to cpu
mapping) for each numa cell.
Signed-off-by: Beth Kon (eak(a)us.ibm.com)
--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak(a)us.ibm.com
[Libvir] [RFC][PATCH 0/2] NUMA memory and topology patches
by beth kon
I am reposting an updated patch for accessing NUMA cell available
memory, and posting the initial patch for accessing NUMA topology
information.
Again, this code is primarily untested. I am looking for comments in
parallel with the testing I will begin tomorrow.
These patches prereq Daniel Veillard's initial framework patch:
https://www.redhat.com/archives/libvir-list/2007-September/msg00069.html
[PATCH 1/2] - add capability to access free memory information on each
NUMA cell.
[PATCH 2/2] - add capability to access topology information (cell to cpu
mapping) for each numa cell.
I have not yet added the full node available memory API call that was
discussed on the list. As I understand it, the NUMA code is more urgent.
If you would like to get that code in too, I can create another patch
for it. My focus for the moment is testing.
Signed-off-by: Beth Kon (eak(a)us.ibm.com)
--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak(a)us.ibm.com
[Libvir] RFC PATCH - Initial NodeGetCellsFreeMemory patch
by beth kon
Here is my first pass at a patch for accessing the available memory
associated with each NUMA cell through libvirt.
The initial framework patch provided by Daniel Veillard is a prereq of
this patch:
https://www.redhat.com/archives/libvir-list/2007-September/msg00069.html
I have not yet tested this; I'm just distributing it for comments.
A few comments/questions:
1) I think I got the versioning stuff right but that deserves a close look.
2) In Daniel's patch, libvirt.h and libvirt.h.in were identical. I
assumed the patch was made before running autogen.sh and should only
contain changes to libvirt.h.in, so that's how I did mine. Let me know
if I misunderstood something here.
3) I had to put #ifdef PROXY around calls to virXenErrorFunc in
xenHypervisorNodeGetCellsFreeMemory to get this to build. I haven't
figured out how the proxy code works, so I'm not sure if this is the
right approach.
--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak(a)us.ibm.com
[Libvir] Remotely starting a fully-virtualized guest
by Meng Kuan
Hi,
I am trying to start a fully virtualized guest called "full1"
remotely. The connection URI I am using is "xen+ssh://root@dell1/".
(Note that I am able to start/stop/resume/shutdown para-virtualized
guests without any problems with this connection URI.)
The XML definition for the fully virtualized guest is this:
<domain type='xen'>
<name>full1</name>
<os>
<type>hvm</type>
<loader>/usr/lib/xen/boot/hvmloader</loader>
<boot dev='hd'/>
</os>
<memory>1048576</memory>
<vcpu>2</vcpu>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<devices>
<emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
<interface type='bridge'>
<source bridge='xenbr0'/>
<script path='vif-bridge'/>
</interface>
<disk type='file' device='disk'>
<driver name='file'/>
<source file='/xen/full1.img'/>
<target dev='hda'/>
</disk>
<graphics type='vnc' port='5904'/>
</devices>
</domain>
The guest failed to start up and this is what I found in the xend.log
file:
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
190) XendDomainInfo.create(['vm', ['name', 'full1']
, ['memory', '1024'], ['maxmem', '1024'], ['vcpus', '2'],
['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_crash
', 'restart'], ['image', ['hvm', ['kernel', '/usr/lib/xen/boot/
hvmloader'], ['device_model', '/usr/lib64/xen/bin/qemu-dm
'], ['vcpus', '2'], ['boot', 'c'], ['acpi', '1'], ['apic', '1'],
['pae', '1'], ['usb', '1'], ['vnc', '1'], ['vncdisplay'
, '4']]], ['device', ['vbd', ['dev', 'hda:disk'], ['uname', 'file:/
xen/full1.img'], ['mode', 'w']]], ['device', ['vif',
['bridge', 'xenbr0'], ['script', 'vif-bridge'], ['type', 'ioemu']]]])
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
296) parseConfig: config is ['vm', ['name', 'full1'
], ['memory', '1024'], ['maxmem', '1024'], ['vcpus', '2'],
['on_poweroff', 'destroy'], ['on_reboot', 'restart'], ['on_cras
h', 'restart'], ['image', ['hvm', ['kernel', '/usr/lib/xen/boot/
hvmloader'], ['device_model', '/usr/lib64/xen/bin/qemu-d
m'], ['vcpus', '2'], ['boot', 'c'], ['acpi', '1'], ['apic', '1'],
['pae', '1'], ['usb', '1'], ['vnc', '1'], ['vncdisplay
', '4']]], ['device', ['vbd', ['dev', 'hda:disk'], ['uname', 'file:/
xen/full1.img'], ['mode', 'w']]], ['device', ['vif',
['bridge', 'xenbr0'], ['script', 'vif-bridge'], ['type', 'ioemu']]]]
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
397) parseConfig: result is {'shadow_memory': None,
'start_time': None, 'uuid': None, 'on_crash': 'restart', 'on_reboot':
'restart', 'localtime': None, 'image': ['hvm', ['
kernel', '/usr/lib/xen/boot/hvmloader'], ['device_model', '/usr/lib64/
xen/bin/qemu-dm'], ['vcpus', '2'], ['boot', 'c'],
['acpi', '1'], ['apic', '1'], ['pae', '1'], ['usb', '1'], ['vnc',
'1'], ['vncdisplay', '4']], 'on_poweroff': 'destroy',
'bootloader_args': None, 'cpus': None, 'name': 'full1', 'backend':
[], 'vcpus': 2, 'cpu_weight': None, 'features': None,
'vcpu_avail': None, 'memory': 768, 'device': [('vbd', ['vbd', ['dev',
'hda:disk'], ['uname', 'file:/xen/full1.img'], ['
mode', 'w']]), ('vif', ['vif', ['bridge', 'xenbr0'], ['script', 'vif-
bridge'], ['type', 'ioemu']])], 'bootloader': None,
'cpu': None, 'maxmem': 768}
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
1264) XendDomainInfo.construct: None
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
1296) XendDomainInfo.initDomain: 91 1.0
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: boot, val: c
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: fda, val: None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: fdb, val: None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: soundhw, val:
None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: localtime,
val: None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: serial, val:
None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: std-vga, val:
None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: isa, val: None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: vcpus, val: 2
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: acpi, val: 1
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: usb, val: 1
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: usbdevice,
val: None
[2007-09-24 18:55:46 xend 4176] DEBUG (image:329) args: k, val: None
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] ERROR (XendDomainInfo:
202) Domain construction failed
Traceback (most recent call last):
File "/usr/lib64/python2.4/site-packages/xen/xend/
XendDomainInfo.py", line 195, in create
vm.initDomain()
File "/usr/lib64/python2.4/site-packages/xen/xend/
XendDomainInfo.py", line 1309, in initDomain
self.info['device'])
File "/usr/lib64/python2.4/site-packages/xen/xend/image.py", line
45, in create
return findImageHandlerClass(imageConfig)(vm, imageConfig,
deviceConfig)
File "/usr/lib64/python2.4/site-packages/xen/xend/image.py", line
75, in __init__
self.configure(imageConfig, deviceConfig)
File "/usr/lib64/python2.4/site-packages/xen/xend/image.py", line
272, in configure
self.dmargs += self.configVNC(imageConfig)
File "/usr/lib64/python2.4/site-packages/xen/xend/image.py", line
383, in configVNC
ret += ['-vnc', '%d' % vncdisplay]
TypeError: int argument required
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
1463) XendDomainInfo.destroy: domid=91
[2007-09-24 18:55:46 xend.XendDomainInfo 4176] DEBUG (XendDomainInfo:
1471) XendDomainInfo.destroyDomain(91)
[2007-09-24 18:55:46 xend 4176] ERROR (SrvBase:88) Request create
failed.
Traceback (most recent call last):
File "/usr/lib64/python2.4/site-packages/xen/web/SrvBase.py", line
85, in perform
return op_method(op, req)
File "/usr/lib64/python2.4/site-packages/xen/xend/server/
SrvDomainDir.py", line 82, in op_create
raise XendError("Error creating domain: " + str(ex))
XendError: Error creating domain: int argument required
It seems that xend got a string instead of an integer for the vnc port.
I modified the file "image.py" at line 383 from this:
ret += ['-vnc', '%d' % vncdisplay]
to this:
ret += ['-vnc', '%s' % vncdisplay]
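The TypeError can be reproduced in isolation; an alternative to switching to %s would be to coerce the value, which keeps the argument numeric (just a thought, not a tested xend patch):

```python
# xend hands vncdisplay over as a string ('4'), which '%d' rejects.
vncdisplay = '4'
try:
    arg = ['-vnc', '%d' % vncdisplay]       # what image.py line 383 does
except TypeError:
    # coerce instead of papering over the type with %s
    arg = ['-vnc', '%d' % int(vncdisplay)]
```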
I restarted xend for the change to take effect and then tried starting
the guest again. This time the guest gets created, but its state does
not change to "running".
# xm list
Name                ID   Mem(MiB)   VCPUs   State    Time(s)
Domain-0             0        451       2   r-----     356.7
full1               88        773       1   ------       0.0
I removed the following vnc line from the XML definition and tried again:
<graphics type='vnc' port='5904'/>
The guest gets created, but again its state is not "running".
According to the XML format documentation, the <graphics> tag is required.
At this point I am stumped. Any ideas what else I can try?
cheers,
mengkuan
Re: [Libvir] RFC PATCH - Initial NodeGetCellsFreeMemory patch
by beth kon
oops... meant to post to list as well
beth kon wrote:
> Daniel Veillard wrote:
>
>> On Thu, Sep 20, 2007 at 05:10:49PM -0400, beth kon wrote:
>>
>>
>>> Here is my first pass at a patch for accessing the available memory
>>> associated with each NUMA cell through libvirt.
>>>
>>
>> Thanks !
>>
>>
> Thanks for the quick comments!
>
>>> +struct xen_v2s4_availheap {
>>> + uint32_t min_bitwidth; /* Smallest address width (zero if don't
>>> care). */
>>> + uint32_t max_bitwidth; /* Largest address width (zero if don't
>>> care). */
>>>
>>
>> I'm a bit puzzled by those 2 fields; I still wonder what they are about :-)
>>
>>
>>
> I was puzzled too! These fields are related to the case when 32 bit
> guests need to reserve certain ranges of memory for DMA, for example.
> I checked with Ryan and he assured me we don't need to worry about
> those fields for these purposes.
>
>>> + int32_t node; /* NUMA node (-1 for sum across all nodes). */
>>> + uint64_t avail_bytes; /* Bytes available in the specified region. */
>>> +};
>>> +
>>> +typedef struct xen_v2s4_availheap xen_v2s4_availheap;
>>> +
>>>
>> [...]
>>
>>
> What does [...] mean? :-)
>
>>
>>
>>> +xenHypervisorNodeGetCellsFreeMemory(virConnectPtr conn, long long
>>> *freeMems,
>>> + int startCell, int maxCells)
>>> {
>>> - if ((conn == NULL) || (freeMems == NULL) || (nbCells < 0))
>>> - return -1;
>>> + xen_op_v2_sys op_sys;
>>> + int i, ret, nbCells;
>>> + virNodeInfo nodeInfo;
>>> + virNodeInfoPtr nodeInfoPtr = &nodeInfo;
>>> + xenUnifiedPrivatePtr priv;
>>> +
>>> + if ((conn == NULL) || (freeMems == NULL) || (maxCells < 0))
>>> + return -1;
>>>
>>
>> Hum, actually, that would catch the (maxCells == -1) so that won't work
>> and won't catch maxCells == 0 which could lead to a segfault. Maybe
>> (maxCells == 0) || (maxCells < -1) should be used instead.
>>
>>
>>
> Yes, you're right. I added maxCells = -1 and startCell as an
> afterthought, and it shows!
>
>>> + /* get actual number of cells */
>>> + if (xenDaemonNodeGetInfo(conn, nodeInfoPtr)) {
>>>
>>
>> Hum, since the number of cells is static, maybe this should be stored
>> permanently in a variable on init. The xenDaemon RPC will be orders
>> of magnitude
>> more expensive than the direct hypercall below.
>>
>>
>>
>>> + virXenError(VIR_ERR_XEN_CALL, " cannot determine actual number of
>>> cells", 0);
>>> + return -1;
>>> + }
>>> + nbCells = nodeInfoPtr->nodes;
>>> +
>>> + if (maxCells > nbCells)
>>> + maxCells = nbCells;
>>>
>>
>>
>>
> I wondered about that. Would you like me to make that change as part
> of this patch?
>
>> Hum ... maxCells is the number of entries in the array, I'm afraid
>> you will
>> need 2 counters or I misunderstood the way the API works (possible :-),
>> I would fill in freeMems[] starting at index 0 for startCell, and
>> going up.
>> That feels more coherent than leaving uninitialized values at the
>> start of
>> the array:
>>
>> for (i = startCell, j = 0;(i < nbCells) && (j < maxCells);i++,j++) {
>> op_sys.u.availheap.node = i;
>> ret = xenHypervisorDoV2Sys(priv->handle, &op_sys);
>> if (ret < 0)
>> return(-1);
>> freeMems[j] = op_sys.u.availheap.avail_bytes;
>> }
>> return (j);
>>
>>
>>
> Yes, right again!
>
> Thanks so much for the great feedback!
>
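The fill-from-index-0 semantics Daniel suggests, together with the (maxCells == 0) || (maxCells < -1) argument check from earlier in the review, can be sketched in Python (a hypothetical mirror of the C loop, with a list standing in for the per-cell hypercall results):

```python
def node_get_cells_free_memory(avail_bytes, start_cell, max_cells):
    """Mirror of the suggested C loop: fill the output array from
    index 0 for start_cell upward, returning the entries filled.
    avail_bytes stands in for the availheap hypercall results."""
    nb_cells = len(avail_bytes)
    # reject max_cells == 0 (would segfault) and max_cells < -1
    if max_cells == 0 or max_cells < -1:
        return None
    if start_cell < 0 or start_cell >= nb_cells:
        return None
    # -1 means "all cells"; also clamp to the actual cell count
    if max_cells == -1 or max_cells > nb_cells:
        max_cells = nb_cells
    free_mems = []
    i, j = start_cell, 0
    while i < nb_cells and j < max_cells:
        free_mems.append(avail_bytes[i])   # freeMems[j] = avail_bytes
        i += 1
        j += 1
    return free_mems
```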
--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak(a)us.ibm.com
[Libvir] How to deploy Virtual Machine via libvirt
by Omer Khalid
Hi,
I have been using libvirt in my Python software for a while, just to
get information about deployed domains. Usually I create a Xen
configuration file with all the parameters and trigger an "xm create -f
file" command to deploy the VM.
The problem I have noticed for some time is that as this command is
forwarded from Python to Xen, no error code is returned; I lose track
of whether the domain deployment request fails, which is rare but happens.
So I thought I would use libvirt for virtual machine deployment, and
wondered if there are Python bindings I could use. I also tried to find
some small example code for deploying a VM via libvirt on the libvirt
site, but didn't succeed. Can anybody help me out with their experience?
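For reference, a minimal sketch of what such a deployment helper can look like; the real entry point on a libvirt connection varies by version (createLinux in older releases, createXML in newer ones), and the stand-in connection below exists only so the helper can be exercised without a hypervisor:

```python
def deploy(conn, xml_desc):
    """Start a transient domain from an XML description, raising on
    failure instead of silently losing the error like 'xm create'."""
    dom = conn.createXML(xml_desc, 0)
    if dom is None:
        raise RuntimeError('domain creation failed')
    return dom

class _FakeConn:
    # stand-in connection; a real one would come from libvirt.open('xen:///')
    def createXML(self, xml_desc, flags):
        return 'dom-handle' if xml_desc else None

dom = deploy(_FakeConn(), '<domain type="xen">...</domain>')
```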
Thanks,
Omer
--
----------------------------------------------------------
CERN – European Organization for Nuclear
Research, IT Department, CH-1211
Geneva 23, Switzerland
Phone: +41 (0) 22 767 2224
Fax: +41 (0) 22 766 8683
E-mail : Omer.Khalid(a)cern.ch
Homepage: http://cern.ch/Omer.Khalid