[libvirt-users] Collecting system stats
by Sam Giraffe
Hi,
I am trying to create a report across all my KVM hypervisors using the libvirt
API, and I need some assistance figuring out the right API calls. The
report will contain the total available memory, CPU and disk across all
hypervisors, versus the amount used by the virtual machines. This report
will help us figure out whether we are running low on hypervisor disk, CPU or
memory and need to add more.
I can't find the libvirt API call to get system info, such as total memory
in the system, free memory, total disk space, available disk space, etc.
There is virDomainInfo, which gives memory figures for a particular
VM; I guess I can walk the active/inactive domain lists and then call
virDomainGetInfo() on each of the VMs, but that does not give me hypervisor
stats. I am wondering if I will have to enable SNMP and use that to get
hypervisor stats.
I am creating this report remotely, on a reporting machine that will
connect to each hypervisor over one of the available remote libvirt
transports.
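For reference, a minimal sketch of the node- and pool-level calls that look relevant here: virNodeGetInfo() for total memory and CPU count, virNodeGetFreeMemory() for free memory, and virStoragePoolGetInfo() for disk capacity per storage pool. The qemu+ssh URI, the pool name "default" and the minimal error handling are only placeholders for illustration.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Placeholder URI; any remote transport (ssh, tls, tcp) works the same way. */
    virConnectPtr conn = virConnectOpenReadOnly("qemu+ssh://hypervisor1/system");
    if (!conn)
        return 1;

    /* Total memory (KiB) and CPU count of the hypervisor node. */
    virNodeInfo node;
    if (virNodeGetInfo(conn, &node) == 0)
        printf("total memory: %lu KiB, CPUs: %u\n", node.memory, node.cpus);

    /* Free memory on the hypervisor, in bytes. */
    printf("free memory: %llu bytes\n", virNodeGetFreeMemory(conn));

    /* Disk figures come from storage pools rather than a node-level call. */
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "default");
    if (pool) {
        virStoragePoolInfo pinfo;
        if (virStoragePoolGetInfo(pool, &pinfo) == 0)
            printf("pool capacity: %llu bytes, available: %llu bytes\n",
                   pinfo.capacity, pinfo.available);
        virStoragePoolFree(pool);
    }

    virConnectClose(conn);
    return 0;
}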
Thanks
[libvirt-users] how to mount /dev/shm on system container
by Aarti Sawant
I have created a system container, test1, and am trying to mount /dev/shm
inside it.
<domain type='lxc'>
  <name>test1</name>
  <memory>102400</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <vcpu>1</vcpu>
  <devices>
    <console type='pty'/>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/lxc/test1/'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
Network settings on the host:
/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="dhcp"
BRIDGE="br0"
HWADDR="08:00:27:97:D6:35"
NM_CONTROLLED="no"
ONBOOT="yes"
TYPE="Ethernet"
UUID="fe9d7236-f418-47ab-b472-6e6caabdd807"
/etc/sysconfig/network-scripts/ifcfg-br0
DEVICE="br0"
TYPE="Bridge"
USERCTL="no"
ONBOOT="yes"
BOOTPROTO="dhcp"
NM_CONTROLLED="no"
Network settings in the container:
/lxc/test1/etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
TYPE=Ethernet
BOOTPROTO=dhcp
PREFIX=24
USERCTL=yes
IPV6INIT=no
vim /lxc/test1/etc/fstab
none /dev/shm tmpfs defaults 0 0
chroot /lxc/test1
mkdir -m 1777 /dev/shm
After mounting /dev/shm in the container, the container hangs.
Has anyone tried to mount /dev/shm this way?
Thanks,
Aarti Sawant
NTTDATA OSS Center Pune
Re: [libvirt-users] virDomainGetInfo() returns wrong domain state
by Panday Ritesh Sharma (rpanday)
Hi Libvirt support team,
Could you please help me with the query below.
Regards
Ritesh Sharma
From: Panday Ritesh Sharma (rpanday)
Sent: Tuesday, September 24, 2013 8:22 PM
To: 'libvir-list(a)redhat.com'; 'libvirt-users(a)redhat.com'; Vinay Shankarkumar (vinays)
Cc: Basavaraj Bendigeri (bbendige); q-se-dev(mailer list)
Subject: virDomainGetInfo() returns wrong domain state
Hi Team,
I have written the code below to get the VM state at run time. Although the VM is in the shut-off state, when I use virDomainGetInfo() I get the state as running. Could you please let me know what I am doing wrong? To check the actual VM state I used 'virsh list', and it clearly shows the VM is shut off. Please find the log and code snippet below.
Log from virsh:
=================
[host:~]$ virsh list --all
 Id    Name              State
----------------------------------------------------
 1     calvados          running
 2     LCXR              running
 3     default-sdr--1    running
 -     test--2           shut off
Output:
=========
04.03.06.698923264:INFO: vm_libvirt_state_to_vmm_state: state returned is 1
Note: here state 1 means running (VIR_DOMAIN_RUNNING).
Code snippet:
===============
enum cidl_vmm_vm_state
vm_libvirt_state_to_vmm_state(unsigned char libvirt_state)
{
    enum cidl_vmm_vm_state state;

    INFO("%s: state returned is %u\n", __FUNCTION__, libvirt_state);  <<<<<<<<<<<<<<<<<<
    if (libvirt_state == VIR_DOMAIN_RUNNING) {
        state = cidl_vm_state_running;
    } else if ((libvirt_state == VIR_DOMAIN_PAUSED) ||
               (libvirt_state == VIR_DOMAIN_BLOCKED)) {
        state = cidl_vm_state_paused;
    } else if (libvirt_state == VIR_DOMAIN_SHUTOFF) {
        state = cidl_vm_state_defined;
    } else {
        state = cidl_vm_state_not_defined;
    }
    return state;
}

virDomainInfo res_util;
virDomainPtr dom = virDomainLookupByName(virt, private_names[vm_idx]);

res = virDomainGetInfo(dom, &res_util);
vminfo[vm_idx].vm_state = vm_libvirt_state_to_vmm_state(res_util.state);
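For reference, a self-contained sketch of the same check with every return value verified; the qemu:///system URI and the domain name "test--2" are only placeholders taken from the virsh output above, and virDomainGetState() is shown as an alternative that reports the state directly.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");  /* placeholder URI */
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, "test--2");  /* placeholder name */
    if (!dom) {
        fprintf(stderr, "domain lookup failed\n");
        virConnectClose(conn);
        return 1;
    }

    virDomainInfo info;
    if (virDomainGetInfo(dom, &info) < 0)
        fprintf(stderr, "virDomainGetInfo failed\n");
    else
        printf("virDomainGetInfo state = %d\n", info.state);

    /* Alternative: virDomainGetState() returns the state (and a reason) directly. */
    int state, reason;
    if (virDomainGetState(dom, &state, &reason, 0) == 0)
        printf("virDomainGetState state = %d, reason = %d\n", state, reason);

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}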
Regards
Ritesh Sharma
[libvirt-users] VM with ceph backend snapshot-revert error
by 王根意
Hey guys:
I have a running VM with a Ceph backend:
root@apc20-005:~# virsh list
> Id Name State
> ----------------------------------
> 56 one-240 running
And *snapshot-create-as* works well:
root@apc20-005:~# virsh snapshot-create-as one-240
> Domain snapshot 1380009353 created
But when I exec *snapshot-revert*, an error occurs:
root@apc20-005:~# virsh snapshot-revert one-240 1380009353
> *error: operation failed: Error -22 while loading VM state*
And the VM goes into the paused state.
root@apc20-005:~# virsh list
> Id Name State
> ----------------------------------
> 56 one-240 paused
When I use a qcow2 image backend, everything is OK. Any ideas?
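For reference, the API-level equivalent of the failing revert is roughly the sketch below; the connection URI is a placeholder and error handling is minimal.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");  /* placeholder URI */
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, "one-240");
    virDomainSnapshotPtr snap =
        dom ? virDomainSnapshotLookupByName(dom, "1380009353", 0) : NULL;

    /* Same operation as "virsh snapshot-revert one-240 1380009353". */
    if (snap && virDomainRevertToSnapshot(snap, 0) < 0)
        fprintf(stderr, "revert failed\n");

    if (snap)
        virDomainSnapshotFree(snap);
    if (dom)
        virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}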
[libvirt-users] How to create snapshots for sheepdog with libvirt API
by Pavel Tokarev
Hello!
I am trying to create snapshots of sheepdog disks using the libvirt API or virsh. The disk is defined in the domain as follows:
<disk type='network' device='disk'>
  <driver name='qemu' cache='none'/>
  <source protocol='sheepdog' name='sheepvol1'/>
  <target dev='vdb' bus='virtio'/>
</disk>
I have tried both the "external" and "internal" options. If I try to create an external snapshot with the virsh command:
snapshot-create-as vps vpssnap --disk-only --diskspec vda,snapshot=no --diskspec vdb,snapshot=external
I get an error: source for disk 'vdb' is not a regular file; refusing to generate external snapshot name.
If I try the same command but with "internal" mode:
snapshot-create-as vps vpssnap --disk-only --diskspec vda,snapshot=no --diskspec vdb,snapshot=internal
virsh creates a snapshot, but this snapshot is useless. If I try to revert to it I get the error "qemu-img: Could not open 'sheepvol1': No such file or directory", and if I list all VDIs with "collie vdi list" I do not see any snapshots there. So I guess virsh does not create a proper sheepdog snapshot at all.
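For reference, the equivalent call through the C API would look roughly like the sketch below; the snapshot XML simply mirrors the --diskspec options above, the URI is a placeholder, and error handling is omitted.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");  /* placeholder URI */
    virDomainPtr dom = conn ? virDomainLookupByName(conn, "vps") : NULL;

    /* Disk-only snapshot XML equivalent to the --diskspec options above. */
    const char *xml =
        "<domainsnapshot>"
        "  <name>vpssnap</name>"
        "  <disks>"
        "    <disk name='vda' snapshot='no'/>"
        "    <disk name='vdb' snapshot='internal'/>"
        "  </disks>"
        "</domainsnapshot>";

    if (dom) {
        virDomainSnapshotPtr snap = virDomainSnapshotCreateXML(dom, xml,
                VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY);
        if (!snap)
            fprintf(stderr, "snapshot creation failed\n");
        else
            virDomainSnapshotFree(snap);
        virDomainFree(dom);
    }
    if (conn)
        virConnectClose(conn);
    return 0;
}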
Can someone please tell me how to create snapshots for sheepdog disks?
Thank you.
[libvirt-users] Using a shared LVM volume group for both virtual disks and host filesystems
by Edoardo Comar
Hi
I have a test environment where a single LVM volume group is used
both as a storage pool for virtual machines' disks AND for the LVs that hold
the HOST system's filesystems.
I never use libvirt commands to start/stop/activate the pool, only to
create/delete volumes.
The test environment works fine.
How suitable would a shared VG be for a production environment?
thanks,
Edoardo Comar
PS: This discussion seems to imply that separate VGs give more safety against
data corruption in case of bugs, but it did not point to real cases of
failure:
http://serverfault.com/questions/200728/lvm-volume-group-shared-between-k...
[libvirt-users] problem with sheepdog backend when creating a pool
by Giovanni Bellac
Hello all,
I am getting an error when creating a sheepdog pool via libvirt. libvirt is compiled as follows:
./configure --prefix=/opt/libvirt --without-xen --with-yajl --with-storage-sheepdog=/opt/sheepdog
Sheepdog itself is functional: I can create a VDI manually via "qemu-img" and then use it as a disk in libvirt.
The error looks like this:
internal error missing backend for pool type 9
The relevant piece of the log:
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=2 w=3, f=11 e=1 d=0
2013-09-18 14:51:27.580+0000: 14524: debug : virStoragePoolCreateXML:13588 : conn=0x7f1877937800, xmlDesc= <pool type="sheepdog">
<name>mysheeppool</name>
<source>
<name>mysheeppool</name>
<host name='localhost' port='7000'/>
</source>
</pool>
, flags=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=3 w=4, f=12 e=1 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=4 w=5, f=6 e=1 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=5 w=6, f=14 e=1 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=6 w=7, f=13 e=0 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=7 w=8, f=13 e=1 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=8 w=10, f=16 e=1 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollMakePollFDs:391 : Prepare n=9 w=12, f=17 e=1 d=0
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollCalculateTimeout:332 : Calculate expiry of 2 timers
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollCalculateTimeout:340 : Got a timeout scheduled for 1379515892580
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollCalculateTimeout:353 : Schedule timeout then=1379515892580 now=1379515887580
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollCalculateTimeout:361 : Timeout at 1379515892580 due in 5000 ms
2013-09-18 14:51:27.580+0000: 14522: debug : virEventPollRunOnce:627 : EVENT_POLL_RUN: nhandles=9 timeout=5000
2013-09-18 14:51:27.580+0000: 14524: debug : virFileClose:90 : Closed fd 18
2013-09-18 14:51:27.580+0000: 14524: debug : virObjectRef:293 : OBJECT_REF: obj=0x7f1877925550
2013-09-18 14:51:27.580+0000: 14524: debug : virAccessManagerCheckStoragePool:321 : manager=0x7f1877925550(name=stack) driver=QEMU pool=0x7f1864001340 perm=5
2013-09-18 14:51:27.580+0000: 14524: debug : virAccessManagerCheckStoragePool:321 : manager=0x7f1877929180(name=none) driver=QEMU pool=0x7f1864001340 perm=5
2013-09-18 14:51:27.580+0000: 14524: debug : virAccessManagerCheckStoragePool:321 : manager=0x7f1877925550(name=stack) driver=QEMU pool=0x7f1864001340 perm=2
2013-09-18 14:51:27.580+0000: 14524: debug : virAccessManagerCheckStoragePool:321 : manager=0x7f1877929180(name=none) driver=QEMU pool=0x7f1864001340 perm=2
2013-09-18 14:51:27.580+0000: 14524: debug : virObjectUnref:256 : OBJECT_UNREF: obj=0x7f1877925550
2013-09-18 14:51:27.580+0000: 14524: error : virStorageBackendForType:1114 : internal error missing backend for pool type 9
The pool XML:
<pool type="sheepdog">
<name>mysheeppool</name>
<source>
<name>mysheeppool</name>
<host name='localhost' port='7000'/>
</source>
</pool>
Kind regards
Giovanni
[libvirt-users] Expanding an LVM Storage Pool
by McEvoy, James
I looked around but could not find any info on how to expand a libvirt-managed LVM storage pool. I do not see any virsh command to do it,
but I was able to add more storage with the vgextend command once I destroyed the pool and then restarted it.
I'd like to verify that this is the proper way to grow the storage in a libvirt LVM storage pool. That brings up a second
question: I did this without any VMs running, so I'd like to know what the impact of destroying a running pool is on running VMs.
Does taking the pool offline cause any issues with running VMs, or is the only effect that the pool is unavailable for management by libvirt?
These are the commands that I ran to create my pool and expand it after a kickstart of the server:
pvcreate -ff -y /dev/mapper/mpath?
virsh pool-define-as --name GuestVols --type logical --source-dev /dev/mapper/mpatha --target /dev/GuestVols
virsh pool-build GuestVols
virsh pool-start GuestVols
virsh pool-autostart GuestVols
virsh pool-destroy --pool GuestVols
vgextend GuestVols /dev/mapper/mpath{b..d}
vgdisplay GuestVols
virsh pool-start --pool GuestVols
virsh pool-info --pool GuestVols
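For reference, a rough sketch of re-reading the pool capacity through the API after the vgextend; whether virStoragePoolRefresh() alone is enough to pick up the new extents without the destroy/start cycle is an untested assumption.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");  /* placeholder URI */
    virStoragePoolPtr pool =
        conn ? virStoragePoolLookupByName(conn, "GuestVols") : NULL;

    if (pool) {
        /* Re-scan the pool's volumes and capacity. */
        if (virStoragePoolRefresh(pool, 0) < 0)
            fprintf(stderr, "refresh failed\n");

        virStoragePoolInfo info;
        if (virStoragePoolGetInfo(pool, &info) == 0)
            printf("capacity: %llu bytes, available: %llu bytes\n",
                   info.capacity, info.available);
        virStoragePoolFree(pool);
    }
    if (conn)
        virConnectClose(conn);
    return 0;
}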
Is this the best way to create a managed pool across four LUNs, and to expand it in the future if more storage is needed?
--jim
[libvirt-users] Specify mount option for Storage-Pool
by Jorge Fábregas
Hi,
I've changed the "default" storage pool from directory-based to a
pre-formatted block device, since my /var/lib/libvirt/images is on a
partition of its own.
I would like to use the "discard" mount option, as well as others (noatime, etc.),
on this filesystem. Is there a way to specify mount options in the
storage-pool XML definition?
Thanks,
Jorge
P.S. The reason for the change is that I don't want the partition to be
mounted all the time, just when I start libvirtd.