[libvirt-users] storage volume's size change
by Shantanu Pavgi
I have a question regarding storage volumes. I uploaded a sparse raw disk file to a storage volume, where the virtual (max.) size of the raw disk file is larger than the size of the initial libvirt volume. Here is an example of the commands:
# Create storage volume of capacity 9G
$ virsh-sys vol-create-as --pool pool-01 --name saturn.img --capacity 9G
# Display volume info
$ virsh-sys vol-info saturn.img --pool pool-01
Name: saturn.img
Type: file
Capacity: 9.00 GB
Allocation: 36.11 MB
# Upload raw disk file to the volume
$ virsh-sys vol-upload --vol saturn.img --file /lustre/scratch/shantanup/saturn-disk.img --pool pool-01
# qemu-img info on disk file which was uploaded
$ qemu-img info /lustre/scratch/shantanup/saturn-disk.img
file format: raw
virtual size: 20G (21474836480 bytes)
disk size: 5.7G
# vol-info after uploading the raw disk image
$ virsh-sys vol-info saturn.img --pool pool-01
Name: saturn.img
Type: file
Capacity: 20.00 GB
Allocation: 20.00 GB
The vol-info output indicates that the volume's capacity changed after the raw disk image was uploaded to it. So has the volume really grown to 20G, or will the system run into problems once the actual disk usage (currently 5.7G) hits the original 9G limit? I can't directly poke into the storage pool mount to run ls or qemu-img on the underlying volume files, so I don't have that information. Any help would be appreciated.
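For reference, one way to cross-check the capacity and allocation in exact bytes without shell access to the pool mount seems to be vol-dumpxml (just a sketch; the element values below are placeholders, not output captured from my system):
$ virsh-sys vol-dumpxml saturn.img --pool pool-01
<volume>
  <name>saturn.img</name>
  <capacity>...</capacity>      <!-- virtual size in bytes -->
  <allocation>...</allocation>  <!-- bytes actually allocated in the pool -->
  ...
</volume>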
--
Thanks,
Shantanu
[libvirt-users] live migration
by hcyy
Hello, everybody. I use NFS shared storage to do live migration. After entering
virsh --connect=qemu:///system --quiet migrate --live vm12 qemu+tcp://pcmk-1/system
(vm12 is the VM name, pcmk-1 is the destination host name), it takes almost 10 s for preparation. During those 10 s the VM is still running and can ping other VMs. But if I run mkdir pcmk-6 inside the VM during those 10 s, it says: mkdir: cannot create directory `pcmk-6': Read-only file system. Can anybody tell me whether this situation is due to a wrong configuration on my side, or whether libvirt cannot mkdir during migration? Thanks!
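In case it helps, here is the sort of check that could be run inside the guest to confirm whether the root filesystem really got remounted read-only (just a sketch, nothing specific to my setup):
# inside the guest, during or right after the migration window
dmesg | grep -i 'read-only'    # look for the kernel remounting the root fs read-only
mount | grep ' / '             # see whether / is currently listed with the ro flag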
[libvirt-users] segfault on snapshot-list, 0.9.13
by Andrey Korolyov
Hi,
virsh crashes when doing snapshot-list:
Program received signal SIGSEGV, Segmentation fault.
0x00007f4ba600dc51 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
(gdb) bt
#0  0x00007f4ba600dc51 in ?? () from /lib/x86_64-linux-gnu/libc.so.6
#1  0x00007f4ba60905b7 in xdr_string () from /lib/x86_64-linux-gnu/libc.so.6
#2  0x00007f4ba8e92cfe in xdr_remote_nonnull_string (xdrs=<optimized out>, objp=<optimized out>)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/remote/remote_protocol.c:31
#3  0x00007f4ba8e93111 in xdr_remote_nonnull_domain (xdrs=0x7fff39fc2db0, objp=0x7fff39fc3010)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/remote/remote_protocol.c:58
#4  0x00007f4ba8e99691 in xdr_remote_domain_list_all_snapshots_args (xdrs=0x7fff39fc2db0, objp=0x7fff39fc3010)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/remote/remote_protocol.c:4549
#5  0x00007f4ba8ea60f7 in virNetMessageEncodePayload (msg=0x1a306c0,
    filter=0x7f4ba8e99670 <xdr_remote_domain_list_all_snapshots_args>, data=0x7fff39fc3010)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/rpc/virnetmessage.c:351
#6  0x00007f4ba8e9de3f in virNetClientProgramCall (prog=<optimized out>, client=0x1a30ab0, serial=7, proc=274,
    noutfds=<optimized out>, outfds=0x0, ninfds=0x0, infds=0x0,
    args_filter=0x7f4ba8e99670 <xdr_remote_domain_list_all_snapshots_args>, args=0x7fff39fc3010,
    ret_filter=0x7f4ba8e996e0 <xdr_remote_domain_list_all_snapshots_ret>, ret=0x7fff39fc2ff0)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/rpc/virnetclientprogram.c:327
#7  0x00007f4ba8e7f550 in callWithFD (priv=0x1a30400, flags=<optimized out>, fd=<optimized out>, proc_nr=274,
    args_filter=<optimized out>, args=<optimized out>,
    ret_filter=0x7f4ba8e996e0 <xdr_remote_domain_list_all_snapshots_ret>, ret=0x7fff39fc2ff0 "", conn=<optimized out>)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/remote/remote_driver.c:4799
#8  0x00007f4ba8e7f6a4 in call (priv=<optimized out>, flags=<optimized out>, proc_nr=<optimized out>,
    args_filter=<optimized out>, args=<optimized out>, ret_filter=<optimized out>, ret=0x7fff39fc2ff0 "",
    conn=<optimized out>)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/remote/remote_driver.c:4820
#9  0x00007f4ba8e8edfd in remoteDomainListAllSnapshots (dom=0x1a300f0, snapshots=0x7fff39fc3218, flags=<optimized out>)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/remote/remote_driver.c:4895
#10 0x00007f4ba8e6c36f in virDomainListAllSnapshots (domain=0x1a300f0, snaps=0x7fff39fc3218, flags=0)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./src/libvirt.c:17262
#11 0x000000000042bca6 in vshSnapshotListCollect (tree=false, flags=0, from=0x0, dom=0x1a300f0, ctl=0x7fff39fc35e0)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./tools/virsh.c:17198
#12 cmdSnapshotList (ctl=0x7fff39fc35e0, cmd=<optimized out>)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./tools/virsh.c:17590
#13 0x000000000040ec92 in vshCommandRun (ctl=0x7fff39fc35e0, cmd=0x1a30f20)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./tools/virsh.c:19461
#14 0x000000000040bdb0 in main (argc=<optimized out>, argv=<optimized out>)
    at /var/tmp/build/libvirt/libvirt-0.9.13/./tools/virsh.c:21094
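For anyone who wants to reproduce this, a rough sketch of how such a backtrace can be captured under gdb (the domain name is a placeholder):
$ gdb --args virsh snapshot-list mydomain
(gdb) run
...          # wait for the SIGSEGV
(gdb) bt     # print the backtrace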
[libvirt-users] Adding a second lv as vm drive: how to set the pci part
by Mauricio Tavares
Let's say I have a VM, vm1, which has an LV as its hard drive:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/dev/mapper/kvmtest_vm1_rootvg'/>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
I could add pickles as a second drive as follows:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/dev/mapper/kvmtest_pickles'/>
  <target dev='vdb' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</disk>
but I am trying to understand the PCI address entry. Must the second drive be on a different slot?
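For what it's worth, my understanding is that the <address> element can simply be left out and libvirt will assign a free slot on its own, roughly like this sketch (untested here):
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source file='/dev/mapper/kvmtest_pickles'/>
  <target dev='vdb' bus='virtio'/>
  <!-- no <address> element: libvirt should auto-assign an unused PCI slot -->
</disk>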
[libvirt-users] Issue with getCPUStats and getMemoryStats
by Ananth
Hi,
I am facing an issue with the calls getCPUStats and getMemoryStats. Please find the errors below.
AttributeError: 'module' object has no attribute 'VIR_NODE_CPU_STATS_ALL_CPUS'
>>> print con.getCPUStats(2, None, 0, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2173, in getCPUStats
    ret = libvirtmod.virNodeGetCPUStats(self._o, cpuNum, params, nparams, flags)
AttributeError: 'module' object has no attribute 'virNodeGetCPUStats'
>>> print con.getMemoryStats(2, None, 0, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib64/python2.6/site-packages/libvirt.py", line 2245, in getMemoryStats
    ret = libvirtmod.virNodeGetMemoryStats(self._o, cellNum, params, nparams, flags)
AttributeError: 'module' object has no attribute 'virNodeGetMemoryStats'
Kindly let me know if this is a known issue. I came to know that this issue was fixed in version 0.9.8 of libvirt. I am executing these calls against libvirt-0.9.8 compiled (--with-netcf) on CentOS 6.2.
Please let me know if any other option needs to be specified while compiling libvirt in order to get this working.
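For reference, this is roughly how the installed bindings can be inspected to see what they actually expose (just a sketch, nothing here is specific to my setup):
>>> import libvirt
>>> import libvirtmod
>>> print libvirt.getVersion()                             # version of the libvirt library the bindings use
>>> print hasattr(libvirtmod, 'virNodeGetCPUStats')        # False means the C extension lacks the call
>>> print hasattr(libvirt, 'VIR_NODE_CPU_STATS_ALL_CPUS')  # the constant from the first error above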
--
Regards
Ananth
[libvirt-users] OpenVswitch with KVM virtual machines
by Neha Jatav
Hey,
I have KVM installed on my Fedora 17 box. I added the network interfaces of the virtual machines to the openvswitch bridge as follows:
 ____                   ____
/ VM1\______ br0 ______/ em1\
\____/        |        \____/
              |
             _|_
            /VM2\
           \____/
virbr0 is the virtual network switch
VM1 and VM2 are on the same subnet having tap interfaces vnet0 and vnet1 respectively.
em1 is the default network interface.
$sudo ovs-vsctl add-br br0
$sudo ovs-vsctl add-port br0 em1
$sudo ifconfig br0 <ip address of em1>
$sudo ip route del default dev em1
$sudo ip route add default dev br0
(Using the above commands, I was able to connect to the internet)
$sudo brctl delif virbr0 vnet0
$sudo ovs-vsctl add-port br0 vnet0
$sudo brctl delif virbr0 vnet1
$sudo ovs-vsctl add-port br0 vnet1
$brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.52540094e45e       yes             virbr0-nic
                                                        vnet0
                                                        vnet2
$sudo ovs-vsctl show
    Bridge "br0"
        Port "vnet1"
            Interface "vnet1"
        Port "br0"
            Interface "br0"
                type: internal
        Port "vnet0"
            Interface "vnet0"
        Port "em1"
            Interface "em1"
    ovs_version: "1.4.0"
$ifconfig em1
em1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.66.97.253  netmask 255.255.254.0  broadcast 10.66.97.255
        inet6 fe80::226:55ff:fe3e:971c  prefixlen 64  scopeid 0x20<link>
        ether 00:26:55:3e:97:1c  txqueuelen 1000  (Ethernet)
        RX packets 194955  bytes 81216930 (77.4 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 53004  bytes 9477482 (9.0 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
        device interrupt 19  memory 0xf0200000-f0220000
$ifconfig br0
br0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.66.97.253  netmask 255.0.0.0  broadcast 10.255.255.255
        inet6 fe80::226:55ff:fe3e:971c  prefixlen 64  scopeid 0x20<link>
        ether 00:26:55:3e:97:1c  txqueuelen 0  (Ethernet)
        RX packets 84745  bytes 60302978 (57.5 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 44528  bytes 7732040 (7.3 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
By default, OVS should act as a MAC-layer learning switch. However, while the VMs, which are on the same subnet, can ping each other, they can't ping the host machine (10.66.97.253) and vice versa.
Can you tell me where I am going wrong in my approach?
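As an aside, my understanding is that libvirt can also attach a guest NIC to the OVS bridge directly from the domain XML, instead of re-plugging the tap devices by hand, with something roughly like the sketch below (not verified on my box):
<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>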
Thanks and regards,
Neha.
[libvirt-users] guest VM waiting for a symbol at serial console
by Andrey Korolyov
Hi,
Recently I've run into a weird bug, which can be reproduced in the following way:
- start a qemu virtual machine configured with direct boot and kernel args like 'console=tty0 console=ttyS0,38400n8', connect to the serial console, disconnect, and then shut down the VM
- try to start the VM again; the machine hangs while booting the kernel until I press any key in the 'virsh console' session
- after the first power cycle, this bug remains until the host system is rebooted.
This happens only when the VM goes through a 'power cycle'; rebooting inside the same KVM process works fine.
guest kernel 3.4.4/3.2.20, libvirt 0.9.12 and qemu-1.1.0
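For completeness, a sketch of the kind of direct-boot / serial-console configuration I mean (the kernel and initrd paths are placeholders, not my actual ones):
<os>
  <kernel>/path/to/vmlinuz</kernel>
  <initrd>/path/to/initrd.img</initrd>
  <cmdline>console=tty0 console=ttyS0,38400n8</cmdline>
</os>
...
<serial type='pty'>
  <target port='0'/>
</serial>
<console type='pty'>
  <target type='serial' port='0'/>
</console>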
[libvirt-users] numa query
by kapil jain
Hi,
I have a host machine which contains 4 NUMA nodes, each with 2 GB of memory and 4 CPUs. I am using the qemu-kvm hypervisor.
I am trying to create a guest with a topology similar to the host's (4 NUMA nodes, with only 2 vcpus and 1 GB of memory each). Each vcpu is pinned 1-1 to a physical cpu (i.e. guest {socket0 cpu0} is pinned to host {socket0 cpu0}). But one critical requirement is that one guest socket should not get memory from two host sockets, to avoid cross-node NUMA access completely.
With the current NUMA/topology constructs I am able to create the guest, but the memory is not mapped 1-1 per socket. numatune is not helping. Please suggest a possible way.
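For reference, the kind of configuration I have in mind looks roughly like this sketch (8 vcpus in total, cell memory in KiB; the exact values and pinning are illustrative, not my working config):
<vcpu>8</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <!-- ... one <vcpupin> per guest vcpu, mapped to the matching host cpu ... -->
</cputune>
<cpu>
  <topology sockets='4' cores='2' threads='1'/>
  <numa>
    <cell cpus='0-1' memory='1048576'/>
    <cell cpus='2-3' memory='1048576'/>
    <cell cpus='4-5' memory='1048576'/>
    <cell cpus='6-7' memory='1048576'/>
  </numa>
</cpu>
<numatune>
  <memory mode='strict' nodeset='0-3'/>
</numatune>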
Thanks,
Kapil