[libvirt-users] virDomainSetMemoryStatsPeriod autostart
by Gleb Voronich
Hello,
A collection period must be set before domain memory stats can be retrieved (via the virDomainSetMemoryStatsPeriod function, or virsh dommemstat --period on the command line).
Is there a way to set this period automatically, for example in the domain XML config or by some other means?
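To illustrate, something like this in the domain XML is what I am after (a sketch of what I have in mind; the <stats> sub-element of <memballoon> is what I would expect such a setting to look like, period in seconds):
<memballoon model='virtio'>
  <stats period='10'/>
</memballoon>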
10 years
[libvirt-users] Two identical hosts, different capabilities and topologies
by Davide Ferrari
Hello,
I have two twin servers (Dell PowerEdge R815) dedicated to kvm+libvirt. They have exactly the same CPU (AMD Opteron(TM) Processor 6272) with the same topology (2 sockets, 16 cores per socket, so 32 CPUs seen by the kernel), yet both virsh -r capabilities and virsh nodeinfo report that the two machines are different (they really are not), and this prevents live migration in one direction (the other way round works).
Attached you will find the output of some programs; if you need more info, just ask. Both are running Debian Wheezy with the latest backported kvm/libvirt (libvirt-daemon 1.2.9-6~bpo70+1).
A curious fact: kvm-host-01 detects the wrong topology (1 socket, 32 cores per socket) but displays more capabilities, so migrating from kvm-host-02 to kvm-host-01 works, while 01 to 02 fails.
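One workaround I am considering is pinning an explicit CPU model in the guest XML to force a common baseline on both hosts, along these lines (Opteron_G4 is my guess at the right model for this CPU; virsh cpu-models x86_64 lists the valid names):
<cpu match='exact'>
  <model fallback='forbid'>Opteron_G4</model>
</cpu>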
Any idea?
Thanks in advance
10 years
[libvirt-users] snapshots and qcow2
by Laszlo Hornyak
Hi,
I ran into some strange behavior with libvirt snapshots. I have some VMs running with thin-provisioned qcow2 disk images (libvirt 1.1.3 on Fedora 20).
* When I create a snapshot of the VM, the qcow2 file suddenly grows large, somewhat larger than the maximum size of the disk.
* When I delete the snapshot, the allocated disk space is not freed; the qcow2 image stays the same size. However, if I create a new snapshot, it reuses the previous snapshot's space. None of this seems to be well documented for qemu: the man page, manual, online documentation, wiki, etc. do not mention it.
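The sequence I am running is roughly this (domain, snapshot, and image names here are examples from my setup):
# create an internal snapshot, then check the image size
virsh snapshot-create-as mydomain snap1
qemu-img info /var/lib/libvirt/images/mydomain.qcow2
# delete the snapshot; the file stays the same size
virsh snapshot-delete mydomain snap1
qemu-img info /var/lib/libvirt/images/mydomain.qcow2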
Is there a way to free up the space allocated for snapshots?
Thank you,
Laszlo
--
EOF
10 years
[libvirt-users] all.accept_redirects force disabled with libvirt
by Thomas Lau
Hi All,
I am having trouble enabling all.accept_redirects. Due to our network structure we have to enable it, but every machine with libvirt installed has this setting:
net.ipv4.conf.all.accept_redirects = 0
I even used sysctl.conf to force it on, but still no luck; what I tried is below. Does anyone know why?
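# set it persistently and reload (path and value as on my machines)
echo 'net.ipv4.conf.all.accept_redirects = 1' >> /etc/sysctl.conf
sysctl -p
# verify - it is back to 0 once libvirt is running
sysctl net.ipv4.conf.all.accept_redirects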
--
Thomas Lau
Director of Infrastructure
Tetrion Capital Limited
Direct: +852-3976-8903
Mobile: +852-9323-9670
Address: 20/F, IFC 1, Central district, Hong Kong
10 years
[libvirt-users] libgfapi disk locking in virtlockd not working
by Piotr Rybicki
Hello.
I'm playing with libgfapi network disks over IB and everything works fine except disk locking (and true RDMA transport).
I use virtlockd, and with a FUSE mount locking works as expected. But when I converted the disk definitions to libgfapi, locks are no longer created (although qemu starts and works fine). I tried both direct and indirect locking, with the same result: qemu works fine, no locks.
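For reference, my lockd setup is along these lines (indirect leases; the paths are from my setup and may differ):
# /etc/libvirt/qemu.conf
lock_manager = "lockd"
# /etc/libvirt/qemu-lockd.conf
auto_disk_leases = 1
file_lockspace_dir = "/var/lib/libvirt/lockd/files"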
The relevant XML section:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw' cache='writethrough' iothread='1'/>
  <source protocol='gluster' name='pool_test/gentoo-virtio-lib1-sys.img'>
    <host name='X.X.X.X' port='24007' transport='rdma'/>
  </source>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
I also tried a <lease> device, but that does not work either:
<lease>
  <lockspace>somearea</lockspace>
  <key>somekey</key>
  <target path='/data/nfs/lockd'/>
</lease>
# virsh start gentoo-virtio-lib1
error: Failed to start domain gentoo-virtio-lib1
error: internal error: Lockspace for path /data/nfs/lockd/somearea does not exist
Well, the lockspace does exist...
# ls -al /data/nfs/lockd
total 4K
drwxrwxrwx 3 root root 21 12-08 22:43 .
drwxrwxrwt 7 root root 4096 12-08 14:29 ..
drwxrwxrwx 2 qemu qemu 6 12-08 22:43 somearea
# ls -al /data/nfs/lockd/somearea/
total 0K
drwxrwxrwx 2 qemu qemu 6 12-08 22:43 .
drwxrwxrwx 3 root root 21 12-08 22:43 ..
Are libgfapi-based disks supported by virtlockd? Is this a bug, or am I missing something?
My test system:
# uname -r
3.14.24-gentoo
# libvirtd -V
libvirtd (libvirt) 1.2.10
# virtlockd -V
virtlockd (libvirt) 1.2.10
# qemu-x86_64 -version
qemu-x86_64 version 2.1.2, Copyright (c) 2003-2008 Fabrice Bellard
# glusterfs -V
glusterfs 3.5.20141204.7b160e3 built on Dec 8 2014 19:03:37
(Normally I run 3.5.2, but I was curious whether a 3.5-series nightly build would enable true RDMA transfer rather than IPoIB; it did not.)
Best regards
Piotr Rybicki
10 years
[libvirt-users] command reference
by Laszlo Hornyak
Hi,
I have found the online command reference somewhat incomplete, so I thought I would give it a try and fill in some of the gaps. But when I cloned the repo, I noticed that the last commit happened a little more than three years ago. Do you accept patches for the documentation? What is the general plan for the documentation going forward?
--
EOF
10 years
[libvirt-users] best shared storage solution ?
by Jason Vas Dias
Good day -
Is there a way of safely allowing a guest OS to access a disk storage device that is also mounted at the same time by the hypervisor or another guest?
I have Linux LVM volumes containing ext4 and btrfs filesystems that need to be shared between the hypervisor and the guests, and between guests.
Attempts to use NFS are too slow to be usable: it takes a guest around 4 hours to uncompress and extract a 10MB tar file (using 4 2.9GHz cores). I need access to the shared file system at all times, to use as my /home directory from the hypervisor host, so I guess this rules out iSCSI.
I came across documentation on 'sanlock', but its purpose seems to be antithetical to what I require, as described at http://libvirt.org/locking.html :
"how to ensure a single disk cannot be used by more than one running VM at a time, across any host in a network"
What I want is to ensure SAFE concurrent access by multiple VMs, or by any kernel (either on the hypervisor hardware or in a VM guest), to the same filesystems (ext4, btrfs) on the same machine, without having to use a network file system. Can sanlock be used to accomplish this?
Or is there any other method that allows the same filesystem to be safely accessed by more than one kernel running on the same host? The libvirt documentation is very sketchy on this subject, but I understand that just using '<shareable/>' in the VM configuration XML (as in the snippet below) is not enough, and that a lock manager must be configured. Can sanlock be used for this, or is there another lock manager that can?
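For concreteness, the kind of disk definition I mean is something like this (the device path and target name are examples):
<disk type='block' device='disk'>
  <driver name='qemu' type='raw'/>
  <source dev='/dev/vg0/shared'/>
  <target dev='vdb' bus='virtio'/>
  <shareable/>
</disk>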
Thanks & Regards,
Jason Vas Dias
10 years
[libvirt-users] Libvirt Live Migration
by Dhia Abbassi
I'm trying to implement a virtualization API, and while testing migration with libvirt I ran into some problems.
When I use the following command:
virsh migrate --live --persistent --copy-storage-all vm-clone1 qemu+ssh://server_ip/system
the migration works fine, but on the destination host the migrated VM is paused, I can't unpause it, and I need to reboot the VM to be able to use it on the new host. When I try to unpause it, I get the following error message:
Error unpausing domain: internal error: unable to execute QEMU command 'cont': Resetting the Virtual Machine is required
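In case it helps with diagnosis, the reason the domain is paused can be queried on the destination like this (vm-clone1 is my domain name; output will vary):
virsh domstate vm-clone1 --reason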
How can I solve this problem, or is there another way to do a live migration with libvirt?
Thank you for your consideration.
--
Best regards
Dhia Abbassi
Full Stack Engineer | Rosafi Holding
10 years
[libvirt-users] Problem with /dev/tty in LXC established with virt-install
by Vegard Vesterheim
I have created an LXC container with debootstrap followed by virt-install, like this:
host=mylxc1
debootstrap wheezy /home/lxc/$host
virt-install -c lxc:// -n $host --filesystem /home/lxc/$host,/ --ram 1024
I am confused about the /dev filesystem in this container, specifically the device '/dev/tty'.
From inside the container:
~# ls -la /dev/tty
ls: cannot access /dev/tty: No such file or directory
# mknod -m 666 /dev/tty c 4 0
mknod: `/dev/tty': Operation not permitted
An LXC container created and started with the native LXC commands (lxc-create/lxc-start) has a functioning /dev/tty:
# cat </dev/tty
foo
foo
^C
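For comparison, the console device that virt-install defined in my libvirt domain XML looks roughly like this (reconstructed from memory; virsh -c lxc:// dumpxml mylxc1 shows the real thing):
<console type='pty'>
  <target type='lxc' port='0'/>
</console>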
How can I create a functioning /dev/tty with the LXC driver in libvirt?
- Vegard V -
10 years