[libvirt-users] Performance monitoring: Perf Event support
by Hui Jing
Hi everyone,
I am using libvirt 2.5.0. I'd like to enable "perf events" to
monitor my VM's performance, such as L3 cache usage, but it fails with "error:
argument unsupported: unable to enable host cpu perf event for cmt".
1. Could you please give some hints on how to check whether my host CPU supports
cmt/mbm? Is it displayed in /proc/cpuinfo?
2. Does the kernel version matter as well for supporting it?
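From what I could find so far (so please treat this as an assumption on my part), CMT/MBM shows up on Intel hosts as the cqm* CPU flags, which can be checked with something like:
grep -o 'cqm[a-z_]*' /proc/cpuinfo | sort -u
On a capable CPU that should list flags such as cqm, cqm_llc, cqm_occup_llc, cqm_mbm_total and cqm_mbm_local, and /sys/devices/intel_cqm/ should exist once the kernel driver is active. I believe the kernel version matters too: the intel_cqm perf driver appeared around 4.1 and the MBM events around 4.6.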
Thanks,
Hui
[libvirt-users] External snapshot issue
by Leroy Tennison
I have used
virsh snapshot-create-as <VM name> <snapshot name> "<snapshot description>" \
    --diskspec "vda,snapshot=external,file=/path/to/external-snapshot" \
    --disk-only --atomic
to create an external snapshot of a running VM. I followed it with
virsh blockpull <VM name> --path /path/to/external-snapshot
and monitored it until done. I confirmed it with
qemu-img info /path/to/external-snapshot
which shows no backing store.
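For reference, one way to watch the pull while it runs, assuming virsh blockjob accepts the same file path used above:
virsh blockjob <VM name> /path/to/external-snapshot --info
It should report the pull progress while the job is active, and nothing once it has completed.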
However, when I do
grep {/etc,/run}/libvirt/qemu/<VM name>.xml
the XML for the VM still shows a backing store in the /run/...
definition (but not in the /etc/... definition; my understanding is that
the /etc/... definition won't be updated until the VM is shut down):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/external-snapshot.qcow2'/>
  <backingStore type='file' index='1'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/original-image-file.qcow'/>
    <backingStore/>
  </backingStore>
  <target dev='hda' bus='ide'/>
  <alias name='ide0-0-0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
This is using libvirtd version 1.3.1 and qemu-img version 2.5.0.
How do I resolve this situation? I don't want to have to rely on a
backing store permanently. Thanks for any help or pointers.
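In case it helps, other ways to inspect the live chain would be, for example:
virsh domblklist <VM name> --details
virsh dumpxml <VM name> | grep -A 4 backingStore
My understanding is that virsh dumpxml on a running domain shows the live state, i.e. the same thing as the /run/... file.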
[libvirt-users] ZFS: creating a pool on a created ZFS dataset does not work, only when using the whole ZFS pool.
by Thies C. Arntzen
Hi,
I’m new here so apologies if this has been answered before.
I have a box that uses ZFS for everything (Ubuntu 17.04) and I want to
create a libvirt pool on it. My ZFS pool is named "big".
So i do:
> zfs create big/zpool
> virsh pool-define-as --name zpool --source-name big/zpool --type zfs
> virsh pool-start zpool
> virsh pool-autostart zpool
> virsh pool-list
> virsh vol-create-as --pool zpool --name test1 --capacity 1G
> virsh vol-list zpool
Everything seems to work (no error message, vol-list shows the created
volume, and I can see the volume via zfs list -t all). BUT I cannot use that
volume via virt-manager, and after a short while it is no longer listed by
virsh vol-list zpool. The very same thing works as expected if I create a
new ZFS pool which I hand to libvirt: instead of creating a pool from
"big/zpool" I create a pool named "somepool" on a free device and, voila,
everything works.
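In case it helps narrow this down, re-checking what libvirt currently sees for the pool could be done with something like this (assuming pool-refresh simply re-scans the source dataset):
> virsh pool-dumpxml zpool
> virsh pool-refresh zpool
> virsh vol-list zpool
> zfs list -t all -r big/zpool
and then comparing the two listings.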
I hope I have made myself clear?
Best regards,
thies
[libvirt-users] External snapshot issue addendum
by Leroy Tennison
I made a mistake in my original post: --diskspec in snapshot-create-as
should have shown "hda ...".
I also just noticed that 'virsh edit <VM name>' has the correct
information (no backing store, only the external snapshot file shown). I
was under the impression that /run/... showed the current state; is that incorrect?
[libvirt-users] Proper way to remove a qemu-nbd-mounted volume using lvm
by Leroy Tennison
I either haven't searched for the right thing or the web doesn't contain
the answer.
I have used the following to mount an image and now I need to know the
proper way to reverse the process.
qemu-nbd -c /dev/nbd0 <qcow2 image using lvm>
vgscan --cache (had to use --cache to get the qemu-nbd volume to
be recognized, lvmetad is running)
vgchange -ay
lvdisplay
mount <selected qemu-nbd related 'LV Path' found from lvdisplay
above> <mount point>
I have done the following:
umount <mount point>
lvchange -an <all qemu-nbd related 'LV Path's found from lvdisplay
above>
vgchange -an <qemu-nbd related volume>
Now what? How do I get the volume out of the list so I can use
'qemu-nbd -d /dev/nbd0' to disassociate the image from /dev/nbd0?
vgreduce seems to be for volume groups which have multiple underlying
devices. I started to use vgremove but, when it started prompting for
confirmation about removing logical volumes, I wasn't sure exactly what
it was going to do and responded 'no'.
If there is a web reference explaining this specific situation just
point me to it - I'm not opposed to reading. Thanks for the help.
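For reference, the teardown order I would expect to be the counterpart of the mount sequence above (just my guess, not something I have confirmed) is:
umount <mount point>
vgchange -an <qemu-nbd related volume group>
qemu-nbd -d /dev/nbd0
pvscan --cache    (to let lvmetad forget the now-detached device)
but I don't know whether that last step is what actually clears the volume from the listing.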
[libvirt-users] libvirtd segfault when using oVirt 4.1 with graphic console - CentOS 7.3
by Rafał Wojciechowski
Hello,
I am getting the following error:
libvirtd[27218]: segfault at 0 ip 00007f4940725721 sp 00007f4930711740
error 4 in libvirt.so.0.2000.0[7f4940678000+353000]
when I try to start a VM with a graphical SPICE/VNC console; in
headless mode (without a graphical console) it runs fine.
I noticed this after updating from oVirt 4.0 to oVirt 4.1; however, libvirtd
and related packages were also upgraded, from 1.2.7 to 2.0.0:
libvirt-daemon-kvm-2.0.0-10.el7_3.5.x86_64
libvirt-daemon-driver-qemu-2.0.0-10.el7_3.5.x86_64
I am running up-to-date CentOS 7.3 with kernel 3.10.0-514.16.1.el7.x86_64.
I have the same issue with SELinux and without SELinux (checked after a
reboot in permissive mode).
I tried to get information about this issue from the oVirt team, but after
a mail conversation both I and the oVirt team think that it might be an
issue in libvirtd.
In the link below I am putting the XMLs generated by vdsm which are passed
to libvirtd to run the VMs; the first one is from the vdsm log and the
second one is extracted from a core dump after the libvirtd segfault:
https://paste.fedoraproject.org/paste/eqpe8Byu2l-3SRdXc6LTLl5M1UNdIGYhyRL...
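If a symbolic backtrace would help, I can try to produce one along these lines, assuming the debuginfo packages are available for this build:
debuginfo-install libvirt
gdb -batch -ex 'bt' /usr/sbin/libvirtd /path/to/core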
Regards,
Rafal Wojciechowski
[libvirt-users] libvirt remote connection
by Anastasiya Ruzhanskaya
Hello,
I have some questions about libvirt remote connection.
Am I right that internally libvirt uses only TCP (SSH and TLS are just
encryption layered on top of it), plus FTP when working with the image itself?
I have also found that it uses RPC. As far as I know, RPC runs on top of TCP,
yet I cannot capture these packets with Wireshark when I connect remotely
to the host with the VM. Is it somehow possible to find out what data and
what messages, in what format, are sent from my server to the remote libvirt
(the daemon, I suppose)?
[libvirt-users] understanding --idmap for containers (v2.5.0)
by mailing lists
Hello,
I'm testing containers on a host machine without SELinux, so I'm trying to use the idmap feature, but I must be missing something, because all I get is a read-only container for the root user.
# virsh version --daemon
Compiled against library: libvirt 2.5.0
Using library: libvirt 2.5.0
Using API: QEMU 2.5.0
Running hypervisor: QEMU 2.8.1
Running against daemon: 2.5.0
# virsh --connect lxc:/// dumpxml lab-gentoo-01
<domain type='lxc'>
  <name>lab-gentoo-01</name>
  <uuid>a9f73091-b716-4b61-95ad-fa1d0c061bef</uuid>
  <memory unit='KiB'>524288</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/bin/sh</init>
  </os>
  <idmap>
    <uid start='0' target='900' count='10'/>
    <gid start='0' target='900' count='10'/>
  </idmap>
  <features>
    <privnet/>
  </features>
  <cpu mode='host-model'>
    <model fallback='allow'/>
  </cpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/media/containers/lab-gentoo-01/'/>
      <target dir='/'/>
    </filesystem>
    <interface type='bridge'>
      <mac address='00:16:3e:c8:13:14'/>
      <source bridge='bridge-01'/>
    </interface>
    <console type='pty'>
      <target type='lxc' port='0'/>
    </console>
  </devices>
</domain>
# ls -l /media/containers/lab-gentoo-01/
total 36
drwxr-xr-x 2 root root 4096 Apr 13 07:33 bin
drwxr-xr-x 2 root root 18 Apr 13 03:28 boot
drwxr-xr-x 7 root root 4096 Apr 18 12:45 dev
drwxr-xr-x 31 root root 4096 Apr 18 12:49 etc
drwxr-xr-x 2 root root 18 Apr 13 03:28 home
lrwxrwxrwx 1 root root 5 Apr 13 06:13 lib -> lib64
drwxr-xr-x 2 root root 4096 Apr 13 06:14 lib32
drwxr-xr-x 9 root root 4096 Apr 13 07:33 lib64
drwxr-xr-x 2 root root 18 Apr 13 03:28 media
drwxr-xr-x 2 root root 18 Apr 13 03:28 mnt
drwxr-xr-x 2 root root 18 Apr 13 03:28 opt
drwxr-xr-x 2 root root 6 Apr 13 03:18 proc
drwx------ 2 root root 18 Apr 13 03:28 root
drwxr-xr-x 2 root root 31 Apr 13 07:32 run
drwxr-xr-x 2 root root 4096 Apr 13 07:36 sbin
drwxr-xr-x 2 root root 18 Apr 13 03:28 sys
drwxrwxrwt 2 root root 18 Apr 13 07:36 tmp
drwxr-xr-x 13 root root 4096 Apr 18 12:49 usr
drwxr-xr-x 9 root root 102 Apr 13 03:28 var
# virsh --connect lxc:/// start --console lab-gentoo-01
Domain lab-gentoo-01 started
Connected to domain lab-gentoo-01
Escape character is ^]
sh-4.3# /usr/bin/id
uid=0(root) gid=0(root) groups=0(root)
sh-4.3# pwd
/
sh-4.3# touch asdf
touch: cannot touch 'asdf': Permission denied
sh-4.3#
Indeed the container is using the idmap feature, because the effective uid/gid mapping (900/900) is not allowing writes to the filesystem, but it doesn't seem very useful.
Is it possible to have read/write containers while using idmap?
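The only workaround I can think of, and this is just a guess on my part, is to chown the container root filesystem to the mapped range, so that container root (which is host uid/gid 900 here) actually owns the files:
# chown -R 900:900 /media/containers/lab-gentoo-01/
but I don't know whether that is the intended way to use idmap.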