Re: [libvirt-users] Using virsh blockcopy -- what's it supposed to accomplish?
by Gary R Hook
On 12/24/14 4:42 AM, Kashyap Chamarthy wrote:
> On Tue, Dec 23, 2014 at 12:38:57PM -0600, Gary R Hook wrote:
>
> [. . .]
>
> In my case, the block device is a QCOW2 disk image file. If I boot
> without using the disk image file which has the operating system, the
> domain will fail to boot, no?
>
> I see you're playing with NBD disks. I'll admit, I haven't played much
> with QEMU NBD, will have to experiment post holidays.
Back from the holidays, and back on this issue. I've learned a lot.
I've learned how to use the blockcopy command to create a local copy in
a simple disk file:
virsh dumpxml my_domain > my_domain.xml
virsh undefine my_domain
virsh blockcopy --domain my_domain vda $PWD/dsk.copy.qcow2 --wait --verbose --finish
virsh define my_domain.xml
and the resulting copy in dsk.copy.qcow2 is, indeed, bootable. It
appears to be a perfect copy, as I expect it to be.
But while I see (per Kashyap's article, etc.) that it can be useful in
certain scenarios, it's not interesting to me. I would like my copy
to be off-system, and was hoping to use the NBD interface to accomplish
that. So I tried this (a variant of the above), working on the same
system because it's easier:
qemu-img create -f qcow2 /tmp/dsk.test.qcow2
qemu-nbd -f qcow2 -p11112 /tmp/dsk.test.qcow2
nbd-client localhost 11112 /dev/nbd2
virsh dumpxml my_domain > my_domain.xml
virsh undefine my_domain
virsh blockcopy --domain my_domain --wait --verbose --finish
virsh define my_domain.xml
nbd-client -d /dev/nbd2
and the qemu-nbd process exits, as I wish. I presume at this point that
the new file has integrity.
I can take the qcow2 file that belongs to the domain and serve it up via
NBD:
qemu-nbd --partition=1 -p11112 /path/to/my/qcow2/file.qcow2
nbd-client localhost 11112 /dev/nbd2
mount /dev/nbd2 -oloop /mnt/foo
and lo! in /mnt/foo I found my root filesystem. Seems perfectly reasonable.
If, however, I try to use my generated-via-NBD file, I get this:
# qemu-nbd --partition=1 -p11112 $PWD/dsk.test.qcow2 &
[1] 7672
# qemu-nbd: Could not find partition 1: Invalid argument
[1]+  Exit 1    qemu-nbd --partition=1 -p11112 $PWD/dsk.test.qcow2
# qemu-nbd --partition=0 -p11112 $PWD/dsk.test.qcow2 &
[1] 7686
# qemu-nbd: Invalid partition 0
^C
[1]+  Exit 1    qemu-nbd --partition=0 -p11112 $PWD/dsk.test.qcow2
# qemu-nbd --partition=2 -p11112 $PWD/dsk.test.qcow2 &
[1] 7699
# qemu-nbd: Could not find partition 2: Invalid argument
^C
[1]+  Exit 1    qemu-nbd --partition=2 -p11112 $PWD/dsk.test.qcow2
# qemu-nbd --partition=3 -p11112 $PWD/dsk.test.qcow2 &
[1] 7830
# qemu-nbd: Could not find partition 3: Invalid argument
[1]+  Exit 1    qemu-nbd --partition=3 -p11112 $PWD/dsk.test.qcow2
I don't know what has been created, but it's not a copy of the original
guest's disk. There's no partition there, it seems.
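One way to see what the NBD-backed copy actually contains, independent of qemu-nbd's --partition logic, is to read the first sector of the raw disk and look for an MBR partition table. This is my own diagnostic sketch, not from the thread; the /dev/nbd2 path (or a raw image produced with `qemu-img convert -O raw`) is an assumption about the setup above.

```python
import struct

MBR_SIZE = 512

def mbr_partitions(sector0):
    """Return (type, start_lba, num_sectors) for each non-empty MBR entry.

    sector0 is the first 512 bytes of the *raw* disk. Returns [] if the
    0x55AA boot signature is absent, i.e. there is no MBR at all.
    """
    if len(sector0) < MBR_SIZE or sector0[510:512] != b"\x55\xaa":
        return []
    parts = []
    for i in range(4):
        entry = sector0[446 + 16 * i : 446 + 16 * (i + 1)]
        ptype = entry[4]                                 # partition type byte
        start_lba, num = struct.unpack("<II", entry[8:16])
        if ptype != 0:
            parts.append((ptype, start_lba, num))
    return parts

# Hypothetical usage (needs root; path depends on your nbd-client setup):
#   with open("/dev/nbd2", "rb") as f:
#       print(mbr_partitions(f.read(MBR_SIZE)))
```

If this prints an empty list for the generated file but a sane entry list for the original guest disk, that would confirm the copy never received the partition table.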
So yes, blockcopy works fine under certain conditions. But the NBD layer
seems to really muck things up.
Or, more likely, I'm doing things wrong. I'm hoping someone can point
out something obvious.
There's a recent thread about "Block Replication for Continuous
Checkpointing" that is heading towards using NBD. I fail to understand
how this is ever going to work, based on my explorations.
--
Gary R Hook
Senior Kernel Engineer
NIMBOXX, Inc
9 years, 10 months
[libvirt-users] Private network for LXC
by Γιώργος Τσίρκας
Hello, I have been reading about virtualization! It's very interesting, so I decided to
find out more about it! I have made my first Linux container, but I need some
help with creating a private network with double NAT!
Thank you!
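(Not from the original mail; a minimal sketch of what a NATed libvirt network definition looks like. The name, bridge name, and addresses are placeholders. Attaching the container's interface to such a network gives guest -> libvirt NAT -> host NAT -> outside, i.e. double NAT when the host itself sits behind NAT.)

```xml
<network>
  <name>lxc-private</name>
  <forward mode='nat'/>
  <bridge name='virbr10' stp='on' delay='0'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.10' end='192.168.100.100'/>
    </dhcp>
  </ip>
</network>
```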
Re: [libvirt-users] Error starting domain: internal error: missing IFLA_VF_INFO in netlink response
by Hong-Hua.Yin@freescale.com
Hi Laine,
Sorry to disturb you.
It seems this issue had been fixed in libvirt-1.2.2/libnl-3.2.22/linux-3.12, but we still get the error on the PowerPC platform.
I would appreciate it if you could give any suggestions. We are not sure whether some netlink implementation is missing in kernel space.
The scenario is a little complicated. We installed internal PF and VF kernel modules and want to use the
<interface type="hostdev" managed="yes"> syntax to start a guest domain with a MAC address.
# insmod fslinic.ko max_vfs=2
Freescale 10 Gigabit PCI Express Network Driver
fslinic 0000:01:00.0: Multiqueue Enabled: Rx Queue count = 1, Tx Queue count = 1
fslinic 0000:01:00.0: Freescale (R) 10 Gigabit Network Connection
fslinic 0000:01:00.1: Multiqueue Enabled: Rx Queue count = 1, Tx Queue count = 1
fslinic 0000:01:00.1: Freescale (R) 10 Gigabit Network Connection
# insmod fslinicvf.ko
Freescale 10 Gigabit PCI Express Network Driver
# lspci -mk
00:00.0 "PCI bridge" "Freescale Semiconductor Inc" "Device 0440" -r20 "" ""
01:00.0 "Power PC" "Freescale Semiconductor Inc" "Device 0440" -r20 "" ""
01:00.1 "Power PC" "Freescale Semiconductor Inc" "Device 0440" -r20 "" ""
01:00.4 "Power PC" "Freescale Semiconductor Inc" "Device 0000" -r20 "" ""
01:00.5 "Power PC" "Freescale Semiconductor Inc" "Device 1957" -r20 "" ""
01:01.0 "Power PC" "Freescale Semiconductor Inc" "Device 0000" -r20 "" ""
01:01.1 "Power PC" "Freescale Semiconductor Inc" "Device 1957" -r20 "" ""
0001:00:00.0 "PCI bridge" "Freescale Semiconductor Inc" "Device 0440" -r20 "" ""
0002:00:00.0 "PCI bridge" "Freescale Semiconductor Inc" "Device 0440" -r20 "" ""
# echo 1957 0000 > /sys/bus/pci/drivers/vfio-pci/new_id
# echo 1957 1957 > /sys/bus/pci/drivers/vfio-pci/new_id
# cat interface.xml
<domain type='kvm'>
<name>interface</name>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static'>1</vcpu>
<os>
<type arch='ppc64' machine='ppce500'>hvm</type>
<kernel>/dev/shm/uImage</kernel>
<initrd>/dev/shm/ramdisk</initrd>
<cmdline>root=/dev/ram rw console=ttyS0,115200</cmdline>
</os>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-ppc64</emulator>
<controller type='usb' index='0'/>
<controller type='pci' index='0' model='pci-root'/>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<interface type="hostdev" managed="yes">
<mac address="00:e0:0c:00:20:01"/>
<source>
<address type="pci" domain="0x0000" bus="0x01" slot="0x00" function="0x04"/>
</source>
</interface>
<memballoon model='virtio'/>
</devices>
</domain>
root@t4240rdb:/var/volatile# virsh start interface
error: Failed to start domain interface
error: internal error: missing IFLA_VF_INFO in netlink response
The debug message is as below:
error : virNetClientProgramDispatchError:175 : internal error: missing IFLA_VF_INFO in netlink response
Do you have any suggestion to investigate/debug this error?
Hope you can reply. Thank you in advance.
Best Regards,
Olivia
> -----Original Message-----
> From: Yin Olivia-R63875
> Sent: Monday, November 24, 2014 7:37 PM
> To: 'libvirt-users(a)redhat.com'; libvir-list(a)redhat.com
> Subject: Error starting domain: internal error: missing IFLA_VF_INFO in netlink
> response
>
> Hi,
>
> We are trying PCI passthrough of host network devices on the PPC platform.
> http://wiki.libvirt.org/page/Networking#Assignment_with_.3Cinterface_type.3D
> .27hostdev.27.3E_.28SRIOV_devices_only.29
>
> But we got a similar issue as below that reported on RedHat before:
> https://bugzilla.redhat.com/show_bug.cgi?id=1040626
>
> With <hostdev>, the VM could be started successfully.
> <hostdev mode='subsystem' type='pci' managed='yes'>
> <driver name='vfio'/>
> <source>
> <address domain='0x0000' bus='0x01' slot='0x00' function='0x4'/>
> </source>
> </hostdev>
>
> Logged into the VM and checked for the VF using lspci:
> root@model : qemu ppce500:~# lspci
> 00:00.0 PCI bridge: Freescale Semiconductor Inc MPC8533E
> 00:02.0 Power PC: Freescale Semiconductor Inc Device 0000 (rev 20)
> 00:03.0 Unclassified device [00ff]: Red Hat, Inc Virtio memory balloon
>
> But it will fail if using <interface type="hostdev">:
> <interface type="hostdev" managed="yes">
> <mac address="00:e0:0c:00:20:01"/>
> <source>
> <address type="pci" domain="0x0000" bus="0x01" slot="0x00"
> function="0x4"/>
> </source>
> </interface>
>
>
> # virsh create vf.xml
> 2014-11-24 11:08:31.390+0000: 3556: info : libvirt version: 1.2.2
> 2014-11-24 11:08:31.390+0000: 3556: debug : virLogParseOutputs:1378 :
> outputs=1:file:virsh.log
> error: Failed to create domain from vf.xml
> error: internal error: missing IFLA_VF_INFO in netlink response
>
> To be exact, we're using max_vfs=2 and libnl-3.2.22:
> # ls -l /usr/lib64/libnl-3.so.200.17.0
> -rwxr-xr-x 1 root root 154440 Sep 17 07:15 /usr/lib64/libnl-3.so.200.17.0
>
>
> Why does this issue happen on PPC? Is there any architecture-specific support
> that needs to be added?
>
> Thanks,
> Olivia
[libvirt-users] Libvirt guest can't boot up when using ceph as storage backend with SELinux enabled
by Shanzhi Yu
Hi there,
I met a problem where a guest fails to boot up when SELinux is enabled and the guest storage
is based on ceph. However, I can boot the guest with qemu directly, and I can also boot it up
with SELinux disabled. I'm not sure whether this is a libvirt bug or a wrong use case.
1. SELinux is enabled
# getenforce && iptables -L
Enforcing
Chain INPUT (policy ACCEPT)
target prot opt source destination
Chain FORWARD (policy ACCEPT)
target prot opt source destination
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
2. Define a guest with source file based on ceph
# virsh define /dev/stdin <<EOF
<domain type='kvm' id='13'>
<name>ceph</name>
<memory unit='KiB'>4048896</memory>
<currentMemory unit='KiB'>4048576</currentMemory>
<vcpu placement='static'>4</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-rhel7.1.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
</features>
<devices>
<emulator>/usr/libexec/qemu-kvm</emulator>
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='libvirt'>
<secret type='ceph' usage='client.libvirt secret'/>
</auth>
<source protocol='rbd' name='libvirt-pool/rhel7-rbd.img'>
<host name='10.66.xxx.xx' port='6789'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
</devices>
</domain>
EOF
Domain ceph defined from /dev/stdin
3. Try to start the guest by virsh
# virsh start ceph
error: Failed to start domain ceph
error: internal error: process exited while connecting to monitor:
What I can see from libvirtd log is below:
2015-01-08 08:07:32.376+0000: 22552: warning : qemuDomainObjTaint:1890 : Domain id=19 name='ceph' uuid=e4412366-1f16-4c54-b121-dfb565672427 is tainted: high-privileges
Detaching after fork from child process 23015.
2015-01-08 08:07:32.684+0000: 22552: error : qemuMonitorOpenUnix:309 : failed to connect to monitor socket: No such process
2015-01-08 08:07:32.684+0000: 22552: error : qemuProcessWaitForMonitor:2207 : internal error: process exited while connecting to monitor:
2015-01-08 08:07:32.684+0000: 22552: error : virDBusCall:1542 : error from service: TerminateMachine: No such file or directory
4. Start it by qemu cmd
# cat /usr/local/var/log/libvirt/qemu/ceph.log
2015-01-08 08:08:12.179+0000: starting up
LC_ALL=C PATH=/root/perl5/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin HOME=/root USER=root LOGNAME=root QEMU_AUDIO_DRV=none /usr/libexec/qemu-kvm -name ceph -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -m 3954 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid e4412366-1f16-4c54-b121-dfb565672427 -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/usr/local/var/lib/libvirt/qemu/ceph.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -no-acpi -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:libvirt-pool/rhel7-rbd.img:id=libvirt:key=AQAQLq5UwO8PMRAA5qftTrdfzXnFZdnunN1WeQ==:auth_supported=cephx\;none:mon_host=10.66.106.92\:6789,if=none,id=drive-virtio-disk0,format=raw,cache=none -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x2 -msg timestamp=on
Domain id=20 is tainted: high-privileges
2015-01-08 08:08:12.473+0000: shutting down
# /usr/libexec/qemu-kvm -name ceph -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -m 3954 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 319cf458-8740-4d83-9317-e2d52025aa9e -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/usr/local/var/lib/libvirt/qemu/ceph.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -no-acpi -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:libvirt-pool/rhel7-rbd.img:id=libvirt:key=AQAQLq5UwO8PMRAA5qftTrdfzXnFZdnunN1WeQ==:auth_supported=cephx\;none:mon_host=10.66.106.92,if=none,id=drive-virtio-disk0,format=raw,cache=none
# ps aux|grep qemu
root 23075 2.5 0.2 5114492 16352 pts/5 Sl+ 16:09 0:00 /usr/libexec/qemu-kvm -name ceph -S -machine pc-i440fx-rhel7.1.0,accel=kvm,usb=off -m 3954 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid 319cf458-8740-4d83-9317-e2d52025aa9e -nographic -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/usr/local/var/lib/libvirt/qemu/ceph.monitor,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -no-shutdown -no-acpi -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:libvirt-pool/rhel7-rbd.img:id=libvirt:key=AQAQLq5UwO8PMRAA5qftTrdfzXnFZdnunN1WeQ==:auth_supported=cephx;none:mon_host=10.66.106.92,if=none,id=drive-virtio-disk0,format=raw,cache=none
5. With SELinux disabled, starting the guest by virsh succeeds
# setenforce 0 && virsh start ceph
Domain ceph started
# virsh list
Id Name State
----------------------------------------------------
21 ceph running
# virsh dumpxml ceph|grep disk -A 10
..
<disk type='network' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<auth username='libvirt'>
<secret type='ceph' usage='client.libvirt secret'/>
</auth>
<source protocol='rbd' name='libvirt-pool/rhel7-rbd.img'>
<host name='10.66.xxx.xx' port='6789'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<alias name='virtio-disk0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
..
I use the latest libvirt built from git, and:
# rpm -q librbd1 librados2 qemu-kvm-rhev
librbd1-0.87-0.el7.x86_64
librados2-0.87-0.el7.x86_64
qemu-kvm-rhev-2.1.2-17.el7.x86_64
--
Regards
shyu
[libvirt-users] trying to get "pages" output in virsh capabilities
by Chris Friesen
When running "virsh capabilities", one of my systems shows a couple of entries for
"pages" under host/cpu. (One for 4KB, one for 2MB.) On my other system the
"pages" entries are missing.
Is there something that I need to configure, or is this a libvirt version issue,
or what?
The system with the "pages" entries is running 1.2.9, while the system without
them is running 1.2.2.
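If memory serves, per-size "pages" reporting in capabilities was added to libvirt around the 1.2.6/1.2.7 timeframe (worth verifying in the release notes), which would explain 1.2.9 showing it and 1.2.2 not. The data itself comes from the kernel's /sys/kernel/mm/hugepages directories; a sketch of the same lookup, with the directory-name parsing factored out (my own illustration, not libvirt's code):

```python
import os
import re

def hugepage_size_kib(dirname):
    """Parse a /sys/kernel/mm/hugepages entry name such as
    'hugepages-2048kB' into its size in KiB, or None if the
    name doesn't match the kernel's naming convention."""
    m = re.fullmatch(r"hugepages-(\d+)kB", dirname)
    return int(m.group(1)) if m else None

def supported_page_sizes(root="/sys/kernel/mm/hugepages"):
    """List the huge page sizes (KiB) the running kernel exposes."""
    try:
        entries = os.listdir(root)
    except FileNotFoundError:
        return []          # kernel built without hugepage support
    return sorted(s for s in map(hugepage_size_kib, entries) if s)
```

If that directory is empty or absent on the second system, the difference may be kernel configuration rather than libvirt version.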
Thanks,
Chris
[libvirt-users] libvirt bridges rhel7
by Derek Yarnell
Hi,
In RHEL5/6 we have been able to create bridges on our hypervisor for a
number of VLANs (e.g. em1 -> em1.483 -> br483). We would then be able to see
this br483 in the virt-manager drop-down network choice at the end of
creating a new host.
On RHEL7 we are not using NetworkManager, per the documentation[0] for
bridges (and because it is not worth it). A RHEL7 virt-manager
(0.10.0) connected to a libvirtd on RHEL6 (0.10.2) shows the
bridge, but connected to a libvirtd on RHEL7 (1.1.1) it does not.
So it isn't a virt-manager problem; it seems to be a difference in the
backend libvirtd. You are able to manually specify the bridge device,
and it does work, though.
The output of `iface-dumpxml br483` in virsh on RHEL6 and RHEL7 is
identical. I guess the question is: how does a libvirt client like
virt-manager determine that an iface is a usable bridge?
[0] -
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/...
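For what it's worth, clients like virt-manager enumerate host interfaces through libvirt's interface APIs (backed by netcf on RHEL) and look at the type attribute of each interface's XML. A toy check along those lines; the sample XML in the test is made up to resemble iface-dumpxml output, and this is only my reading of how clients decide, not confirmed from virt-manager's source:

```python
import xml.etree.ElementTree as ET

def is_bridge_iface(iface_xml):
    """True if an iface-dumpxml document describes a bridge interface."""
    root = ET.fromstring(iface_xml)
    return root.tag == "interface" and root.get("type") == "bridge"
```

If the RHEL7 libvirtd's interface driver isn't enumerating br483 at all (e.g. netcf not handling the VLAN-under-bridge layout), virt-manager would have nothing to show regardless of the dumpxml output.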
Thanks,
derek
--
Derek T. Yarnell
University of Maryland
Institute for Advanced Computer Studies
[libvirt-users] ubuntu virsh snapshot-create-as gives Error -22 while writing VM
by Jon Schipp
Hello all, I'm trying to create an online internal snapshot to work with
Cuckoo Sandbox.
I keep receiving the -22 error below on my Ubuntu system and I'm out of
ideas; I've been at it for a while, so any help is appreciated.
root@cuckoo-sec:~# virsh snapshot-create-as cuckoo cuckoo-snap1 "Cuckoo
Snapshot"
error: operation failed: Error -22 while writing VM
$ uname -a
Linux cuckoo-sec 3.13.0-24-generic #46-Ubuntu SMP Thu Apr 10 19:11:08 UTC
2014 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=14.04
DISTRIB_CODENAME=trusty
DISTRIB_DESCRIPTION="Ubuntu 14.04 LTS"
$ kvm --version
QEMU emulator version 2.0.0 (Debian 2.0.0+dfsg-2ubuntu1.9), Copyright (c)
2003-2008 Fabrice Bellard
$ libvirtd --version
libvirtd (libvirt) 1.2.2
$ qemu-img info /opt/kvm/Windows7-64.qcow2
image: /opt/kvm/Windows7-64.qcow2
file format: qcow2
virtual size: 25G (26843545600 bytes)
disk size: 12G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
$ virsh list
Id Name State
----------------------------------------------------
17 cuckoo running
$ apt-get install apparmor-utils
$ aa-status
$ aa-complain /usr/sbin/libvirtd
$ aa-complain /etc/apparmor.d/libvirt/libvirt-bdfcd0d8-b032-6870-79bf-c77e0a3c8590
$ strace virsh snapshot-create-as cuckoo cuckoo-snap1 "Cuckoo Snapshot"
...
gettid() = 10168
futex(0x7f4726ab8240, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f4726ab8240, FUTEX_WAIT_PRIVATE, 2, NULL) = -1 EAGAIN (Resource
temporarily unavailable)
futex(0x7f4726ab8240, FUTEX_WAKE_PRIVATE, 1) = 0
gettid() = 10168
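One small data point: the -22 is almost certainly a negated errno, i.e. -EINVAL ("Invalid argument"), which is what QEMU's snapshot path reports when something about the request is unsupported. A quick check of the mapping (my own aside, not from the thread):

```python
import errno
import os

# QEMU/kernel convention: negative return codes are negated errno values.
code = -22
name = errno.errorcode[-code]   # symbolic name for errno 22
msg = os.strerror(-code)        # human-readable message
print(name, "->", msg)          # prints: EINVAL -> Invalid argument
```

That narrows it to "QEMU rejected the internal-snapshot request", not an I/O failure on the qcow2 file itself.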
Thanks
--
Jon Schipp,
jonschipp.com, sickbits.net, opennsm.ncsa.illinois.edu
[libvirt-users] use of qemu-kvm --chardev pipe, id=X, path=... argument ?
by Jason Vas Dias
Please can anyone enlighten me as to why linux qemu-kvm always
creates the console on my terminal, when I am trying to direct
all of its input and output to a pipe ?
I have created :
$ mkfifo /tmp/el6x32{.in,.out,.monitor}
and use the command:
$ /usr/libexec/qemu-kvm -M rhel6.4.0 -cpu n270 -smp 1 \
-hda /home/rpmbuild/OEL6/img/OEL6_32.img \
-kernel /home/rpmbuild/OEL6/boot/vmlinuz-2.6.39-400.215.14.el6uek.i686 \
-initrd /home/rpmbuild/OEL6/boot/initramfs-2.6.39-400.215.14.el6uek.i686.img \
-append 'root=/dev/sda rw selinux=0 enforcing=0 console=0' \
-m 2048 -k en-gb -nographic -vga none -vnc none -enable-kvm \
-chardev pipe,id=0,path=/tmp/el6x32 -monitor pipe:/tmp/el6x32.monitor
But this ends up with the kernel's console on qemu-kvm's stdio.
I actually want the console to be redirected to take input from
/tmp/el6x32.in and direct output to /tmp/el6x32.out.
I thought that was what the above '-chardev pipe,id=0,path=/tmp/el6x32'
should do if the kernel boot param 'console=0' is also supplied?
Why isn't this happening for me? Has anyone got a guest to read console input
from a pipe and direct console output to a pipe? If so, how?
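For reference on the FIFO side of this: I believe qemu's pipe chardev, given path=BASE, opens BASE.in (guest input) and BASE.out (guest output) when both exist, falling back to a single duplex pipe at BASE; and the chardev also has to be wired to a serial device (something like '-serial chardev:0' plus 'console=ttyS0' rather than 'console=0') before the guest console will use it; treat those flag details as my recollection to verify against the qemu docs. A sketch of creating the FIFO pair qemu expects:

```python
import os

def make_qemu_pipe_pair(base):
    """Create the two FIFOs qemu's pipe chardev looks for.

    With -chardev pipe,path=BASE, qemu uses BASE.in for data written
    to the guest and BASE.out for data coming from the guest, provided
    both FIFOs already exist when qemu starts.
    """
    names = (base + ".in", base + ".out")
    for n in names:
        if not os.path.exists(n):
            os.mkfifo(n)    # named pipe, as mkfifo(1) would create
    return names
```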
Thanks in advance for any replies, Jason.
[libvirt-users] Please help me! Thank you!
by 75124955
I am using the libvirt virsh command to create a virtual host on VMware ESX, but have been unable to create it successfully.
My creation process is as follows:
The XML file content used to create the virtual host is as follows:
<domain type='vmware'>
<name>test1</name>
<memory>524288</memory>
<currentMemory>524288</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch='x86_64'>hvm</type>
</os>
<devices>
<disk type='file' device='disk'>
<source file='[datastore1 (2)] test1/test1.vmdk'/>
<target dev='sda' bus='scsi'/>
<address type='drive' controller='0' bus='0' unit='0'/>
</disk>
<controller type='scsi' index='0' model='vmpvscsi'/>
<controller type='ide' index='0'/>
<interface type='bridge'>
<source bridge='VM Network'/>
</interface>
</devices>
</domain>
It has been unable to create the VMDK file; can you tell me what the reason might be? Thank you very much!
I will be waiting for your reply; this problem is very important to me.
[libvirt-users] Console access for a user.
by Le Bris Gilles
Hi,
I would like to allow a user (non-root) to access the console of his VM
(he's got root access on it).
Using sudo doesn't seem to work:
/bin/virsh console vm
error: failed to get domain 'vm'
error: Domain not found: no domain with matching name 'vm'
If I assign suid to virsh, I get: 'error: Failed to initialize libvirt'
I don't see any information on the Internet about this.
What is the procedure to follow to get this result (QEMU conf, libvirt
conf, etc)?
Thank you in advance.