[libvirt-users] lvm vs qcow (on NVMe)
by lejeczek
hi guys,
has anybody observed, or even measured, a performance difference between the
two with qemu guests (Windows OS)?
Any conclusions, recommendations?
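(In case it helps anyone measuring this themselves: a common approach is to attach one LVM-backed and one qcow2-backed disk to the same guest and run fio against each in turn. The device names and job parameters below are only illustrative.)

```shell
# Inside the guest: compare 4k random-read performance on two test disks.
# /dev/sdb = LVM-backed virtio disk, /dev/sdc = qcow2-backed (example names).
fio --name=lvm-test  --filename=/dev/sdb --rw=randread --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
fio --name=qcow-test --filename=/dev/sdc --rw=randread --bs=4k \
    --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based
```

(On a Windows guest you would use --ioengine=windowsaio and \\.\PhysicalDrive paths instead.)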
many thanks, L.
6 years
[libvirt-users] Sasl authentication
by Anastasiya Ruzhanskaya
Hello everyone,
I am trying to set up SASL authentication in virt-manager. I read in an
existing bug report that it is better to use TLS for encryption and SASL for
authentication, rather than SASL for both.
I configured TLS and it worked fine. Then I created a user "u" on the server
and modified libvirtd.conf: I set auth_tls = "sasl" there and
mech_list: scram-sha-1 in the sasl2 config for libvirt.
The output of sasldblistusers2:
u@ubuntu2:~$ sudo sasldblistusers2 -f /etc/libvirt/passwd.db
u@ubuntu: userPassword
Then I connect to the server using TLS, entering as the user u, and get the
following message:
Unable to connect to libvirt qemu+tls://u@192.168.246.135/system.
authentication failed: authentication failed
What are the correct steps to set everything up? I just did the following:
set up TLS, set up the user, modify the config file, relaunch and connect.
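For reference, the pieces that usually have to line up are sketched below (stock paths; values illustrative):

```shell
# /etc/libvirt/libvirtd.conf -- TLS for encryption, SASL only for auth:
#   auth_tls = "sasl"
# /etc/sasl2/libvirt.conf -- a mechanism that works against sasldb:
#   mech_list: scram-sha-1
#   sasldb_path: /etc/libvirt/passwd.db
# create the user in libvirt's sasldb, then restart the daemon:
saslpasswd2 -a libvirt u
systemctl restart libvirtd
```

One thing worth double-checking: sasldblistusers2 shows the user as u@ubuntu, i.e. with a realm, so authenticating as plain "u" can fail if the realm the client sends does not match.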
6 years
[libvirt-users] user information sent when using sasl or tls
by Anastasiya Ruzhanskaya
Hello everyone!
I am trying, without any success, to decipher the traffic from client to
server in virt-manager with Wireshark, but I don't know the session key, so
it seems there is no way to do this.
This is why I want to ask: is any info related to the certificate sent
over the connection? And if I use the Kerberos protocol for authentication,
will the user id be sent in every message from client to server, or only
during authentication?
6 years
[libvirt-users] Long time to restart guest
by Jesson
Hello, my guest is Windows 10 Professional. After it has been in standby (display lights up) for more than 2 days, a reboot initiated from inside Windows (not via virsh) takes a long time, maybe about 7 minutes. All kvm logs are normal and the guest state is "running"; only the Linux system log gives me some messages. The guest only started after all of its devices were reset. What should I do to make it start quickly?
Here is my environment:
Compiled against library: libvirt 4.4.0
Using library: libvirt 4.4.0
Using API: QEMU 4.4.0
Running hypervisor: QEMU 2.12.0
Linux: LFS 7.10
Kernel: 4.17.2
6 years
[libvirt-users] Remote and local connections at the same time (Centos 7)
by Anastasiya Ruzhanskaya
Hello!
I was investigating libvirt a year ago regarding its remote control. I
figured out the settings necessary for configuring remote control on Ubuntu
(setting flags in the libvirt config files). Now I have several questions:
1) Are these flags the same for CentOS?
They did not work for me.
My flags for Ubuntu are (for TCP, for example):
/etc/libvirt/libvirtd.conf : listen_tls = 0, listen_tcp = 1, listen_addr =
"0.0.0.0", auth_tcp = "none"
/etc/init/libvirt-bin.conf (add -l to listen): env libvirtd_opts="-d -l"
/etc/default/libvirt-bin (add -l): libvirtd_opts="-d -l"
2) Can remote control be configured alongside local use on one machine?
For example, I want libvirt on this machine to accept incoming remote
connections and, at the same time, I want to be able to run virtual
machines locally on the same machine. Are these two things somehow
connected, or am I free to set up both?
As I see it, the flags touch not only libvirtd settings but also libvirt
settings.
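On CentOS 7 the listen flag lives in a different file than on Ubuntu; a sketch of what I believe is the equivalent setup there:

```shell
# /etc/libvirt/libvirtd.conf -- same flags as on Ubuntu:
#   listen_tls = 0
#   listen_tcp = 1
#   listen_addr = "0.0.0.0"
#   auth_tcp = "none"
# /etc/sysconfig/libvirtd -- uncomment so systemd starts libvirtd with --listen:
#   LIBVIRTD_ARGS="--listen"
systemctl restart libvirtd
# and open libvirt's TCP port if firewalld is running:
firewall-cmd --permanent --add-port=16509/tcp && firewall-cmd --reload
```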
6 years
[libvirt-users] Libvirt access control drivers
by Anastasiya Ruzhanskaya
Hello!
According to the documentation, the access control drivers are not in
really "good condition". There is polkit, but it can distinguish users only
by pid. However, I have seen some articles about more fine-grained control
and about SELinux drivers for libvirt. So, what is the status now? Should I
implement something myself if I want access control based on login? Are
there instructions on how to write such drivers, or does something already
exist?
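For what it's worth, newer libvirt does support fine-grained ACLs through polkit, keyed on per-API action names such as org.libvirt.api.domain.start; a minimal sketch (the username "alice" and the rule file name are made up):

```shell
# /etc/libvirt/libvirtd.conf -- enable the fine-grained driver:
#   access_drivers = [ "polkit" ]
# grant user "alice" read-only access to domains, nothing more:
cat > /etc/polkit-1/rules.d/100-libvirt-acl.rules <<'EOF'
polkit.addRule(function(action, subject) {
    if (action.id.indexOf("org.libvirt.api.domain.read") === 0 &&
        subject.user == "alice") {
        return polkit.Result.YES;
    }
});
EOF
systemctl restart libvirtd
```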
6 years
[libvirt-users] Dual Stack setup problems
by Simon Michalke
Hi,
my host has 2 interfaces with 2 different IPv4 addresses: eno1 and
eno1:1. eno1 also has an IPv6 subnet.
Is there a way to have IPv4 go out via eno1:1 and IPv6 via eno1 with one
network, or do I have to set up 2 virtual networks with 2 interfaces to
get dual stack working properly?
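In principle a single libvirt network can carry both families, since the <network> XML accepts multiple <ip> elements with different family attributes; something like the sketch below (addresses are placeholders, and whether the IPv4 side can be tied specifically to the eno1:1 alias is a separate question):

```xml
<network>
  <name>dualstack</name>
  <forward mode='route'/>
  <bridge name='virbr1'/>
  <ip family='ipv4' address='192.168.100.1' netmask='255.255.255.0'/>
  <ip family='ipv6' address='2001:db8::1' prefix='64'/>
</network>
```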
Regards,
Simon
6 years
[libvirt-users] guest on host's shutdown/reboot (problem?) - centos 7.5
by lejeczek
hi guys
I have just two guests which libvirt looks after, and when I shut down
those guests:
$ virsh shutdown $_dom
each domain takes little time to power down, ~20 sec, as one would expect.
But! When I reboot the host, with the following in my
/etc/sysconfig/libvirt-guests :
ON_SHUTDOWN=shutdown
PARALLEL_SHUTDOWN=1
SHUTDOWN_TIMEOUT=300
I see that shutting down the libvirtd service takes minutes, a good few
minutes, to deal with the guests' shutdown (I suppose?).
How could I dig into this and troubleshoot?
Would you have any idea why the libvirtd shutdown takes that long?
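One note: PARALLEL_SHUTDOWN=1 means guests are shut down one at a time, and each guest that does not react to the ACPI shutdown signal is waited on for up to SHUTDOWN_TIMEOUT=300 seconds, so two unresponsive guests could by themselves account for up to 10 minutes. A way to see what libvirt-guests is doing, plus a possibly faster configuration (values illustrative):

```shell
# inspect what libvirt-guests did during the previous boot's shutdown:
journalctl -b -1 -u libvirt-guests
# /etc/sysconfig/libvirt-guests -- shut both guests down concurrently
# and give up sooner:
#   ON_SHUTDOWN=shutdown
#   PARALLEL_SHUTDOWN=2
#   SHUTDOWN_TIMEOUT=120
```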
many thanks, L.
6 years
[libvirt-users] libvirt call qemu to create vm need more than 10 seconds
by netsurfed
Hi all:
It takes more than 10 seconds to create a vm on a Dell R830 machine, but less than 2 seconds on other machines. This is not normal, so I turned on the debug log for libvirtd. Analyzing the log, I found that the time was spent in libvirtd calling qemu: thread 95225 calls the qemuProcessLaunch interface at 14:22:30.129 and then builds the emulator command line, but the qemuDomainObjTaint method is only called 10 seconds later. I tried to analyze the qemuProcessLaunch function to find the reason, but found nothing. What is the cause?
Below some information about my hypervisor:
[root@node1 datapool]# virsh -v
3.4.0
[root@node1 datapool]# qemu-x86_64 --version
qemu-x86_64 version 2.9.0
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
[root@node1 datapool]# uname -a
Linux node1 3.10.0-693.el7.x86_64 #1 SMP Tue Aug 22 21:09:27 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
There are too many debug logs, so I paste only the log of thread 95225 below:
Line 58679: 2018-11-13 14:22:30.129+0000: 95225: debug : qemuProcessLaunch:5614 : vm=0x7f822c002fe0 name=6efd749c-3162-4ac4-90e6-b3fca6fde9bc id=1 asyncJob=6 incoming.launchURI=<null> incoming.deferredURI=<null> incoming.fd=-1 incoming.path=<null> snapshot=(nil) vmop=0 flags=0x11
Line 58680: 2018-11-13 14:22:30.129+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f80166f0
Line 58681: 2018-11-13 14:22:30.129+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f8257e60
Line 58682: 2018-11-13 14:22:30.129+0000: 95225: debug : qemuProcessLaunch:5634 : Creating domain log file
Line 58683: 2018-11-13 14:22:30.129+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f80166f0
Line 58684: 2018-11-13 14:22:30.129+0000: 95225: info : virObjectNew:202 : OBJECT_NEW: obj=0x7f822c006560 classname=qemuDomainLogContext
Line 58685: 2018-11-13 14:22:30.129+0000: 95225: debug : qemuDomainLogContextNew:4563 : Context new 0x7f822c006560 stdioLogD=1
Line 58686: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetSocketNewConnectUNIX:640 : path=/var/run/libvirt/virtlogd-sock spawnDaemon=0 binary=<null>
Line 58687: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetSocketNewConnectUNIX:704 : connect() succeeded
Line 58688: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetSocketNew:236 : localAddr=0x7f8244ca5530 remoteAddr=0x7f8244ca55c0 fd=20 errfd=-1 pid=0
Line 58689: 2018-11-13 14:22:30.130+0000: 95225: info : virObjectNew:202 : OBJECT_NEW: obj=0x7f822c00aab0 classname=virNetSocket
Line 58690: 2018-11-13 14:22:30.130+0000: 95225: info : virNetSocketNew:292 : RPC_SOCKET_NEW: sock=0x7f822c00aab0 fd=20 errfd=-1 pid=0 localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
Line 58691: 2018-11-13 14:22:30.130+0000: 95225: info : virObjectNew:202 : OBJECT_NEW: obj=0x7f822c00b1b0 classname=virNetClient
Line 58692: 2018-11-13 14:22:30.130+0000: 95225: info : virNetClientNew:328 : RPC_CLIENT_NEW: client=0x7f822c00b1b0 sock=0x7f822c00aab0
Line 58693: 2018-11-13 14:22:30.130+0000: 95225: info : virObjectNew:202 : OBJECT_NEW: obj=0x7f822c00ac50 classname=virNetClientProgram
Line 58694: 2018-11-13 14:22:30.130+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f822c00ac50
Line 58695: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetMessageNew:46 : msg=0x7f822c00a900 tracked=0
Line 58696: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetMessageEncodePayload:387 : Encode length as 164
Line 58697: 2018-11-13 14:22:30.130+0000: 95225: info : virNetClientSendInternal:2120 : RPC_CLIENT_MSG_TX_QUEUE: client=0x7f822c00b1b0 len=164 prog=2270401305 vers=1 proc=1 type=0 status=0 serial=0
Line 58698: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetClientCallNew:2073 : New call 0x7f822c00acc0: msg=0x7f822c00a900, expectReply=1, nonBlock=0
Line 58699: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetClientIO:1879 : Outgoing message prog=2270401305 version=1 serial=0 proc=1 type=0 length=164 dispatch=(nil)
Line 58700: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetClientIO:1938 : We have the buck head=0x7f822c00acc0 call=0x7f822c00acc0
Line 58701: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetSocketUpdateIOCallback:2198 : Watch not registered on socket 0x7f822c00aab0
Line 58702: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetMessageDecodeLength:161 : Got length, now need 48 total (44 more)
Line 58703: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetMessageDecodeNumFDs:336 : Got 1 FDs from peer
Line 58704: 2018-11-13 14:22:30.130+0000: 95225: info : virNetSocketRecvFD:2048 : RPC_SOCKET_RECV_FD: sock=0x7f822c00aab0 fd=23
Line 58705: 2018-11-13 14:22:30.130+0000: 95225: info : virNetClientCallDispatch:1267 : RPC_CLIENT_MSG_RX: client=0x7f822c00b1b0 len=48 prog=2270401305 vers=1 proc=1 type=5 status=0 serial=0
Line 58706: 2018-11-13 14:22:30.130+0000: 95225: debug : virKeepAliveCheckMessage:374 : ka=(nil), client=0x7f82505b0086, msg=0x7f822c00b210
Line 58707: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetMessageClear:74 : msg=0x7f822c00b210 nfds=0
Line 58708: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetClientIOEventLoopPassTheBuck:1562 : Giving up the buck 0x7f822c00acc0
Line 58709: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetClientIOEventLoopPassTheBuck:1576 : No thread to pass the buck to
Line 58710: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetSocketUpdateIOCallback:2198 : Watch not registered on socket 0x7f822c00aab0
Line 58711: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetClientIO:1968 : All done with our call head=(nil) call=0x7f822c00acc0 rv=0
Line 58712: 2018-11-13 14:22:30.130+0000: 95225: debug : virNetMessageFree:87 : msg=0x7f822c00a900 nfds=1 cb=(nil)
Line 58713: 2018-11-13 14:22:30.130+0000: 95225: debug : virFileClose:110 : Closed fd 23
Line 58714: 2018-11-13 14:22:30.130+0000: 95225: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f81f80166f0
Line 58715: 2018-11-13 14:22:30.130+0000: 95225: debug : qemuProcessLaunch:5640 : Building emulator command line
Line 58716: 2018-11-13 14:22:30.130+0000: 95225: debug : virFileClose:110 : Closed fd 23
Line 58717: 2018-11-13 14:22:30.130+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f80166f0
Line 58718: 2018-11-13 14:22:30.130+0000: 95225: debug : qemuBuildCommandLine:9894 : driver=0x7f81f8011790 def=0x7f822c00b7e0 mon=0x7f822c0062a0 json=1 qemuCaps=0x7f822c002e80 migrateURI=(null) snapshot=(nil) vmop=0
Line 58719: 2018-11-13 14:22:30.131+0000: 95225: debug : virArchFromHost:181 : Mapped x86_64 to 32 (x86_64)
Line 58720: 2018-11-13 14:22:30.131+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f8257e60
Line 58721: 2018-11-13 14:22:30.131+0000: 95225: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f81f8257e60
Line 58722: 2018-11-13 14:22:30.131+0000: 95225: debug : virCPUCheckFeature:817 : arch=x86_64, cpu=0x7f822c00c090, feature=svm
Line 58723: 2018-11-13 14:22:30.131+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f80166f0
Line 58724: 2018-11-13 14:22:30.131+0000: 95225: info : virNetDevProbeVnetHdr:208 : Enabling IFF_VNET_HDR
Line 58772: 2018-11-13 14:22:30.132+0000: 95225: debug : virNetDevSetMACInternal:277 : SIOCSIFHWADDR vnet0 MAC=fe:54:00:28:29:1d - Success
Line 58773: 2018-11-13 14:22:30.132+0000: 95225: debug : virFileClose:110 : Closed fd 25
Line 58775: 2018-11-13 14:22:30.132+0000: 95225: debug : virFileClose:110 : Closed fd 25
Line 58777: 2018-11-13 14:22:30.132+0000: 95225: debug : virFileClose:110 : Closed fd 25
Line 58782: 2018-11-13 14:22:30.132+0000: 95225: debug : virCommandRunAsync:2448 : About to run ovs-vsctl --timeout=5 -- --if-exists del-port vnet0 -- add-port lcs_br_1 vnet0 -- set Interface vnet0 'external-ids:attached-mac="52:54:00:28:29:1d"' -- set Interface vnet0 'external-ids:iface-id="04af9169-2c72-45e9-845d-3d162268728b"' -- set Interface vnet0 'external-ids:vm-id="6efd749c-3162-4ac4-90e6-b3fca6fde9bc"' -- set Interface vnet0 external-ids:iface-status=active
Line 58789: 2018-11-13 14:22:30.133+0000: 95225: debug : virFileClose:110 : Closed fd 25
Line 58791: 2018-11-13 14:22:30.133+0000: 95225: debug : virFileClose:110 : Closed fd 27
Line 58792: 2018-11-13 14:22:30.133+0000: 95225: debug : virFileClose:110 : Closed fd 29
Line 58793: 2018-11-13 14:22:30.133+0000: 95225: debug : virCommandRunAsync:2451 : Command result 0, with PID 95350
Line 58993: 2018-11-13 14:22:30.146+0000: 95225: debug : virCommandRun:2299 : Result status 0, stdout: '' stderr: '2018-11-13 14:22:30.136+0000: 95350: debug : virFileClose:110 : Closed fd 27
Line 58997: 2018-11-13 14:22:30.146+0000: 95225: debug : virFileClose:110 : Closed fd 26
Line 58998: 2018-11-13 14:22:30.146+0000: 95225: debug : virFileClose:110 : Closed fd 28
Line 58999: 2018-11-13 14:22:30.147+0000: 95225: debug : virFileClose:110 : Closed fd 25
Line 59000: 2018-11-13 14:22:30.147+0000: 95225: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f81f80166f0
Line 59001: 2018-11-13 14:22:30.147+0000: 95225: debug : virFileClose:110 : Closed fd 25
Line 59002: 2018-11-13 14:22:30.147+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f80166f0
Line 59003: 2018-11-13 14:22:30.147+0000: 95225: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f81f80166f0
Line 59004: 2018-11-13 14:22:30.147+0000: 95225: info : virObjectUnref:259 : OBJECT_UNREF: obj=0x7f81f80166f0
Line 60062: 2018-11-13 14:22:40.161+0000: 95225: info : virObjectRef:296 : OBJECT_REF: obj=0x7f81f80166f0
Line 60063: 2018-11-13 14:22:40.161+0000: 95225: warning : qemuDomainObjTaint:4415 : Domain id=1 name='6efd749c-3162-4ac4-90e6-b3fca6fde9bc' uuid=6efd749c-3162-4ac4-90e6-b3fca6fde9bc is tainted: high-privileges
6 years
[libvirt-users] performance overhead with backing chains?
by Riccardo Ravaioli
Hi,
I was wondering if there are any studies on the performance overhead of a
QEMU/KVM virtual machine when the backing chain of its disk(s) is of size
n, with n > 1. In particular, it would be useful to know up to what size we
can expect little or no impact on a Linux virtual machine, for instance.
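For anyone wanting to measure this, a throwaway chain of depth n is easy to build with qemu-img (paths and the depth of 10 are only examples), after which the same fio or dd run can be repeated against images at different depths:

```shell
# build a backing chain base.qcow2 <- s1.qcow2 <- ... <- s10.qcow2
qemu-img create -f qcow2 base.qcow2 10G
prev=base.qcow2
for i in $(seq 1 10); do
    qemu-img create -f qcow2 -b "$prev" "s$i.qcow2"
    prev="s$i.qcow2"
done
qemu-img info --backing-chain s10.qcow2
# attach s10.qcow2 to a test guest and benchmark, then repeat with s5, base, ...
```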
Thanks!
Regards,
Riccardo
6 years