[libvirt] An AB deadlock and libvirtd crash problem the other day with virsh console
by weifuqiang
Hi all:
I encountered an AB deadlock and a libvirtd crash the other day.
The steps to reproduce the problems:
1. Use the command "virsh create ***.xml" to create a VM.
2. After the VM is running, use the command "virsh console ***" to connect to the VM's console.
3. After the connection is up, use the command "virsh destroy ****" to destroy the VM and delete it.
Then either the AB lock problem occurs, or libvirtd crashes.
AB LOCK stack:
[Switching to thread 1 (Thread 0x7ff96bf3d7a0 (LWP 9772))]
#0 0x00007ff967b49324 in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007ff967b44669 in _L_lock_1008 () from /lib64/libpthread.so.0
#2 0x00007ff967b4447e in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007ff96b3bd96c in virChrdevFDStreamCloseCb () from /usr/lib64/libvirt.so.0
#4 0x00007ff96b3c9a34 in virFDStreamCloseInt () from /usr/lib64/libvirt.so.0
#5 0x00007ff96b40ac3e in virStreamAbort () from /usr/lib64/libvirt.so.0
#6 0x00007ff96bfa242a in daemonStreamHandleAbort ()
#7 0x00007ff96bfa2803 in daemonStreamEvent ()
#8 0x00007ff96b3c9bfc in virFDStreamEvent () from /usr/lib64/libvirt.so.0
#9 0x00007ff96b3083aa in virEventPollRunOnce () from /usr/lib64/libvirt.so.0
#10 0x00007ff96b307042 in virEventRunDefaultImpl () from /usr/lib64/libvirt.so.0
#11 0x00007ff96b4503dd in virNetDaemonRun () from /usr/lib64/libvirt.so.0
#12 0x00007ff96bf72528 in main ()
[Switching to thread 12 (Thread 0x7ff9640ff700 (LWP 9778))]
#0 0x00007ff967b49324 in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007ff967b44669 in _L_lock_1008 () from /lib64/libpthread.so.0
#2 0x00007ff967b4447e in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007ff96b3c8f26 in virFDStreamSetInternalCloseCb () from /usr/lib64/libvirt.so.0
#4 0x00007ff96b30f7b9 in virHashForEach () from /usr/lib64/libvirt.so.0
#5 0x00007ff96b3be1a2 in virChrdevFree () from /usr/lib64/libvirt.so.0
#6 0x00007ff96055980f in qemuDomainObjPrivateFree () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#7 0x00007ff96b367014 in virDomainObjDispose () from /usr/lib64/libvirt.so.0
#8 0x00007ff96b330c33 in virObjectUnref () from /usr/lib64/libvirt.so.0
#9 0x00007ff96b35edb9 in virDomainObjEndAPI () from /usr/lib64/libvirt.so.0
#10 0x00007ff9605c6e89 in qemuDomainUndefineFlags () from /usr/lib64/libvirt/connection-driver/libvirt_driver_qemu.so
#11 0x00007ff96b3e35ac in virDomainUndefine () from /usr/lib64/libvirt.so.0
#12 0x00007ff96bf996db in remoteDispatchDomainUndefineHelper ()
#13 0x00007ff96b4537ef in virNetServerProgramDispatch () from /usr/lib64/libvirt.so.0
#14 0x00007ff96b45205e in virNetServerProcessMsg () from /usr/lib64/libvirt.so.0
#15 0x00007ff96b4520e8 in virNetServerHandleJob () from /usr/lib64/libvirt.so.0
#16 0x00007ff96b349f94 in virThreadPoolWorker () from /usr/lib64/libvirt.so.0
#17 0x00007ff96b3494a8 in virThreadHelper () from /usr/lib64/libvirt.so.0
#18 0x00007ff967b42806 in start_thread () from /lib64/libpthread.so.0
#19 0x00007ff96789d67d in clone () from /lib64/libc.so.6
#20 0x0000000000000000 in ?? ()
Coredump stack:
(gdb) f 1
#1 0x00007f1672a7c73f in virMutexLock (m=0x68) at util/virthread.c:89
89 pthread_mutex_lock(&m->lock);
(gdb) bt
#0 0x00007f1670ace444 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007f1672a7c73f in virMutexLock (m=0x68) at util/virthread.c:89
#2 0x00007f1672b2fda2 in virFDStreamSetInternalCloseCb (st=0x7f1673a1a370, cb=0x0, opaque=0x0, fcb=0x0) at fdstream.c:796
#3 0x00007f1672b1f5ad in virChrdevFreeClearCallbacks (payload=0x7f1673a1a370, name=0x7f1673a12520, data=0x0) at conf/virchrdev.c:301
#4 0x00007f1672a36c9b in virHashForEach (table=0x7f1673a0ba20, iter=0x7f1672b1f567 <virChrdevFreeClearCallbacks>, data=0x0) at util/virhash.c:521
#5 0x00007f1672b1f626 in virChrdevFree (devs=0x7f1673a2f820) at conf/virchrdev.c:316
#6 0x00007f1669cca988 in qemuDomainObjPrivateFree (data=0x7f1673a1afa0) at qemu/qemu_domain.c:496
#7 0x00007f1672a9e293 in virDomainObjDispose (obj=0x7f1673a16400) at conf/domain_conf.c:2545
#8 0x00007f1672a5b326 in virObjectUnref (anyobj=0x7f1673a16400) at util/virobject.c:265
#9 0x00007f1672a9e7a5 in virDomainObjEndAPI (vm=0x7f166b07d8d0) at conf/domain_conf.c:2684
#10 0x00007f1669d3ff00 in qemuDomainDestroyFlags (dom=0x7f16600017f0, flags=0) at qemu/qemu_driver.c:2264
#11 0x00007f1669d3ff69 in qemuDomainDestroy (dom=0x7f16600017f0) at qemu/qemu_driver.c:2273
#12 0x00007f1672b39333 in virDomainDestroy (domain=0x7f16600017f0) at libvirt-domain.c:483
#13 0x00007f16737167a5 in remoteDispatchDomainDestroy (server=0x7f167398f4f0, client=0x7f1673a172f0, msg=0x7f1673a2fc70, rerr=0x7f166b07db50, args=0x7f1673a2f9b0)
at remote_dispatch.h:4002
#14 0x00007f1673716648 in remoteDispatchDomainDestroyHelper (server=0x7f167398f4f0, client=0x7f1673a172f0, msg=0x7f1673a2fc70, rerr=0x7f166b07db50, args=0x7f1673a2f9b0, ret=
0x7f1673a10e10) at remote_dispatch.h:3977
#15 0x00007f1672bd8a9b in virNetServerProgramDispatchCall (prog=0x7f167399c610, server=0x7f167398f4f0, client=0x7f1673a172f0, msg=0x7f1673a2fc70)
at rpc/virnetserverprogram.c:437
#16 0x00007f1672bd85fd in virNetServerProgramDispatch (prog=0x7f167399c610, server=0x7f167398f4f0, client=0x7f1673a172f0, msg=0x7f1673a2fc70) at rpc/virnetserverprogram.c:307
#17 0x00007f1672bd2a08 in virNetServerProcessMsg (srv=0x7f167398f4f0, client=0x7f1673a172f0, prog=0x7f167399c610, msg=0x7f1673a2fc70) at rpc/virnetserver.c:136
#18 0x00007f1672bd2aed in virNetServerHandleJob (jobOpaque=0x7f1673a1d780, opaque=0x7f167398f4f0) at rpc/virnetserver.c:157
#19 0x00007f1672a7d833 in virThreadPoolWorker (opaque=0x7f167399bad0) at util/virthreadpool.c:145
#20 0x00007f1672a7cbec in virThreadHelper (data=0x7f167399c980) at util/virthread.c:206
#21 0x00007f1670acc806 in start_thread () from /lib64/libpthread.so.0
#22 0x00007f167082767d in clone () from /lib64/libc.so.6
#23 0x0000000000000000 in ?? ()
(gdb) bt
#0 0x00007f1670ad3324 in __lll_lock_wait () from /lib64/libpthread.so.0
#1 0x00007f1670ace669 in _L_lock_1008 () from /lib64/libpthread.so.0
#2 0x00007f1670ace47e in pthread_mutex_lock () from /lib64/libpthread.so.0
#3 0x00007f1672a7c73f in virMutexLock (m=0x7f1673a2f820) at util/virthread.c:89
#4 0x00007f1672b1f3fe in virChrdevFDStreamCloseCb (st=0x7f1673a1a370, opaque=0x7f1673a149e0) at conf/virchrdev.c:254
#5 0x00007f1672b2e8a8 in virFDStreamCloseInt (st=0x7f1673a1a370, streamAbort=true) at fdstream.c:329
#6 0x00007f1672b2e9d5 in virFDStreamAbort (st=0x7f1673a1a370) at fdstream.c:355
#7 0x00007f1672b7d856 in virStreamAbort (stream=0x7f1673a1a370) at libvirt-stream.c:663
#8 0x00007f16737497e8 in daemonStreamHandleAbort (client=0x7f1673a1b1b0, stream=0x7f1673a1ec50, msg=0x7f1673a0d660) at stream.c:613
#9 0x00007f16737499fa in daemonStreamHandleWrite (client=0x7f1673a1b1b0, stream=0x7f1673a1ec50) at stream.c:662
#10 0x00007f167374866b in daemonStreamEvent (st=0x7f1673a1a370, events=14, opaque=0x7f1673a1b1b0) at stream.c:138
#11 0x00007f1672b2e17a in virFDStreamEvent (watch=18, fd=17, events=14, opaque=0x7f1673a1a370) at fdstream.c:173
#12 0x00007f1672a2c2cb in virEventPollDispatchHandles (nfds=12, fds=0x7f1673a2fda0) at util/vireventpoll.c:509
#13 0x00007f1672a2cb08 in virEventPollRunOnce () at util/vireventpoll.c:658
#14 0x00007f1672a2a9e0 in virEventRunDefaultImpl () at util/virevent.c:308
#15 0x00007f1672bd25d2 in virNetDaemonRun (dmn=0x7f1673990ed0) at rpc/virnetdaemon.c:707
#16 0x00007f167370cad4 in main (argc=3, argv=0x7ffd63276288) at libvirtd.c:1581
The cause of this problem is that an fdstream abort or close event can occur at the same time as VM cleanup, and libvirtd does not synchronize the two flows well enough.
The relevant fdstream flows are below:
1. qemuDomainDefineXMLFlags -> virDomainObjListAdd -> qemuDomainObjPrivateAlloc -> virChrdevAlloc -> virHashCreate
2. qemuDomainOpenConsole -> virChrdevOpen -> virHashAddEntry(devs->hash, path, st)
3. virDomainObjDispose -> privateDataFreeFunc (qemuDomainObjPrivateFree) -> virChrdevFree (*dev locked*) -> virChrdevFreeClearCallbacks -> virFDStreamSetInternalCloseCb (*fdst locked*)
4. virFDStreamCloseInt (*fdst locked*) -> icbFreeOpaque (virChrdevFDStreamCloseCb (*dev locked*)) -> virHashRemoveEntry
The AB lock problem is obvious: step 3 locks the chardev before the fdstream, while step 4 takes the locks in the opposite order.
The cause of the libvirtd crash is that virFDStreamCloseInt() sets fdst to NULL while virFDStreamSetInternalCloseCb() still uses fdst->lock; note that fdst has already been freed at that point.
Another crash occurs when virChrdevFree() finishes first: devs->hash and all the data in the hash are freed, but the fdstream event flow keeps using the fdstream after the hash is gone, and libvirtd dumps core.
All of these problems arise because the VM cleanup flow and the fdstream flow run concurrently.
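To make the inverted lock order concrete, here is a minimal standalone sketch (my own illustration, not libvirt code; dev_lock and fdst_lock stand in for devs->lock and fdst->lock):

#include <pthread.h>

pthread_mutex_t dev_lock  = PTHREAD_MUTEX_INITIALIZER;  /* devs->lock */
pthread_mutex_t fdst_lock = PTHREAD_MUTEX_INITIALIZER;  /* fdst->lock */

/* Step 3 above: virChrdevFree -> virFDStreamSetInternalCloseCb */
void *cleanup_thread(void *arg)
{
    pthread_mutex_lock(&dev_lock);    /* takes the chardev lock first */
    pthread_mutex_lock(&fdst_lock);   /* then waits for the stream lock */
    pthread_mutex_unlock(&fdst_lock);
    pthread_mutex_unlock(&dev_lock);
    return arg;
}

/* Step 4 above: virFDStreamCloseInt -> virChrdevFDStreamCloseCb */
void *stream_event_thread(void *arg)
{
    pthread_mutex_lock(&fdst_lock);   /* takes the stream lock first */
    pthread_mutex_lock(&dev_lock);    /* then waits for the chardev lock */
    pthread_mutex_unlock(&dev_lock);
    pthread_mutex_unlock(&fdst_lock);
    return arg;
}

Once each thread holds its first mutex, both block forever waiting for the mutex the other holds, which is exactly what the two stacks above show.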
I worked around this problem by modifying virChrdevFree():
void virChrdevFree(virChrdevsPtr devs)
{
    if (!devs || !devs->hash)
        return;

    /* Busy-wait until the fdstream close/abort callbacks have removed
     * every entry from the hash, so the hash is never freed while the
     * stream side is still using it. */
    for (;;) {
        virMutexLock(&devs->lock);
        if (virHashSize(devs->hash) == 0) {
            virMutexUnlock(&devs->lock);
            break;
        }
        virMutexUnlock(&devs->lock);
        usleep(10 * 1000);
    }

    virMutexLock(&devs->lock);
    virHashFree(devs->hash);
    virMutexUnlock(&devs->lock);
    virMutexDestroy(&devs->lock);
    VIR_FREE(devs);
}
If the chardev is removed by the fdstream close or abort path when the VM is destroyed or shut down, this modification works well. But I'm not sure whether every chardev is guaranteed to be removed when the VM is cleaned up; if not, virChrdevFree() would sleep here forever.
Another solution is as follows:
virMutexLock(vm);
virChrdevFree();
virMutexUnlock(vm);
virMutexLock(vm);
virFDStreamCloseInt();
virMutexUnlock(vm);
Here I lock the VM before calling these two functions, which makes them run sequentially.
Do you have a better idea for solving this problem? Thanks in advance.
Best Regards
David
8 years, 10 months
[libvirt] [PATCH v3 0/6] admin: Introduce server listing API
by Erik Skultety
Since v2:
- static names of daemons are passed to virNetServerNew directly, instead
of creating an array of names like the previous version did
- dropped client-side server identification through ID, only name is used
- adjusted naming of some methods (prefixes again...)
- converted the server listing example to virt-admin command (finally)
Erik Skultety (6):
rpc: Introduce new element 'name' to virnetserver structure
virnetdaemon: Add post exec restart support for multiple servers
admin: Move admin_server.{h,c} to admin.{h,c}
admin: Introduce virAdmServer structure
admin: Introduce adminDaemonConnectListServers API
virt-admin: Introduce cmdSrvList
daemon/Makefile.am | 6 +-
daemon/admin.c | 181 +++++++++++++++++++++
daemon/admin.h | 36 ++++
daemon/admin_server.c | 121 +++++---------
daemon/admin_server.h | 23 ++-
daemon/libvirtd.c | 4 +-
include/libvirt/libvirt-admin.h | 11 ++
po/POTFILES.in | 2 +-
src/admin/admin_protocol.x | 26 ++-
src/admin_protocol-structs | 15 ++
src/datatypes.c | 36 ++++
src/datatypes.h | 34 ++++
src/libvirt-admin.c | 148 +++++++++++++++++
src/libvirt_admin_private.syms | 5 +
src/libvirt_admin_public.syms | 3 +
src/locking/lock_daemon.c | 3 +-
src/logging/log_daemon.c | 3 +-
src/lxc/lxc_controller.c | 2 +-
src/rpc/virnetdaemon.c | 15 ++
src/rpc/virnetdaemon.h | 3 +
src/rpc/virnetserver.c | 32 +++-
src/rpc/virnetserver.h | 5 +
.../input-data-admin-server-names.json | 128 +++++++++++++++
.../virnetdaemondata/output-data-admin-nomdns.json | 2 +
.../output-data-admin-server-names.json | 128 +++++++++++++++
.../virnetdaemondata/output-data-anon-clients.json | 1 +
.../output-data-initial-nomdns.json | 1 +
tests/virnetdaemondata/output-data-initial.json | 1 +
tests/virnetdaemontest.c | 40 ++---
tools/virt-admin.c | 62 +++++++
30 files changed, 946 insertions(+), 131 deletions(-)
create mode 100644 daemon/admin.c
create mode 100644 daemon/admin.h
create mode 100644 tests/virnetdaemondata/input-data-admin-server-names.json
create mode 100644 tests/virnetdaemondata/output-data-admin-server-names.json
--
2.4.3
8 years, 10 months
[libvirt] [PATCH V2 0/9] support multi-thread compress migration.
by ShaoHe Feng
This patch series adds support for multi-thread compression during live migration.
Eli Qiao (4):
Add test cases for qemuMonitorJSONGetMigrationParameter
remote: Add support for set and get multi-thread migration parameters
qemu_driver: Add support to set/get migration parameters.
virsh: Add set and get multi-thread migration parameters commands
ShaoHe Feng (5):
qemu_migration: Add support for multi-thread compressed migration enable
qemu: Add monitor API for get/set migration parameters
set multi-thread compress params for Migrate3 during live migration
virsh: add multi-thread migration option for live migrate command
Implement the public APIs for multi-thread compress parameters.
.gnulib | 2 +-
daemon/remote.c | 62 +++++++++++
include/libvirt/libvirt-domain.h | 31 ++++++
src/driver-hypervisor.h | 14 +++
src/libvirt-domain.c | 110 +++++++++++++++++++
src/libvirt_public.syms | 5 +
src/qemu/qemu_domain.h | 3 +
src/qemu/qemu_driver.c | 186 ++++++++++++++++++++++++++++++++
src/qemu/qemu_migration.c | 105 ++++++++++++++++++
src/qemu/qemu_migration.h | 32 ++++--
src/qemu/qemu_monitor.c | 40 ++++++-
src/qemu/qemu_monitor.h | 11 ++
src/qemu/qemu_monitor_json.c | 93 ++++++++++++++++
src/qemu/qemu_monitor_json.h | 9 ++
src/qemu/qemu_monitor_text.c | 95 +++++++++++++++++
src/qemu/qemu_monitor_text.h | 10 ++
src/remote/remote_driver.c | 54 ++++++++++
src/remote/remote_protocol.x | 30 +++++-
src/remote_protocol-structs | 26 +++++
tests/qemumonitorjsontest.c | 53 ++++++++++
tools/virsh-domain.c | 223 ++++++++++++++++++++++++++++++++++++++-
tools/virsh.pod | 37 +++++--
22 files changed, 1212 insertions(+), 19 deletions(-)
--
2.1.4
8 years, 10 months
[libvirt] Trying to debug "Received unexpected event 3" from libvirt
by Yaniv Kaul
Hi,
I'm trying to debug this issue, which may be related to my inability to perform a live snapshot.
1. I'm not sure what 'Waking up a tragedian" in the debug log means - what
exactly is a tragedian?
2. In any case, it'd be great if the WARN would mention mon->await_event - is that the event libvirt is actually waiting for? (See the sketch after this list.)
(Both from qemu/qemu_agent.c)
3. I reckon event 3 is QEMU_AGENT_EVENT_RESET? (from qemu/qemu_agent.h)
4. I'm also getting 'End of file while reading data: Input/output error'
messages, not sure what they mean yet.
(Using 1.2.18.2-1 on FC23, trying to live-snapshot VMs with CentOS 6 & 7 in them, all with the qemu guest agent, AFAIK.)
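Regarding point 2, here is a minimal sketch of what an improved warning could look like (a simplified stand-in for qemuAgentNotifyEvent(), not the actual libvirt code; the enum values follow qemu/qemu_agent.h, the rest is assumed):

#include <stdio.h>

typedef enum {
    QEMU_AGENT_EVENT_NONE = 0,
    QEMU_AGENT_EVENT_SHUTDOWN,
    QEMU_AGENT_EVENT_SUSPEND,
    QEMU_AGENT_EVENT_RESET,    /* == 3, matching the log message */
} qemuAgentEvent;

struct agent { qemuAgentEvent await_event; };

static void notify_event(struct agent *mon, qemuAgentEvent event)
{
    if (mon->await_event == event) {
        fprintf(stderr, "debug: Waking up a tragedian\n");
        mon->await_event = QEMU_AGENT_EVENT_NONE;
    } else {
        /* the proposed improvement: report what was being awaited too */
        fprintf(stderr,
                "warning: Received unexpected event %d (awaiting %d)\n",
                event, mon->await_event);
    }
}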
TIA,
Y.
8 years, 10 months
[libvirt] [PATCH 0/3] Misc fixes
by Cédric Bosdonnat
Hi all,
Here are a few patches without a strong connection to each other. The first one merely allows virt-login-shell not to be packaged even when the lxc driver is enabled. The other ones are related to mount security.
I'm wondering if changing the default set of dropped capabilities in the lxc driver is acceptable... dropping sys_admin makes sense, but it can introduce incompatibilities for users who need it, as they will have to enable it explicitly.
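If the default does change, a user who needs sys_admin would presumably re-enable it per domain along these lines (my assumption of how the LXC capabilities feature XML would be used here, not taken from the patches):

<features>
  <capabilities policy='default'>
    <sys_admin state='on'/>
  </capabilities>
</features>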
Cédric Bosdonnat (3):
Allow building lxc without virt-login-shell
virt-aa-helper: don't deny writes to readonly mounts
lxc: drop sys_admin caps by default
configure.ac | 14 ++++++++++++++
src/lxc/lxc_container.c | 1 +
src/security/virt-aa-helper.c | 5 ++++-
tools/Makefile.am | 12 ++++++------
4 files changed, 25 insertions(+), 7 deletions(-)
--
2.1.4
8 years, 10 months
[libvirt] [RFC] memory settings interface for containers
by Nikolay Shirokovskiy
Hi, everyone.
I plan to add means to configure vz containers memory setting and have trouble
getting it done thru libvirt interface. Looks like current interface fits good
for vm memory managment but its not clear how to use it with containers. First
let's take aside memory hotplugging which is obviously not suitable for
containers. Then memory interface is represented by 2 parameters: total_memory
and cur_balloon. For VMs total_memory can't be changed at runtime, cur_ballon
can't be greater than total_memory. But for containers memory model is
different. We have only one parameter and it can be changed for running
domains. So question is how to map this model to existing interface (it is
unlikely to have a new interface for this case). I plan to make both parameters
to have same meaning and be equal for containers and update virsh, API and xml
model documentation accordingly.
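To make the proposal concrete, here is a rough sketch of what that mapping would mean at the public API level (illustrative only; set_container_memory is a hypothetical helper, while virDomainSetMemoryFlags and the flags are existing libvirt API):

#include <libvirt/libvirt.h>

/* Under the proposal, total_memory == cur_balloon for a container, so
 * changing either one changes the single container memory limit, even
 * while the domain is running (something virDomainSetMemoryFlags with
 * VIR_DOMAIN_MEM_MAXIMUM currently rejects for running VMs). */
static int set_container_memory(virDomainPtr dom, unsigned long kibibytes)
{
    if (virDomainSetMemoryFlags(dom, kibibytes,
                                VIR_DOMAIN_AFFECT_LIVE |
                                VIR_DOMAIN_MEM_MAXIMUM) < 0)
        return -1;

    /* cur_balloon tracks total_memory automatically in this model,
     * so no separate balloon adjustment is needed. */
    return 0;
}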
I'd be happy to hear core developers' opinions on this topic.
8 years, 10 months
[libvirt] Bug in RPC code causes failure to start LXC container using virDomainCreateXMLWithFiles
by Ben Gray
Hi,
Occasionally when trying to start LXC containers with fds I get the
following error:
virNetMessageDupFD:562 : Unable to duplicate FD -1: Bad file
descriptor
I tracked it down to the code that handles EAGAIN errors from recvfd. In such cases the virNetMessageDecodeNumFDs function may be called multiple times from virNetServerClientDispatchRead, and each time it overwrites the msg->fds array. In the best case (when msg->donefds == 0) this only leaks memory; in the worst case it leaks any FDs already in msg->fds and causes subsequent failures when dup is called.
A very similar problem is mention here:
https://www.redhat.com/archives/libvir-list/2012-December/msg01306.html
Below is my patch to fix the issue.
--- a/src/rpc/virnetserverclient.c      2015-01-23 11:46:24.000000000 +0000
+++ b/src/rpc/virnetserverclient.c      2015-11-26 15:30:51.214462290 +0000
@@ -1107,36 +1107,40 @@
         /* Now figure out if we need to read more data to get some
          * file descriptors */
-        if (msg->header.type == VIR_NET_CALL_WITH_FDS &&
-            virNetMessageDecodeNumFDs(msg) < 0) {
-            virNetMessageQueueServe(&client->rx);
-            virNetMessageFree(msg);
-            client->wantClose = true;
-            return; /* Error */
-        }
+        if (msg->header.type == VIR_NET_CALL_WITH_FDS) {
+            size_t i;
-        /* Try getting the file descriptors (may fail if blocking) */
-        for (i = msg->donefds; i < msg->nfds; i++) {
-            int rv;
-            if ((rv = virNetSocketRecvFD(client->sock, &(msg->fds[i]))) < 0) {
+            if (msg->nfds == 0 &&
+                virNetMessageDecodeNumFDs(msg) < 0) {
                 virNetMessageQueueServe(&client->rx);
                 virNetMessageFree(msg);
                 client->wantClose = true;
-                return;
+                return; /* Error */
             }
-            if (rv == 0) /* Blocking */
-                break;
-            msg->donefds++;
-        }
-        /* Need to poll() until FDs arrive */
-        if (msg->donefds < msg->nfds) {
-            /* Because DecodeHeader/NumFDs reset bufferOffset, we
-             * put it back to what it was, so everything works
-             * again next time we run this method
-             */
-            client->rx->bufferOffset = client->rx->bufferLength;
-            return;
+            /* Try getting the file descriptors (may fail if blocking) */
+            for (i = msg->donefds; i < msg->nfds; i++) {
+                int rv;
+                if ((rv = virNetSocketRecvFD(client->sock, &(msg->fds[i]))) < 0) {
+                    virNetMessageQueueServe(&client->rx);
+                    virNetMessageFree(msg);
+                    client->wantClose = true;
+                    return;
+                }
+                if (rv == 0) /* Blocking */
+                    break;
+                msg->donefds++;
+            }
+
+            /* Need to poll() until FDs arrive */
+            if (msg->donefds < msg->nfds) {
+                /* Because DecodeHeader/NumFDs reset bufferOffset, we
+                 * put it back to what it was, so everything works
+                 * again next time we run this method
+                 */
+                client->rx->bufferOffset = client->rx->bufferLength;
+                return;
+            }
         }
         /* Definitely finished reading, so remove from queue */
8 years, 10 months
[libvirt] [PATCH 0/5] logging fixes
by Laine Stump
These are all related to excessive, misleading, or missing info in
logs when trying to debug problems with SR-IOV network
devices.
Patch 2 changes the logging to eliminate an error message when no error has occurred (and to avoid overwriting a prior error when a DISASSOCIATE happens as part of the cleanup after said prior error). However, patch 2 also contains a behavior change that could have unintended bad consequences, which is why I've Cc'ed Christian at Cisco and Stefan and Shivaprasad at IBM, in hopes that they (or someone they can contact at their respective organizations) can look at the change and report back if it will cause a problem. The change in question (again, in 2/5) is that we previously always returned a status of 0 (PORT_VDP_RESPONSE_SUCCESS) from virNetDevVPortProfileGetStatus if instanceId was NULL; that is *always* the case for both ASSOCIATE and DISASSOCIATE with 802.1Qbg, and is true for all DISASSOCIATE commands with 802.1Qbh. With the change in patch 2/5, we now set status to the actual IFLA_PORT_RESPONSE from the response message. That seems to be the correct behavior, but it could have bad side effects if there is a previously undiscovered bug at the other end of the communication.
Laine Stump (5):
util: report the MAC address that couldn't be set
util: don't log error in virNetDevVPortProfileGetStatus if instanceId
is NULL
util: improve error reporting in virNetDevVPortProfileGetStatus
util: reduce debug log in virPCIGetVirtualFunctions()
docs: update to properly reflect meaning of fields in log filter
daemon/libvirtd.conf | 14 ++++++---
docs/logging.html.in | 15 ++++++----
src/util/virnetdev.c | 23 +++++++++-----
src/util/virnetdevvportprofile.c | 65 ++++++++++++++++++++++++++++++++--------
src/util/virpci.c | 37 ++++++-----------------
src/util/virpci.h | 4 +--
6 files changed, 98 insertions(+), 60 deletions(-)
--
2.5.0
8 years, 10 months
[libvirt] [PATCH 0/5] auto-add USB2 controller set for Q35
by Laine Stump
For just about every other machinetype, libvirt automatically adds a
USB controller if there is no controller (including "type='none'")
specified in the config. It doesn't do this for the Q35 machinetype,
because Q35 hardware would have a USB2 controller, USB2 controllers
come in sets of multiple devices, and the code that auto-adds the USB
controller was really setup to just add a single controller. Expanding
that to adding a set of related controllers was beyond the amount of
time I had when putting in the initial Q35 support, so I left it "for
later", and then forgot about it until someone reminded me in the hall
at KVM Forum this summer.
I find the practice of auto-adding devices that aren't required for
operation of the virtual machine to be a bit odd, but this does make
the Q35 machinetype more consistent with all the others, and it is
still possible to force no USB controllers by specifying:
<controller type='usb' model='none'/>
Since the USB controllers on a real Q35 machine are on bus 0 slot 0x1D, there is also a patch here that attempts to use that address for the first set of USB controllers (and 0x1A for the 2nd set); a sketch of the resulting XML is below.
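For reference, the auto-added companion set would presumably look something like this in the domain XML (the models and addresses are my reading of the usual Q35 USB2 layout, not copied from the patches):

<controller type='usb' index='0' model='ich9-ehci1'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x7'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci1'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x0' multifunction='on'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci2'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x1'/>
</controller>
<controller type='usb' index='0' model='ich9-uhci3'>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x1d' function='0x2'/>
</controller>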
Finally, patch 1 is a bugfix for a problem that hadn't been noticed
before, because nobody had tried to connect a USB controller to a
pcie-root-port (which has a single slot that is numbered 0).
Laine Stump (5):
qemu: don't assume slot 0 is unused/reserved.
qemu: prefer 00:1D.x and 00:1A.x for USB2 controllers on Q35
conf: add virDomainDefAddController()
qemu: define virDomainDevAddUSBController()
qemu: auto-add a USB2 controller set for Q35 machines
src/conf/domain_conf.c | 104 +++++++++++++++++----
src/conf/domain_conf.h | 2 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_command.c | 57 ++++++++++-
src/qemu/qemu_domain.c | 14 ++-
.../qemuxml2argv-q35-usb2-multi.args | 40 ++++++++
.../qemuxml2argv-q35-usb2-multi.xml | 47 ++++++++++
.../qemuxml2argv-q35-usb2-reorder.args | 40 ++++++++
.../qemuxml2argv-q35-usb2-reorder.xml | 47 ++++++++++
tests/qemuxml2argvdata/qemuxml2argv-q35-usb2.args | 30 ++++++
tests/qemuxml2argvdata/qemuxml2argv-q35-usb2.xml | 39 ++++++++
tests/qemuxml2argvdata/qemuxml2argv-q35.args | 5 +
tests/qemuxml2argvtest.c | 22 +++++
.../qemuxml2xmlout-q35-usb2-multi.xml | 66 +++++++++++++
.../qemuxml2xmlout-q35-usb2-reorder.xml | 66 +++++++++++++
.../qemuxml2xmloutdata/qemuxml2xmlout-q35-usb2.xml | 46 +++++++++
tests/qemuxml2xmltest.c | 3 +
17 files changed, 606 insertions(+), 23 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-usb2-multi.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-usb2-multi.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-usb2-reorder.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-usb2-reorder.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-usb2.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-q35-usb2.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-q35-usb2-multi.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-q35-usb2-reorder.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-q35-usb2.xml
--
2.4.3
8 years, 10 months