[libvirt] [PATCH] qemuBuildCommandLine: Don't add tlsPort if none set
by Michal Privoznik
If the user hasn't supplied any tlsPort, we default to setting it to
zero in our internal structure. However, when building the command
line we test it against -1, which is obviously wrong: the check always
succeeds, so we act as if a TLS port had been configured even when none was.
---
src/qemu/qemu_command.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index de2d4a1..ed82cc2 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -5374,7 +5374,7 @@ qemuBuildCommandLine(virConnectPtr conn,
virBufferAsprintf(&opt, "port=%u", def->graphics[0]->data.spice.port);
- if (def->graphics[0]->data.spice.tlsPort != -1) {
+ if (def->graphics[0]->data.spice.tlsPort) {
if (!driver->spiceTLS) {
qemuReportError(VIR_ERR_CONFIG_UNSUPPORTED,
_("spice TLS port set in XML configuration,"
--
1.7.8.5
[libvirt] Ifdown functionality request
by Oleg V Popov
Hello. libvirt can do:
<interface type='ethernet'>
<target dev='vnet7'/>
<script path='/etc/qemu-ifup-mynet'/>
</interface>
Please add if-down script functionality, e.g. something like the sketch below.
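Something like this hypothetical element (the name is illustrative, not
an existing libvirt feature) would map naturally onto qemu's existing
script=/downscript= tap options:

    <interface type='ethernet'>
      <target dev='vnet7'/>
      <script path='/etc/qemu-ifup-mynet'/>
      <downscript path='/etc/qemu-ifdown-mynet'/>
    </interface>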
--
Best regards,
Popov Oleg
tel: +79115580313
xmpp: user@livelace.ru
skype: livelace
[libvirt] [PATCH] Fix a few typos in translated strings
by Daniel Veillard
This was raised by our Hindi localization team, Chandan Kumar
<chandankumar.093047@gmail.com>.
Pushed under the trivial rule.
diff --git a/src/lxc/lxc_controller.c b/src/lxc/lxc_controller.c
index 8f336f5..bbc9d9c 100644
--- a/src/lxc/lxc_controller.c
+++ b/src/lxc/lxc_controller.c
@@ -1591,14 +1591,14 @@ lxcControllerRun(virDomainDefPtr def,
if (virSetBlocking(monitor, false) < 0 ||
virSetBlocking(client, false) < 0) {
virReportSystemError(errno, "%s",
- _("Unable to set file descriptor non blocking"));
+ _("Unable to set file descriptor non-blocking"));
goto cleanup;
}
for (i = 0 ; i < nttyFDs ; i++) {
if (virSetBlocking(ttyFDs[i], false) < 0 ||
virSetBlocking(containerTtyFDs[i], false) < 0) {
virReportSystemError(errno, "%s",
- _("Unable to set file descriptor non blocking"));
+ _("Unable to set file descriptor non-blocking"));
goto cleanup;
}
}
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 6ec1eb9..996763c 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -866,7 +866,7 @@ static int qemuCollectPCIAddress(virDomainDefPtr def ATTRIBUTE_UNUSED,
if (info->addr.pci.function != 0) {
qemuReportError(VIR_ERR_XML_ERROR,
_("Attempted double use of PCI Address '%s' "
- "(may need \"multifunction='on'\" for device on function 0"),
+ "(may need \"multifunction='on'\" for device on function 0)"),
addr);
} else {
qemuReportError(VIR_ERR_XML_ERROR,
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index afd744a..1a0ee94 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -3356,13 +3356,13 @@ qemuMonitorJSONBlockIoThrottleInfo(virJSONValuePtr result,
if (!temp_dev || temp_dev->type != VIR_JSON_TYPE_OBJECT) {
qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("block io throttle device entry was not in expected format"));
+ _("block_io_throttle device entry was not in expected format"));
goto cleanup;
}
if ((current_dev = virJSONValueObjectGetString(temp_dev, "device")) == NULL) {
qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("block io throttle device entry was not in expected format"));
+ _("block_io_throttle device entry was not in expected format"));
goto cleanup;
}
@@ -3376,7 +3376,7 @@ qemuMonitorJSONBlockIoThrottleInfo(virJSONValuePtr result,
if ((inserted = virJSONValueObjectGet(temp_dev, "inserted")) == NULL ||
inserted->type != VIR_JSON_TYPE_OBJECT) {
qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("block io throttle inserted entry was not in expected format"));
+ _("block_io_throttle inserted entry was not in expected format"));
goto cleanup;
}
--
Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library http://libvirt.org/
[libvirt] [PATCH RFC 0/8] qemu: Cache results of parsing qemu help output
by Lee Schermerhorn
Stracing libvirtd shows that the qemu driver executes 2 different
qemu binaries 3 times each to fetch the version, capabilities [supported
devices], and CPU models each time VM state is queried. E.g. [lines wrapped]:
6471 17:15:26.561890 execve("/usr/bin/qemu",
["/usr/bin/qemu", "-cpu", "?"], [/* 2 vars */]) = 0
6472 17:15:26.626668 execve("/usr/bin/qemu",
["/usr/bin/qemu", "-help"], [/* 2 vars */]) = 0
6473 17:15:26.698104 execve("/usr/bin/qemu",
["/usr/bin/qemu", "-device", "?", "-device", "pci-assign,?",
"-device", "virtio-blk-pci,?"], [/* 2 vars */]) = 0
6484 17:15:27.267770 execve("/usr/bin/qemu-system-x86_64",
["/usr/bin/qemu-system-x86_64", "-cpu", "?"],
/* 2 vars */]) = 0
6492 17:15:27.333177 execve("/usr/bin/qemu-system-x86_64",
["/usr/bin/qemu-system-x86_64", "-help"],
[/* 2 vars */]) = 0
6496 17:15:27.402280 execve("/usr/bin/qemu-system-x86_64",
["/usr/bin/qemu-system-x86_64", "-device", "?", "-device",
"pci-assign,?", "-device", "virtio-blk-pci,?"],
[/* 2 vars */]) = 0
~1 sec per libvirt API call. Not a killer, but on a heavily loaded
host -- several tens of VMs -- a periodic query of all VM state, such
as from a cloud compute manager, can take a couple of minutes to
complete.
Because the qemu binaries on the host do not change all that often,
the results of parsing the qemu help output from the exec's above
can be cached. The qemu driver already does some caching of
capabilities, but it does not prevent the execs above.
This series is a work in progress. I'm submitting it as an RFC because I
saw Eric mention the frequent execing of qemu binaries and I have been
working on this to eliminate the overhead shown above.
The series caches the parse results of:
+ qemuCapsExtractVersionInfo
+ qemuCapsProbeMachineTypes
+ qemuCapsProbeCPUModels
by splitting these functions into two parts. The existing function
name fetches the cached parse results for the specified binary and returns
them. The other half, named "qemuCapsCacheX", where X is one of
ExtractVersionInfo, ProbeMachineTypes, and ProbeCPUModels, exec's the
emulator binary and caches the results. The act of fetching the
cached results will fill or refresh the cache as necessary in a new
function qemuCapsCachedInfoGet(). A few auxiliary functions have been
added -- e.g., virCapabilitiesDupMachines() to duplicate a cached list
of machine types and virBitmapDup() to duplicate cached capabilities
flags.
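As a rough illustration of the approach (a minimal sketch; the struct
layout and helper behavior here are assumptions for discussion, not the
actual patch):

    #include <limits.h>
    #include <string.h>
    #include <sys/stat.h>
    #include <time.h>

    /* One entry caches the parse results for one emulator binary; the
     * entry is refreshed when the binary's mtime changes, so a package
     * upgrade naturally invalidates the cache. */
    typedef struct _qemuCapsCacheEntry {
        char binary[PATH_MAX];  /* path to the qemu/kvm binary */
        time_t mtime;           /* binary's st_mtime when cached */
        unsigned int version;   /* parsed from "-help" output */
        char **machineTypes;    /* parsed from "-M ?" output */
        char **cpuModels;       /* parsed from "-cpu ?" output */
    } qemuCapsCacheEntry;

    /* Fill or refresh the entry, exec'ing the binary only on a miss. */
    static int
    qemuCapsCachedInfoGet(qemuCapsCacheEntry *entry, const char *binary)
    {
        struct stat sb;

        if (stat(binary, &sb) < 0)
            return -1;

        if (strcmp(entry->binary, binary) == 0 &&
            entry->mtime == sb.st_mtime)
            return 0;  /* cache hit: no exec needed */

        /* Cache miss: exec the binary with "-help", "-M ?", "-cpu ?",
         * etc., parse the output, and store the results above. */
        strncpy(entry->binary, binary, sizeof(entry->binary) - 1);
        entry->mtime = sb.st_mtime;
        return 1;  /* caller refills version/machineTypes/cpuModels */
    }

The point is simply that the exec+parse cost is paid once per binary per
change, instead of on every API call.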
The series does not attempt to integrate with, nor remove, the existing
capabilities caching; that is TBD.
The series was developed and tested in the context of the Ubuntu 11.04 natty
libvirt_0.8.8-1ubuntu6.7 package using quilt to manage patches in the
debian/patches directory. In that context, it builds, passes all
"make check" tests [under pbuilder] and some fairly heavy, overlapping VM
launch tests where it does eliminate all but a few initial exec's of the
various qemu* and kvm binaries.
The version here, rebased to libvirt-0.9.10, builds cleanly under mock on
Fedora 16 in the context of a modified libvirt-0.9.10-1.fc16 source package.
I.e., no errors, and warning-for-warning compatible with a build of the
libvirt-0.9.10 fc16 srpm downloaded from libvirt.org. I placed the modified
spec file [which applies the patches] and the build logs at:
http://downloads.linux.hp.com/~lts/Libvirt/
I have installed the patched libvirt on a Fedora 16 system and successfully
defined and launched a VM. Testing is in progress. I'll place an annotated
test log on the site above when complete.
I also need to rebase atop the current mainline sources, but I wanted to get
this series out for review to see if the overall approach would be acceptable.
Comments?
[libvirt] Qemu, libvirt, and CPU models
by Eduardo Habkost
Hi,
Sorry for the long message, but I didn't find a way to summarize the
questions and issues and make it shorter.
For people who don't know me: I have started to work recently on the
Qemu CPU model code. I have been looking at how things work on
libvirt+Qemu today w.r.t. CPU models, and I have some points I would
like to understand better and see if they can be improved.
I have two main points I would like to understand/discuss:
1) The relationship between libvirt's cpu_map.xml and the Qemu CPU model
definitions.
2) How can we allow CPU models to be changed without breaking
existing virtual machines?
Note that for all the questions below, I don't expect that we design the
whole solution and discuss every single detail in this thread. I just
want to collect suggestions, information about libvirt requirements and
assumptions, and warnings about expected pitfalls before I start working
on a solution in Qemu.
1) Qemu and cpu_map.xml
I would like to understand how cpu_map.xml is supposed to be used, and
how it is supposed to interact with the CPU model definitions provided
by Qemu. More precisely:
1.1) Do we want to eliminate the duplication between the Qemu CPU
definitions and cpu_map.xml?
1.1.1) If we want to eliminate the duplication, how can we accomplish
that? What interfaces do you miss that Qemu could provide?
1.1.2) If the duplication has a purpose and you want to keep
cpu_map.xml, then:
- First, I would like to understand why libvirt needs cpu_map.xml. Is
it part of the "public" interface of libvirt, or is it just an
internal file where libvirt stores non-user-visible data?
- How can we make sure there is no confusion between libvirt and Qemu
about the CPU models? For example, what if cpu_map.xml says model
'Moo' has the flag 'foo' enabled, but Qemu disagrees? How do we
guarantee that libvirt gets exactly what it expects from Qemu when
it asks for a CPU model? We have "-cpu ?dump" today, but it's not
the best interface we could have. Do you miss anything in particular
in the Qemu<->libvirt interface that would help with that?
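For concreteness, a cpu_map.xml model entry looks roughly like this
(abridged; the exact features and bit values vary by libvirt release).
It encodes both CPUID bit positions and per-model feature sets -- much
the same data Qemu carries in its own CPU definitions, which is the
duplication in question:

    <feature name='sse4.2'>
      <cpuid function='0x00000001' ecx='0x00100000'/>
    </feature>

    <model name='Nehalem'>
      <model name='Penryn'/>        <!-- inherit Penryn's features -->
      <feature name='sse4.2'/>
      <feature name='popcnt'/>
    </model>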
1.2) About the probing of available features on the host system: Qemu
has code specialized to query KVM about the available features, and to
check what can be enabled and what can't be enabled in a VM. On many
cases, the available features match exactly what is returned by the
CPUID instruction on the host system, but there are some
exceptions:
- Some features can be enabled even when the host CPU doesn't support
it (because they are completely emulated by KVM, e.g. x2apic).
- On many other cases, the feature may be available but we have to
check if Qemu+KVM are really able to expose it to the guest (many
features work this way, as many depend on specific support by the
KVM kernel module and/or Qemu).
I suppose libvirt does want to check which flags can be enabled in a
VM, as it already has checks for host CPU features (e.g.
src/cpu/cpu_x86.c:x86Compute()). But I also suppose that libvirt
doesn't want to duplicate the KVM feature probing code present in
Qemu, and in this case we could have an interface where libvirt could
query for the actually-available CPU features. Would it be useful for
libvirt? What's the best way to expose this interface?
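One conceivable shape for such a probing interface, sketched as a QMP
exchange (purely hypothetical -- no such command exists today, and the
command name and fields are made up for illustration):

    { "execute": "query-cpu-model-features",
      "arguments": { "model": "Nehalem" } }
    { "return": { "enableable": [ "sse4.2", "popcnt", "x2apic" ],
                  "missing": [ "avx" ] } }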
1.3) Some features are not plain CPU feature bits: e.g. level=X can be
set in "-cpu" argument, and other features are enabled/disabled by
exposing specific CPUID leafs and not just a feature bit (e.g. PMU
CPUID leaf support). I suppose libvirt wants to be able to probe for
those features too, and be able to enable/disable them, right?
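For example, the CPUID level of a model can already be overridden on
the Qemu command line:

    qemu-system-x86_64 -cpu Nehalem,level=4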
2) How to change an existing model and keep existing VMs working?
Sometimes we have to update a CPU model definition because of some bug.
Examples:
- The CPU models Conroe, Penryn and Nehalem have level=2 set. This
works most times, but it breaks CPU core/thread topology enumeration.
We have to change those CPU models to use level=4 to fix the bug.
- This can happen with plain CPU feature bits, too, not just "level":
sometimes real-world CPU models have a feature that is not supported
by Qemu+KVM yet, but when the kernel and Qemu finally start to
support it, we may want to enable it on existing CPU models. Sometimes
a model simply has the wrong set of feature bits, and we have to fix
it to have the right set of features.
But if we simply change the existing model definition, this will break
existing machines:
- Today, it would break on live migration, but that's relatively easy to
fix: we have to migrate the CPUID information too, to make sure we
won't change the CPU under the guest OS's feet.
- Even if we fix live migration, simple "cold" migration will make the
guest OS see a different CPU after a reboot, and that's undesirable
too. Even if the Qemu developers disagree with me and decide that this
is not a problem, libvirt may want to expose a more stable CPU to the
guest, and some cooperation from Qemu would be necessary.
So, my questions are:
About the libvirt<->Qemu interface:
2.1) What's the best mechanism to have different versions of a CPU
model? An alias system like the one used by machine-types? How to
implement this without confusing the existing libvirt probing code?
2.2) We have to make the CPU model version-choosing mechanism depend on
the machine-type. e.g. if the user has a pc-1.0 machine using the
Nehalem CPU model, we have to keep using the level=2 version of that
CPU. But if the user chose a newer machine-type version, we can safely
get the latest-and-greatest version of the Nehalem CPU model. How to
make this work without confusing libvirt?
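A hypothetical alias scheme, modeled on what machine-types already do
(the versioned names are illustrative, not an existing Qemu interface):

    -M pc-1.0 -cpu Nehalem       # alias resolves to Nehalem-1.0 (level=2)
    -M pc-1.1 -cpu Nehalem       # alias resolves to Nehalem-1.1 (level=4)
    -M pc-1.1 -cpu Nehalem-1.0   # versioned name, pinned and immutable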
About the user<->libvirt interface:
2.3) How all this will interact with cpu_map.xml? Right now there's the
assumption that the CPU model definitions are immutable, right?
2.4) How do you think libvirt would expose this "CPU model version"
to the user? Should it just expose the unversioned CPU models to the
user, and let Qemu or libvirt choose the right version based on
machine-type? Should it expose only the versioned CPU models (because
they are immutable) and never expose the unversioned aliases? Should
it expose the unversioned alias, but change the Domain XML definition
automatically to the versioned immutable one (like it happens with
machine-type)?
I don't plan to interfere with the libvirt interface design, but I suppose
that libvirt design assumptions will be impacted by the solution we
choose on Qemu. For example: right now libvirt seems to assume that CPU
models are immutable. Are you going to keep this assumption in the
libvirt interfaces? Because I am already willing to break this
assumption on Qemu, although I would like to cooperate with libvirt and
not break any requirements/assumptions without warning.
--
Eduardo
[libvirt] [PATCH] Removed more AMD-specific features from cpu64-rhel* models
by Martin Kletzander
We found a few more AMD-specific features in the cpu64-rhel* models that
made it impossible to start a qemu guest on an Intel host (with this
setting), even though qemu itself starts correctly with them.
---
src/cpu/cpu_map.xml | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index 7ef230e..6a6603b 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -347,7 +347,6 @@
</model>
<model name='cpu64-rhel6'>
- <feature name='abm'/>
<feature name='apic'/>
<feature name='clflush'/>
<feature name='cmov'/>
@@ -373,7 +372,6 @@
<feature name='sse'/>
<feature name='sse2'/>
<feature name='pni'/>
- <feature name='sse4a'/>
<feature name='syscall'/>
<feature name='tsc'/>
</model>
--
1.7.3.4
[libvirt] [PATCH v2] Removed more AMD-specific features from cpu64-rhel* models
by Martin Kletzander
We found a few more AMD-specific features in the cpu64-rhel* models that
made it impossible to start a qemu guest on an Intel host (with this
setting), even though qemu itself starts correctly with them.
This impacts one test, hence the fix in tests/cputestdata/.
---
src/cpu/cpu_map.xml | 2 --
.../cputestdata/x86-baseline-no-vendor-result.xml | 3 +--
2 files changed, 1 insertions(+), 4 deletions(-)
diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index 7ef230e..6a6603b 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -347,7 +347,6 @@
</model>
<model name='cpu64-rhel6'>
- <feature name='abm'/>
<feature name='apic'/>
<feature name='clflush'/>
<feature name='cmov'/>
@@ -373,7 +372,6 @@
<feature name='sse'/>
<feature name='sse2'/>
<feature name='pni'/>
- <feature name='sse4a'/>
<feature name='syscall'/>
<feature name='tsc'/>
</model>
diff --git a/tests/cputestdata/x86-baseline-no-vendor-result.xml b/tests/cputestdata/x86-baseline-no-vendor-result.xml
index 4b4921c..00e03b2 100644
--- a/tests/cputestdata/x86-baseline-no-vendor-result.xml
+++ b/tests/cputestdata/x86-baseline-no-vendor-result.xml
@@ -1,4 +1,3 @@
<cpu mode='custom' match='exact'>
- <model fallback='allow'>kvm64</model>
- <feature policy='require' name='lahf_lm'/>
+ <model fallback='allow'>cpu64-rhel6</model>
</cpu>
--
1.7.3.4
[libvirt] [PATCH] conf: eliminate redundant VIR_ALLOC of 1st element of network DNS hosts.
by Laine Stump
virNetworkDNSHostsDefParseXML was calling VIR_ALLOC(def->hosts) if
def->hosts was NULL. This is a waste of time, though, since
VIR_REALLOC_N is called a few lines further down, prior to any use of
def->hosts. (initializing def->nhosts to 0 is also redundant, because
the newly allocated memory will always be cleared to all 0's anyway).
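This works because realloc(NULL, size) behaves like malloc(size), so
the first VIR_REALLOC_N allocates the array just fine. A minimal
standalone illustration with plain realloc (not the libvirt macros):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int *hosts = NULL;   /* no up-front allocation needed */
        size_t nhosts = 0;

        /* realloc on a NULL pointer acts like malloc */
        int *tmp = realloc(hosts, (nhosts + 1) * sizeof(*hosts));
        if (!tmp)
            return 1;
        hosts = tmp;
        hosts[nhosts++] = 42;

        printf("nhosts=%zu first=%d\n", nhosts, hosts[0]);
        free(hosts);
        return 0;
    }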
---
src/conf/network_conf.c | 8 --------
1 files changed, 0 insertions(+), 8 deletions(-)
diff --git a/src/conf/network_conf.c b/src/conf/network_conf.c
index 743ae92..0333141 100644
--- a/src/conf/network_conf.c
+++ b/src/conf/network_conf.c
@@ -510,14 +510,6 @@ virNetworkDNSHostsDefParseXML(virNetworkDNSDefPtr def,
virSocketAddr inaddr;
int ret = -1;
- if (def->hosts == NULL) {
- if (VIR_ALLOC(def->hosts) < 0) {
- virReportOOMError();
- goto error;
- }
- def->nhosts = 0;
- }
-
if (!(ip = virXMLPropString(node, "ip")) ||
(virSocketAddrParse(&inaddr, ip, AF_UNSPEC) < 0)) {
virNetworkReportError(VIR_ERR_XML_DETAIL,
--
1.7.7.6