[libvirt] Exact meaning of "nativeMode" attribute in vlan tags
by Laine Stump
You'd think that I would know this, since I'm the person who reviewed
jrobson's patch adding support for the nativeMode attribute to the vlan
tag element. But you'd be wrong. Here is what the config looks like:
<vlan trunk='yes'>
  <tag id='42' nativeMode='untagged'/>
  <tag id='47'/>
</vlan>
I understand that trunk='yes' means that packets with any of the tags
listed in a <tag> subelement can be sent out this port (and the tag will
*not* be removed), and likewise packets arriving into the bridge from
the port are allowed to have any of the listed tags (and, again, no tag
will be removed). But what exactly do nativeMode='untagged' and
nativeMode='tagged' mean?
As I understand it, (nativeMode='untagged'|nativeMode='tagged') means
that packets (arriving from|sent to) the port (without a tag/with that
tag) will be (tagged|untagged). Can someone who fully understands this
please pick the correct alternative for each of the four parenthesized
choices (in as many permutations as make sense)?
I guess that in one of the modes, untagged packets going in one
direction or the other will be tagged, and vice versa; I just don't know
which direction does which, or for which mode, and I don't want to guess.
(I'm asking this because I want to implement identical functionality for
standard Linux host bridges - I want to make sure there are no surprises
for people switching between OVS and Linux host bridge implementations).
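For reference, libvirt's OVS backend configures these attributes through the
Open vSwitch Port table, where OVS distinguishes vlan_mode=native-untagged
from native-tagged. A hedged sketch of the ovs-vsctl equivalent of the XML
above (the port name vnet0 is an assumption, not something from the config):

```shell
# Presumed OVS translation of the <vlan> element above (port name assumed):
#   trunk='yes' + listed tags     -> trunks=42,47
#   tag 42 nativeMode='untagged'  -> vlan_mode=native-untagged, tag=42
# (nativeMode='tagged' would map to vlan_mode=native-tagged instead)
ovs-vsctl set port vnet0 trunks=42,47 vlan_mode=native-untagged tag=42
```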
10 years, 2 months
[libvirt] [PATCH v2 0/2] Python bindings for IOThreads
by John Ferlan
v1 here:
http://www.redhat.com/archives/libvir-list/2015-February/msg00684.html
Changes in v2:
* Return an empty list when there are no IOThreads found
* Fix alloc/cleanup logic to match review comments
* Used "PyObject *error = NULL" like the GetCPUStats code in order to
return either NULL for Py*_New() and PyTuple_SetItem() method errors
or VIR_PY_NONE for other exceptions
* Only make the Py_XDECREF calls on objects which don't get consumed by
PyTuple_SetItem
* Change name of method from "getIOThreadsInfo" to "ioThreadsInfo"
John Ferlan (2):
Support virDomainGetIOThreadsInfo and virDomainIOThreadsInfoFree
Support virDomainSetIOThreads
generator.py | 6 ++
libvirt-override-api.xml | 14 ++++
libvirt-override.c | 180 +++++++++++++++++++++++++++++++++++++++++++++++
sanitytest.py | 5 ++
4 files changed, 205 insertions(+)
--
2.1.0
[libvirt] Plan for next release
by Daniel Veillard
Oops, Feb is really short and I got caught out. I think that if we want
a release by the beginning of March, i.e. next week, we need to enter
freeze ASAP. So I'm suggesting we start the freeze tomorrow, for a
release on Monday if everything goes well!
Hope this suits everybody,
thanks,
Daniel
--
Daniel Veillard | Open Source and Standards, Red Hat
veillard(a)redhat.com | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
http://veillard.com/ | virtualization library http://libvirt.org/
[libvirt] [PATCH] qemu: Fix AAVMF/OVMF #define names
by Cole Robinson
The AAVMF and OVMF names were swapped. Reorder the one usage where it
matters so behavior doesn't change.
---
src/qemu/qemu_conf.c | 16 ++++++++--------
1 file changed, 8 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index c6b083c..2cf3905 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -149,10 +149,10 @@ virQEMUDriverConfigLoaderNVRAMParse(virQEMUDriverConfigPtr cfg,
}
-#define VIR_QEMU_OVMF_LOADER_PATH "/usr/share/AAVMF/AAVMF_CODE.fd"
-#define VIR_QEMU_OVMF_NVRAM_PATH "/usr/share/AAVMF/AAVMF_VARS.fd"
-#define VIR_QEMU_AAVMF_LOADER_PATH "/usr/share/OVMF/OVMF_CODE.fd"
-#define VIR_QEMU_AAVMF_NVRAM_PATH "/usr/share/OVMF/OVMF_VARS.fd"
+#define VIR_QEMU_OVMF_LOADER_PATH "/usr/share/OVMF/OVMF_CODE.fd"
+#define VIR_QEMU_OVMF_NVRAM_PATH "/usr/share/OVMF/OVMF_VARS.fd"
+#define VIR_QEMU_AAVMF_LOADER_PATH "/usr/share/AAVMF/AAVMF_CODE.fd"
+#define VIR_QEMU_AAVMF_NVRAM_PATH "/usr/share/AAVMF/AAVMF_VARS.fd"
virQEMUDriverConfigPtr virQEMUDriverConfigNew(bool privileged)
{
@@ -313,10 +313,10 @@ virQEMUDriverConfigPtr virQEMUDriverConfigNew(bool privileged)
goto error;
cfg->nloader = 2;
- if (VIR_STRDUP(cfg->loader[0], VIR_QEMU_OVMF_LOADER_PATH) < 0 ||
- VIR_STRDUP(cfg->nvram[0], VIR_QEMU_OVMF_NVRAM_PATH) < 0 ||
- VIR_STRDUP(cfg->loader[1], VIR_QEMU_AAVMF_LOADER_PATH) < 0 ||
- VIR_STRDUP(cfg->nvram[1], VIR_QEMU_AAVMF_NVRAM_PATH) < 0)
+ if (VIR_STRDUP(cfg->loader[0], VIR_QEMU_AAVMF_LOADER_PATH) < 0 ||
+ VIR_STRDUP(cfg->nvram[0], VIR_QEMU_AAVMF_NVRAM_PATH) < 0 ||
+ VIR_STRDUP(cfg->loader[1], VIR_QEMU_OVMF_LOADER_PATH) < 0 ||
+ VIR_STRDUP(cfg->nvram[1], VIR_QEMU_OVMF_NVRAM_PATH) < 0)
goto error;
#endif
--
2.1.0
[libvirt] [PATCH 0/3] Fix memory ABI stability check issues
by Peter Krempa
Note that this series applies only on top of the NUMA config unification series:
http://www.redhat.com/archives/libvir-list/2015-February/msg00532.html
Please see individual patches for explanation
Peter Krempa (3):
conf: ABI: Hugepage backing definition is not guest ABI
conf: ABI: Memballoon setting is not guest ABI
conf: numa: Check ABI stability of NUMA configuration
src/conf/domain_conf.c | 31 ++-----------------------------
src/conf/numa_conf.c | 37 +++++++++++++++++++++++++++++++++++++
src/conf/numa_conf.h | 3 +++
src/libvirt_private.syms | 1 +
4 files changed, 43 insertions(+), 29 deletions(-)
--
2.2.2
[libvirt] [PATCH v2] qemu: Check for negative port values in network drive configuration
by Erik Skultety
We interpret port values as signed int (converting them from char *),
so if a negative value is provided in a network disk's configuration,
we accept it as valid; however, an 'unknown cause' error is raised later.
That error is only accidental, because we return the port value in the
return code. This patch adds a minor tweak to the already existing check
so that we reject negative values the same way we reject non-numerical
strings.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1163553
---
src/qemu/qemu_command.c | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 743d6f0..6941b5a 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -2951,10 +2951,10 @@ static int
qemuNetworkDriveGetPort(int protocol,
const char *port)
{
- int ret = 0;
-
+ unsigned ret = 0;
if (port) {
- if (virStrToLong_i(port, NULL, 10, &ret) < 0) {
+ if (virStrToLong_uip(port, NULL, 10, &ret) < 0 ||
+ (int) ret < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("failed to parse port number '%s'"),
port);
--
1.9.3
[libvirt] [PATCH 00/24] Move all NUMA related configuration into one structure
by Peter Krempa
Due to historical madne^Wreasons the NUMA configuration is split between
/domain/cpu and /domain/numatune. Internally the data was also split into two
places. We can't do anything about the external representation but we certainly
can store all the definitions in one place internally.
This series does that.
Peter Krempa (24):
conf: Move numatune_conf to numa_conf
conf: Move NUMA cell parsing code from cpu conf to numa conf
conf: Refactor virDomainNumaDefCPUParseXML
conf: numa: Don't duplicate NUMA cell cpumask
conf: Move NUMA cell formatter to numa_conf
conf: numa: Rename virDomainNumatune to virDomainNuma
conf: Move enum virMemAccess to the NUMA code and rename it
conf: numa: Recalculate rather than remember total NUMA cpu count
conf: numa: Improve error message in case a numa node doesn't have cpus
conf: numa: Reformat virDomainNumatuneParseXML
conf: numa: Refactor logic in virDomainNumatuneParseXML
conf: numa: Format <numatune> XML only if necessary
conf: Separate helper for creating domain objects
conf: Allocate domain definition with the new helper
conf: numa: Always allocate the NUMA config
conf: numa: Avoid re-allocation of the NUMA conf
numa: conf: Tweak parameters of virDomainNumatuneSet
conf: numa: Don't pass double pointer to virDomainNumatuneParseXML
qemu: command: Unify retrieval of NUMA cell count in qemuBuildNumaArgStr
conf: numa: Add helper to get guest NUMA node count and refactor users
conf: numa: Add accessor for the NUMA node cpu mask
conf: numa: Add accessor to NUMA node's memory access mode
conf: numa: Add setter/getter for NUMA node memory size
conf: Move all NUMA configuration to virDomainNuma
po/POTFILES.in | 2 +-
src/Makefile.am | 2 +-
src/conf/cpu_conf.c | 151 +-------
src/conf/cpu_conf.h | 25 +-
src/conf/domain_conf.c | 59 ++-
src/conf/domain_conf.h | 11 +-
src/conf/{numatune_conf.c => numa_conf.c} | 429 +++++++++++++++------
src/conf/{numatune_conf.h => numa_conf.h} | 77 ++--
src/cpu/cpu.c | 2 +-
src/libvirt_private.syms | 13 +-
src/lxc/lxc_cgroup.c | 4 +-
src/lxc/lxc_controller.c | 6 +-
src/lxc/lxc_native.c | 4 +-
src/openvz/openvz_conf.c | 2 +-
src/parallels/parallels_sdk.c | 4 +-
src/phyp/phyp_driver.c | 2 +-
src/qemu/qemu_cgroup.c | 12 +-
src/qemu/qemu_command.c | 52 +--
src/qemu/qemu_driver.c | 20 +-
src/qemu/qemu_process.c | 4 +-
src/vbox/vbox_common.c | 8 +-
src/vmx/vmx.c | 2 +-
src/xen/xen_hypervisor.c | 8 +-
src/xen/xend_internal.c | 4 +-
src/xen/xm_internal.c | 4 +-
src/xenconfig/xen_sxpr.c | 2 +-
src/xenconfig/xen_xl.c | 2 +-
src/xenconfig/xen_xm.c | 2 +-
tests/cputest.c | 2 +-
tests/openvzutilstest.c | 2 +-
.../qemuxml2argv-numatune-memnode.xml | 2 +-
.../qemuxml2xmlout-numatune-memnode.xml | 2 +-
tests/securityselinuxtest.c | 2 +-
tests/testutilsqemu.c | 4 -
34 files changed, 503 insertions(+), 424 deletions(-)
rename src/conf/{numatune_conf.c => numa_conf.c} (60%)
rename src/conf/{numatune_conf.h => numa_conf.h} (50%)
--
2.2.2
[libvirt] [PATCH v2] libvirt-guests: Allow time sync on guests resume
by Michal Privoznik
Well, imagine domains were running, and as the host went down, they
were managedsaved. Later, after some time, the host came up again and
the domains got restored. But without the correct time. And depending on
how long the host was shut off, it may take some time for ntp to sync
the time too. But hey, wait a minute. We have an API just for that! So:
1) Introduce a SYNC_TIME variable in libvirt-guests.sysconf to give
users control over the new functionality
2) Call 'virsh domtime --sync $dom' in the libvirt-guests script.
Unfortunately, this is an all-or-nothing approach (just like anything
else in the script). Domains are required to have qemu-ga configured
and running inside.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
diff to v1:
-ignore the return value of domtime command
tools/libvirt-guests.sh.in | 5 +++++
tools/libvirt-guests.sysconf | 7 +++++++
2 files changed, 12 insertions(+)
diff --git a/tools/libvirt-guests.sh.in b/tools/libvirt-guests.sh.in
index 1b17bbe..9aa06fa 100644
--- a/tools/libvirt-guests.sh.in
+++ b/tools/libvirt-guests.sh.in
@@ -171,7 +171,9 @@ start() {
isfirst=true
bypass=
+ sync_time=false
test "x$BYPASS_CACHE" = x0 || bypass=--bypass-cache
+ test "x$SYNC_TIME" = x0 || sync_time=true
while read uri list; do
configured=false
set -f
@@ -206,6 +208,9 @@ start() {
retval run_virsh "$uri" start $bypass "$name" \
>/dev/null && \
gettext "done"; echo
+ if "$sync_time"; then
+ run_virsh "$uri" domtime --sync "$name" >/dev/null
+ fi
fi
fi
done
diff --git a/tools/libvirt-guests.sysconf b/tools/libvirt-guests.sysconf
index d1f2051..03e732f 100644
--- a/tools/libvirt-guests.sysconf
+++ b/tools/libvirt-guests.sysconf
@@ -39,3 +39,10 @@
# restoring guests, even though this may give slower operation for
# some file systems.
#BYPASS_CACHE=0
+
+# If non-zero, try to sync guest time on domain resume. Be aware that
+# this requires a guest agent, which moreover has to run under a
+# supported system. For instance, qemu-ga supports guest time
+# synchronization on Linux guests, but not on Windows ones. By default,
+# this functionality is turned off.
+#SYNC_TIME=1
--
2.0.5
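The SYNC_TIME gate above follows the same pattern the script already uses
for BYPASS_CACHE. A minimal standalone sketch of that logic (the real
script runs "run_virsh $uri domtime --sync $name" and ignores its return
value; an echo stands in for it here so the sketch runs anywhere, and the
domain name is hypothetical):

```shell
#!/bin/sh
SYNC_TIME=1        # as it would be set in libvirt-guests.sysconf
name=mydomain      # hypothetical domain name

sync_time=false
test "x$SYNC_TIME" = x0 || sync_time=true   # any value but 0 enables syncing

if "$sync_time"; then
    # stand-in for: run_virsh "$uri" domtime --sync "$name" >/dev/null
    echo "virsh domtime --sync $name"
fi
```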
[libvirt] [PATCH 0/3] Introduce machine type based capabilities filtering
by Michal Privoznik
Well, here's an alternative approach to my question here [1].
1: https://www.redhat.com/archives/libvir-list/2015-February/msg00369.html
Michal Privoznik (3):
virQEMUCapsCacheLookupCopy: Pass machine type
virQEMUCapsCacheLookupCopy: Filter qemuCaps based on machineType
qemuCaps: Disable memdev for rhel6.5.0 machine type
src/qemu/qemu_capabilities.c | 45 +++++++++++++++++++++-
src/qemu/qemu_capabilities.h | 5 ++-
src/qemu/qemu_migration.c | 3 +-
src/qemu/qemu_process.c | 9 +++--
.../qemuxml2argv-numatune-memnode-rhel650.args | 7 ++++
.../qemuxml2argv-numatune-memnode-rhel650.xml | 31 +++++++++++++++
tests/qemuxml2argvtest.c | 6 +++
7 files changed, 100 insertions(+), 6 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-numatune-memnode-rhel650.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-numatune-memnode-rhel650.xml
--
2.0.5
[libvirt] [PATCH] virsh: fix vcpupin info
by Pavel Hrdina
The "virDomainGetInfo" will get for running domain only live info and for
offline domain only config info. There was no way how to get config info
for running domain. We will use "vshCPUCountCollect" instead to get the
correct cpu count that we need to pass to "virDomainGetVcpuPinInfo".
Also cleanup some unnecessary variables and checks that are done by
drivers.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1160559
Signed-off-by: Pavel Hrdina <phrdina(a)redhat.com>
---
tests/vcpupin | 2 +-
tools/virsh-domain.c | 75 ++++++++++++++++++++++++----------------------------
2 files changed, 35 insertions(+), 42 deletions(-)
diff --git a/tests/vcpupin b/tests/vcpupin
index 9f34ec0..cd09145 100755
--- a/tests/vcpupin
+++ b/tests/vcpupin
@@ -43,7 +43,7 @@ compare exp out || fail=1
$abs_top_builddir/tools/virsh --connect test:///default vcpupin test 100 0,1 > out 2>&1
test $? = 1 || fail=1
cat <<\EOF > exp || fail=1
-error: vcpupin: vCPU index out of range.
+error: invalid argument: requested vcpu is higher than allocated vcpus
EOF
compare exp out || fail=1
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 2506b89..61a3385 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6450,20 +6450,17 @@ vshParseCPUList(vshControl *ctl, const char *cpulist,
static bool
cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
{
- virDomainInfo info;
virDomainPtr dom;
unsigned int vcpu = 0;
const char *cpulist = NULL;
bool ret = false;
unsigned char *cpumap = NULL;
- unsigned char *cpumaps = NULL;
size_t cpumaplen;
int maxcpu, ncpus;
size_t i;
bool config = vshCommandOptBool(cmd, "config");
bool live = vshCommandOptBool(cmd, "live");
bool current = vshCommandOptBool(cmd, "current");
- bool query = false; /* Query mode if no cpulist */
int got_vcpu;
unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT;
@@ -6481,48 +6478,47 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
if (vshCommandOptStringReq(ctl, cmd, "cpulist", &cpulist) < 0)
return false;
- if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
- return false;
-
- query = !cpulist;
+ if (!cpulist)
+ VSH_EXCLUSIVE_OPTIONS_VAR(live, config);
if ((got_vcpu = vshCommandOptUInt(cmd, "vcpu", &vcpu)) < 0) {
vshError(ctl, "%s", _("vcpupin: Invalid vCPU number."));
- goto cleanup;
+ return false;
}
/* In pin mode, "vcpu" is necessary */
- if (!query && got_vcpu == 0) {
+ if (cpulist && got_vcpu == 0) {
vshError(ctl, "%s", _("vcpupin: Missing vCPU number in pin mode."));
- goto cleanup;
- }
-
- if (virDomainGetInfo(dom, &info) != 0) {
- vshError(ctl, "%s", _("vcpupin: failed to get domain information."));
- goto cleanup;
- }
-
- if (vcpu >= info.nrVirtCpu) {
- vshError(ctl, "%s", _("vcpupin: vCPU index out of range."));
- goto cleanup;
+ return false;
}
if ((maxcpu = vshNodeGetCPUCount(ctl->conn)) < 0)
- goto cleanup;
-
+ return false;
cpumaplen = VIR_CPU_MAPLEN(maxcpu);
+ if (!(dom = vshCommandOptDomain(ctl, cmd, NULL)))
+ return false;
+
/* Query mode: show CPU affinity information then exit.*/
- if (query) {
+ if (!cpulist) {
/* When query mode and neither "live", "config" nor "current"
* is specified, set VIR_DOMAIN_AFFECT_CURRENT as flags */
if (flags == -1)
flags = VIR_DOMAIN_AFFECT_CURRENT;
- cpumaps = vshMalloc(ctl, info.nrVirtCpu * cpumaplen);
- if ((ncpus = virDomainGetVcpuPinInfo(dom, info.nrVirtCpu,
- cpumaps, cpumaplen, flags)) >= 0) {
+ if ((ncpus = vshCPUCountCollect(ctl, dom, flags, true)) < 0) {
+ if (ncpus == -1) {
+ if (flags & VIR_DOMAIN_AFFECT_LIVE)
+ vshError(ctl, "%s", _("cannot get vcpupin for offline domain"));
+ else
+ vshError(ctl, "%s", _("cannot get vcpupin for transient domain"));
+ }
+ goto cleanup;
+ }
+ cpumap = vshMalloc(ctl, ncpus * cpumaplen);
+ if ((ncpus = virDomainGetVcpuPinInfo(dom, ncpus, cpumap,
+ cpumaplen, flags)) >= 0) {
vshPrintExtra(ctl, "%s %s\n", _("VCPU:"), _("CPU Affinity"));
vshPrintExtra(ctl, "----------------------------------\n");
for (i = 0; i < ncpus; i++) {
@@ -6530,30 +6526,27 @@ cmdVcpuPin(vshControl *ctl, const vshCmd *cmd)
continue;
vshPrint(ctl, "%4zu: ", i);
- ret = vshPrintPinInfo(cpumaps, cpumaplen, maxcpu, i);
+ ret = vshPrintPinInfo(cpumap, cpumaplen, maxcpu, i);
vshPrint(ctl, "\n");
if (!ret)
break;
}
}
-
- VIR_FREE(cpumaps);
- goto cleanup;
- }
-
- /* Pin mode: pinning specified vcpu to specified physical cpus*/
- if (!(cpumap = vshParseCPUList(ctl, cpulist, maxcpu, cpumaplen)))
- goto cleanup;
-
- if (flags == -1) {
- if (virDomainPinVcpu(dom, vcpu, cpumap, cpumaplen) != 0)
- goto cleanup;
} else {
- if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0)
+ /* Pin mode: pinning specified vcpu to specified physical cpus*/
+ if (!(cpumap = vshParseCPUList(ctl, cpulist, maxcpu, cpumaplen)))
goto cleanup;
+
+ if (flags == -1) {
+ if (virDomainPinVcpu(dom, vcpu, cpumap, cpumaplen) != 0)
+ goto cleanup;
+ } else {
+ if (virDomainPinVcpuFlags(dom, vcpu, cpumap, cpumaplen, flags) != 0)
+ goto cleanup;
+ }
+ ret = true;
}
- ret = true;
cleanup:
VIR_FREE(cpumap);
virDomainFree(dom);
--
2.0.5