[libvirt] [PATCH 0/2] Add support for multiple serial ports into the Xen driver
by Michal Novotny 18 Feb '11
Hi,
this is a patch adding support for multiple serial ports to the
libvirt Xen driver. It supports both the old-style (serial = "pty") and
new-style (serial = [ "/dev/ttyS0", "/dev/ttyS1" ]) definitions, and
tests for xml2sexpr, sexpr2xml and xmconfig have been added as well.
It was written and tested on a RHEL-5 Xen dom0 and works as designed,
but the Xen version has to carry the patch for RHBZ #614004; this
patch itself targets the upstream version of libvirt.
Also, this patch addresses the issue described in RHBZ #670789.
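For readers unfamiliar with the two configuration styles mentioned above, here is a minimal sketch of how a parser might normalize them into per-port devices. This is a hypothetical helper for illustration, not libvirt code; the function name and return shape are assumptions:

```python
def parse_serial(value):
    """Normalize a Xen 'serial' config value to a list of (port, device).

    Accepts the old-style scalar form (serial = "pty") as well as the
    new-style list form (serial = ["/dev/ttyS0", "/dev/ttyS1"]).
    """
    if isinstance(value, str):
        # Old style: a single device on port 0, unless explicitly "none".
        return [] if value == "none" else [(0, value)]
    # New style: each list entry occupies the next port number.
    return [(port, dev) for port, dev in enumerate(value) if dev != "none"]

print(parse_serial("pty"))                         # old style
print(parse_serial(["/dev/ttyS0", "/dev/ttyS1"]))  # new style
```

The real parser in xm_internal.c does the equivalent dance with virConfGetValue(), walking a VIR_CONF_LIST when present and falling back to the scalar path otherwise.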
Differences between v2 and v3:
* Fixed handling of the port number in virDomainChrDefParseTargetXML()
* Fixed serial port handling when a device is defined on a non-zero port number
* Added a second test for the first port being undefined (i.e. a port value > 0
for just one serial port)
Michal
Signed-off-by: Michal Novotny <minovotn(a)redhat.com>
Michal Novotny (2):
Fix virDomainChrDefParseTargetXML() to parse the target port if
available
Add support for multiple serial ports into the Xen driver
src/conf/domain_conf.c | 7 +-
src/xen/xend_internal.c | 86 ++++++++++--
src/xen/xm_internal.c | 137 ++++++++++++++++---
.../sexpr2xml-fv-serial-dev-2-ports.sexpr | 1 +
.../sexpr2xml-fv-serial-dev-2-ports.xml | 53 ++++++++
.../sexpr2xml-fv-serial-dev-2nd-port.sexpr | 1 +
.../sexpr2xml-fv-serial-dev-2nd-port.xml | 49 +++++++
tests/sexpr2xmltest.c | 2 +
.../test-fullvirt-serial-dev-2-ports.cfg | 25 ++++
.../test-fullvirt-serial-dev-2-ports.xml | 55 ++++++++
.../test-fullvirt-serial-dev-2nd-port.cfg | 25 ++++
.../test-fullvirt-serial-dev-2nd-port.xml | 53 ++++++++
tests/xmconfigtest.c | 2 +
.../xml2sexpr-fv-serial-dev-2-ports.sexpr | 1 +
.../xml2sexpr-fv-serial-dev-2-ports.xml | 44 +++++++
.../xml2sexpr-fv-serial-dev-2nd-port.sexpr | 1 +
.../xml2sexpr-fv-serial-dev-2nd-port.xml | 40 ++++++
tests/xml2sexprtest.c | 2 +
18 files changed, 546 insertions(+), 38 deletions(-)
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2nd-port.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2nd-port.xml
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2nd-port.cfg
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2nd-port.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2nd-port.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2nd-port.xml
--
1.7.3.2
libvirt-guests invokes functions from gettext.sh, so we need to
require the gettext package in the spec file.
Demo with the fix:
% rpm -q gettext
package gettext is not installed
% rpm -ivh libvirt-client-0.8.8-1.fc14.x86_64.rpm
error: Failed dependencies:
gettext is needed by libvirt-client-0.8.8-1.fc14.x86_64
* libvirt.spec.in
---
libvirt.spec.in | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index d4208e8..c4b4e7f 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -415,6 +415,7 @@ Requires: ncurses
# So remote clients can access libvirt over SSH tunnel
# (client invokes 'nc' against the UNIX socket on the server)
Requires: nc
+Requires: gettext
%if %{with_sasl}
Requires: cyrus-sasl
# Not technically required, but makes 'out-of-box' config
--
1.7.4
Hi,
Here are 2 cleanup patches for 2 smallish issues I found when
looking at how virHash was used in libvirt.
Christophe
[libvirt] [PATCH] check device-mapper when building with mpath or disk storage driver
by Wen Congyang 18 Feb '11
When I build libvirt without libvirtd, I receive the following errors:
GEN virsh.1
CCLD virsh
../src/.libs/libvirt.so: undefined reference to `dm_is_dm_major'
collect2: ld returned 1 exit status
make[3]: *** [virsh] Error 1
My build step:
# ./autogen.sh --without-libvirtd
# make dist
# rpmbuild --nodeps --define "_sourcedir `pwd`" --define "_without_libvirtd 1" -ba libvirt.spec
This bug was caused by commit df1011ca.
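The build-time rule the patch enforces can be sketched as a simple disjunction: before the change, only the mpath backend pulled in the libdevmapper check, even though virIsDevMapperDevice() (and thus dm_is_dm_major) is also referenced by the disk backend. A hedged illustration, with hypothetical function names:

```python
def need_devmapper_before(with_storage_mpath, with_storage_disk):
    # Old configure.ac logic: only the mpath storage driver triggered
    # the libdevmapper check, so other configurations could still
    # reference dm_is_dm_major without linking -ldevmapper.
    return with_storage_mpath

def need_devmapper_after(with_storage_mpath, with_storage_disk):
    # Patched logic: either storage backend requires libdevmapper.
    return with_storage_mpath or with_storage_disk

# The failing configuration: disk driver on, mpath off.
print(need_devmapper_before(False, True))  # check skipped -> link error
print(need_devmapper_after(False, True))   # check now enforced
```

The companion util.c hunk guards virIsDevMapperDevice() with the matching preprocessor conditional so the symbol is only compiled in when one of those drivers is built.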
Signed-off-by: Wen Congyang <wency(a)cn.fujitsu.com>
---
configure.ac | 47 ++++++++++++++++++++++++-----------------------
src/util/util.c | 3 +++
2 files changed, 27 insertions(+), 23 deletions(-)
diff --git a/configure.ac b/configure.ac
index 2bb6918..2f22c18 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1697,29 +1697,6 @@ if test "$with_storage_mpath" = "check" && test "$with_linux" = "yes"; then
fi
AM_CONDITIONAL([WITH_STORAGE_MPATH], [test "$with_storage_mpath" = "yes"])
-if test "$with_storage_mpath" = "yes"; then
- DEVMAPPER_CFLAGS=
- DEVMAPPER_LIBS=
- PKG_CHECK_MODULES([DEVMAPPER], [devmapper >= $DEVMAPPER_REQUIRED], [], [DEVMAPPER_FOUND=no])
- if test "$DEVMAPPER_FOUND" = "no"; then
- # devmapper is missing pkg-config files in ubuntu, suse, etc
- save_LIBS="$LIBS"
- save_CFLAGS="$CFLAGS"
- DEVMAPPER_FOUND=yes
- AC_CHECK_HEADER([libdevmapper.h],,[DEVMAPPER_FOUND=no])
- AC_CHECK_LIB([devmapper], [dm_task_run],,[DEVMAPPER_FOUND=no])
- DEVMAPPER_LIBS="-ldevmapper"
- LIBS="$save_LIBS"
- CFLAGS="$save_CFLAGS"
- fi
- if test "$DEVMAPPER_FOUND" = "no" ; then
- AC_MSG_ERROR([You must install device-mapper-devel/libdevmapper >= $DEVMAPPER_REQUIRED to compile libvirt])
- fi
-
-fi
-AC_SUBST([DEVMAPPER_CFLAGS])
-AC_SUBST([DEVMAPPER_LIBS])
-
LIBPARTED_CFLAGS=
LIBPARTED_LIBS=
if test "$with_storage_disk" = "yes" ||
@@ -1781,6 +1758,30 @@ AM_CONDITIONAL([WITH_STORAGE_DISK], [test "$with_storage_disk" = "yes"])
AC_SUBST([LIBPARTED_CFLAGS])
AC_SUBST([LIBPARTED_LIBS])
+if test "$with_storage_mpath" = "yes" ||
+ test "$with_storage_disk" = "yes"; then
+ DEVMAPPER_CFLAGS=
+ DEVMAPPER_LIBS=
+ PKG_CHECK_MODULES([DEVMAPPER], [devmapper >= $DEVMAPPER_REQUIRED], [], [DEVMAPPER_FOUND=no])
+ if test "$DEVMAPPER_FOUND" = "no"; then
+ # devmapper is missing pkg-config files in ubuntu, suse, etc
+ save_LIBS="$LIBS"
+ save_CFLAGS="$CFLAGS"
+ DEVMAPPER_FOUND=yes
+ AC_CHECK_HEADER([libdevmapper.h],,[DEVMAPPER_FOUND=no])
+ AC_CHECK_LIB([devmapper], [dm_task_run],,[DEVMAPPER_FOUND=no])
+ DEVMAPPER_LIBS="-ldevmapper"
+ LIBS="$save_LIBS"
+ CFLAGS="$save_CFLAGS"
+ fi
+ if test "$DEVMAPPER_FOUND" = "no" ; then
+ AC_MSG_ERROR([You must install device-mapper-devel/libdevmapper >= $DEVMAPPER_REQUIRED to compile libvirt])
+ fi
+
+fi
+AC_SUBST([DEVMAPPER_CFLAGS])
+AC_SUBST([DEVMAPPER_LIBS])
+
dnl
dnl check for libcurl (ESX/XenAPI)
dnl
diff --git a/src/util/util.c b/src/util/util.c
index a7ce4b3..cd4ddf3 100644
--- a/src/util/util.c
+++ b/src/util/util.c
@@ -3125,6 +3125,8 @@ virTimestamp(void)
return timestamp;
}
+#if defined WITH_STORAGE_MPATH || defined WITH_STORAGE_DISK
+/* Currently only disk and mpath storage drivers use this function. */
bool
virIsDevMapperDevice(const char *devname)
{
@@ -3137,3 +3139,4 @@ virIsDevMapperDevice(const char *devname)
return false;
}
+#endif
--
1.7.1
Hi,
I am currently working on a project where I must implement a function to
clone a domain for the ESX driver.
I immediately saw that the cloneVM_Task() function from the VMware API is
not supported on standalone ESX.
Does anyone know the best way to implement this function anyway, or a
workaround?
Thank you in advance,
Cédric
[libvirt] [PATCH] Add support for multiple serial ports into the Xen driver
by Michal Novotny 18 Feb '11
Hi,
this is a patch adding support for multiple serial ports to the
libvirt Xen driver. It supports both the old-style (serial = "pty") and
new-style (serial = [ "/dev/ttyS0", "/dev/ttyS1" ]) definitions, and
tests for xml2sexpr, sexpr2xml and xmconfig have been added as well.
It was written and tested on a RHEL-5 Xen dom0 and works as designed,
but the Xen version has to carry the patch for RHBZ #614004.
Also, this patch addresses the issue described in RHBZ #670789.
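On the output side, the patch keeps the classic single-port s-expression and only switches to a nested list for several ports (as the test data below shows). A minimal sketch of that behaviour, assuming devices are already formatted as plain strings:

```python
def format_serial_sexpr(serials):
    """Sketch of the formatter behaviour added to xenDaemonFormatSxpr:
    one port keeps the '(serial <dev>)' form for compatibility, while
    several ports are wrapped in a nested list."""
    if not serials:
        return "(serial none)"
    if len(serials) == 1:
        return "(serial %s)" % serials[0]
    return "(serial (%s))" % " ".join(serials)

print(format_serial_sexpr(["pty"]))
print(format_serial_sexpr(["/dev/ttyS0", "/dev/ttyS1"]))
```

The two-port output matches the `(serial (/dev/ttyS0 /dev/ttyS1))` fragment in the sexpr test files added by this patch.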
Michal
Signed-off-by: Michal Novotny <minovotn(a)redhat.com>
---
src/xen/xend_internal.c | 73 ++++++++++-
src/xen/xm_internal.c | 141 +++++++++++++++++---
.../sexpr2xml-fv-serial-dev-2-ports.sexpr | 1 +
.../sexpr2xml-fv-serial-dev-2-ports.xml | 53 ++++++++
tests/sexpr2xmltest.c | 1 +
.../test-fullvirt-serial-dev-2-ports.cfg | 25 ++++
.../test-fullvirt-serial-dev-2-ports.xml | 55 ++++++++
tests/xmconfigtest.c | 1 +
.../xml2sexpr-fv-serial-dev-2-ports.sexpr | 1 +
.../xml2sexpr-fv-serial-dev-2-ports.xml | 44 ++++++
tests/xml2sexprtest.c | 1 +
11 files changed, 370 insertions(+), 26 deletions(-)
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 44d5a22..493736e 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -1218,6 +1218,9 @@ xenDaemonParseSxprChar(const char *value,
if (value[0] == '/') {
def->source.type = VIR_DOMAIN_CHR_TYPE_DEV;
+ def->data.file.path = strdup(value);
+ if (!def->data.file.path)
+ goto error;
} else {
if ((tmp = strchr(value, ':')) != NULL) {
*tmp = '\0';
@@ -2334,6 +2337,8 @@ xenDaemonParseSxpr(virConnectPtr conn,
tty = xenStoreDomainGetConsolePath(conn, def->id);
xenUnifiedUnlock(priv);
if (hvm) {
+
+
tmp = sexpr_node(root, "domain/image/hvm/serial");
if (tmp && STRNEQ(tmp, "none")) {
virDomainChrDefPtr chr;
@@ -2346,6 +2351,54 @@ xenDaemonParseSxpr(virConnectPtr conn,
chr->deviceType = VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL;
def->serials[def->nserials++] = chr;
}
+
+
+ const struct sexpr *serial_root;
+ bool have_multiple_serials = false;
+
+ serial_root = sexpr_lookup(root, "domain/image/hvm/serial");
+ if (serial_root) {
+ const struct sexpr *cur, *node, *cur2;
+
+ for (cur = serial_root; cur->kind == SEXPR_CONS; cur = cur->u.s.cdr) {
+ node = cur->u.s.car;
+
+ for (cur2 = node; cur2->kind == SEXPR_CONS; cur2 = cur2->u.s.cdr) {
+ tmp = cur2->u.s.car->u.value;
+
+ if (tmp && STRNEQ(tmp, "none")) {
+ virDomainChrDefPtr chr;
+ if ((chr = xenDaemonParseSxprChar(tmp, tty)) == NULL)
+ goto error;
+ if (VIR_REALLOC_N(def->serials, def->nserials+1) < 0) {
+ virDomainChrDefFree(chr);
+ goto no_memory;
+ }
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ chr->target.port = def->nserials;
+ def->serials[def->nserials++] = chr;
+ }
+ have_multiple_serials = true;
+ }
+ }
+ }
+
+ /* If no serial port has been defined (using the new-style definition) use the old way */
+ if (!have_multiple_serials) {
+ tmp = sexpr_node(root, "domain/image/hvm/serial");
+ if (tmp && STRNEQ(tmp, "none")) {
+ virDomainChrDefPtr chr;
+ if ((chr = xenDaemonParseSxprChar(tmp, tty)) == NULL)
+ goto error;
+ if (VIR_REALLOC_N(def->serials, def->nserials+1) < 0) {
+ virDomainChrDefFree(chr);
+ goto no_memory;
+ }
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ def->serials[def->nserials++] = chr;
+ }
+ }
+
tmp = sexpr_node(root, "domain/image/hvm/parallel");
if (tmp && STRNEQ(tmp, "none")) {
virDomainChrDefPtr chr;
@@ -5958,10 +6011,22 @@ xenDaemonFormatSxpr(virConnectPtr conn,
virBufferAddLit(&buf, "(parallel none)");
}
if (def->serials) {
- virBufferAddLit(&buf, "(serial ");
- if (xenDaemonFormatSxprChr(def->serials[0], &buf) < 0)
- goto error;
- virBufferAddLit(&buf, ")");
+ if (def->nserials > 1) {
+ virBufferAddLit(&buf, "(serial (");
+ for (i = 0; i < def->nserials; i++) {
+ if (xenDaemonFormatSxprChr(def->serials[i], &buf) < 0)
+ goto error;
+ if (i < def->nserials - 1)
+ virBufferAddLit(&buf, " ");
+ }
+ virBufferAddLit(&buf, "))");
+ }
+ else {
+ virBufferAddLit(&buf, "(serial ");
+ if (xenDaemonFormatSxprChr(def->serials[0], &buf) < 0)
+ goto error;
+ virBufferAddLit(&buf, ")");
+ }
} else {
virBufferAddLit(&buf, "(serial none)");
}
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index bfb6698..f457d80 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -1432,20 +1432,54 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
chr = NULL;
}
- if (xenXMConfigGetString(conf, "serial", &str, NULL) < 0)
- goto cleanup;
- if (str && STRNEQ(str, "none") &&
- !(chr = xenDaemonParseSxprChar(str, NULL)))
- goto cleanup;
+ /* Try to get the list of values to support multiple serial ports */
+ list = virConfGetValue(conf, "serial");
+ if (list && list->type == VIR_CONF_LIST) {
+ list = list->list;
+ while (list) {
+ char *port;
+
+ if ((list->type != VIR_CONF_STRING) || (list->str == NULL)) {
+ xenXMError(VIR_ERR_INTERNAL_ERROR,
+ _("config value %s was malformed"), port);
+ goto cleanup;
+ }
- if (chr) {
- if (VIR_ALLOC_N(def->serials, 1) < 0) {
- virDomainChrDefFree(chr);
- goto no_memory;
+ port = list->str;
+ if (VIR_ALLOC(chr) < 0)
+ goto no_memory;
+ if (!(chr = xenDaemonParseSxprChar(port, NULL)))
+ goto cleanup;
+
+ if (VIR_REALLOC_N(def->serials, def->nserials+1) < 0)
+ goto no_memory;
+
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ chr->target.port = def->nserials;
+
+ def->serials[def->nserials++] = chr;
+ chr = NULL;
+
+ list = list->next;
+ }
+ }
+ /* If domain is not using multiple serial ports we parse data old way */
+ else {
+ if (xenXMConfigGetString(conf, "serial", &str, NULL) < 0)
+ goto cleanup;
+ if (str && STRNEQ(str, "none") &&
+ !(chr = xenDaemonParseSxprChar(str, NULL)))
+ goto cleanup;
+
+ if (chr) {
+ if (VIR_ALLOC_N(def->serials, 1) < 0) {
+ virDomainChrDefFree(chr);
+ goto no_memory;
+ }
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ def->serials[0] = chr;
+ def->nserials++;
}
- chr->deviceType = VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL;
- def->serials[0] = chr;
- def->nserials++;
}
} else {
if (!(def->console = xenDaemonParseSxprChar("pty", NULL)))
@@ -2123,6 +2157,45 @@ cleanup:
return -1;
}
+static int xenXMDomainConfigFormatSerial(virConfValuePtr list,
+ virDomainChrDefPtr serial)
+{
+ virBuffer buf = VIR_BUFFER_INITIALIZER;
+ virConfValuePtr val, tmp;
+ int ret;
+
+ ret = xenDaemonFormatSxprChr(serial, &buf);
+ if (ret < 0) {
+ virReportOOMError();
+ goto cleanup;
+ }
+ if (virBufferError(&buf)) {
+ virReportOOMError();
+ goto cleanup;
+ }
+
+ if (VIR_ALLOC(val) < 0) {
+ virReportOOMError();
+ goto cleanup;
+ }
+
+ val->type = VIR_CONF_STRING;
+ val->str = virBufferContentAndReset(&buf);
+ tmp = list->list;
+ while (tmp && tmp->next)
+ tmp = tmp->next;
+ if (tmp)
+ tmp->next = val;
+ else
+ list->list = val;
+
+ return 0;
+
+cleanup:
+ virBufferFreeAndReset(&buf);
+ return -1;
+}
+
static int xenXMDomainConfigFormatNet(virConnectPtr conn,
virConfValuePtr list,
virDomainNetDefPtr net,
@@ -2685,17 +2758,41 @@ virConfPtr xenXMDomainConfigFormat(virConnectPtr conn,
}
if (def->nserials) {
- virBuffer buf = VIR_BUFFER_INITIALIZER;
- char *str;
- int ret;
+ /* If there's a single serial port definition use the old approach not to break old configs */
+ if (def->nserials == 1) {
+ virBuffer buf = VIR_BUFFER_INITIALIZER;
+ char *str;
+ int ret;
+
+ ret = xenDaemonFormatSxprChr(def->serials[0], &buf);
+ str = virBufferContentAndReset(&buf);
+ if (ret == 0)
+ ret = xenXMConfigSetString(conf, "serial", str);
+ VIR_FREE(str);
+ if (ret < 0)
+ goto no_memory;
+ }
+ else {
+ virConfValuePtr serialVal = NULL;
- ret = xenDaemonFormatSxprChr(def->serials[0], &buf);
- str = virBufferContentAndReset(&buf);
- if (ret == 0)
- ret = xenXMConfigSetString(conf, "serial", str);
- VIR_FREE(str);
- if (ret < 0)
- goto no_memory;
+ if (VIR_ALLOC(serialVal) < 0)
+ goto no_memory;
+ serialVal->type = VIR_CONF_LIST;
+ serialVal->list = NULL;
+
+ for (i = 0; i < def->nserials; i++) {
+ if (xenXMDomainConfigFormatSerial(serialVal, def->serials[i]) < 0)
+ goto cleanup;
+ }
+
+ if (serialVal->list != NULL) {
+ int ret = virConfSetValue(conf, "serial", serialVal);
+ serialVal = NULL;
+ if (ret < 0)
+ goto no_memory;
+ }
+ VIR_FREE(serialVal);
+ }
} else {
if (xenXMConfigSetString(conf, "serial", "none") < 0)
goto no_memory;
diff --git a/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
new file mode 100644
index 0000000..e709eb0
--- /dev/null
+++ b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
@@ -0,0 +1 @@
+(domain (domid 1)(name 'fvtest')(memory 400)(maxmem 400)(vcpus 1)(uuid 'b5d70dd275cdaca517769660b059d8ff')(on_poweroff 'destroy')(on_reboot 'restart')(on_crash 'restart')(image (hvm (kernel '/usr/lib/xen/boot/hvmloader')(vcpus 1)(boot c)(cdrom '/root/boot.iso')(acpi 1)(usb 1)(parallel none)(serial (/dev/ttyS0 /dev/ttyS1))(device_model '/usr/lib64/xen/bin/qemu-dm')(vnc 1)))(device (vbd (dev 'ioemu:hda')(uname 'file:/root/foo.img')(mode 'w')))(device (vif (mac '00:16:3e:1b:b1:47')(bridge 'xenbr0')(script 'vif-bridge')(type ioemu))))
diff --git a/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
new file mode 100644
index 0000000..783cd67
--- /dev/null
+++ b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
@@ -0,0 +1,53 @@
+<domain type='xen' id='1'>
+ <name>fvtest</name>
+ <uuid>b5d70dd2-75cd-aca5-1776-9660b059d8ff</uuid>
+ <memory>409600</memory>
+ <currentMemory>409600</currentMemory>
+ <vcpu>1</vcpu>
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ <features>
+ <acpi/>
+ </features>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
+ <disk type='file' device='disk'>
+ <driver name='file'/>
+ <source file='/root/foo.img'/>
+ <target dev='hda' bus='ide'/>
+ </disk>
+ <disk type='file' device='cdrom'>
+ <driver name='file'/>
+ <source file='/root/boot.iso'/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='00:16:3e:1b:b1:47'/>
+ <source bridge='xenbr0'/>
+ <script path='vif-bridge'/>
+ <target dev='vif1.0'/>
+ </interface>
+ <serial type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </serial>
+ <serial type='dev'>
+ <source path='/dev/ttyS1'/>
+ <target port='1'/>
+ </serial>
+ <console type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </console>
+ <input type='mouse' bus='ps2'/>
+ <graphics type='vnc' port='5901' autoport='no'/>
+ </devices>
+</domain>
diff --git a/tests/sexpr2xmltest.c b/tests/sexpr2xmltest.c
index f100dd8..4b5766d 100644
--- a/tests/sexpr2xmltest.c
+++ b/tests/sexpr2xmltest.c
@@ -158,6 +158,7 @@ mymain(int argc, char **argv)
DO_TEST("fv-serial-null", "fv-serial-null", 1);
DO_TEST("fv-serial-file", "fv-serial-file", 1);
+ DO_TEST("fv-serial-dev-2-ports", "fv-serial-dev-2-ports", 1);
DO_TEST("fv-serial-stdio", "fv-serial-stdio", 1);
DO_TEST("fv-serial-pty", "fv-serial-pty", 1);
DO_TEST("fv-serial-pipe", "fv-serial-pipe", 1);
diff --git a/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
new file mode 100644
index 0000000..86e7998
--- /dev/null
+++ b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
@@ -0,0 +1,25 @@
+name = "XenGuest2"
+uuid = "c7a5fdb2-cdaf-9455-926a-d65c16db1809"
+maxmem = 579
+memory = 394
+vcpus = 1
+builder = "hvm"
+kernel = "/usr/lib/xen/boot/hvmloader"
+boot = "d"
+pae = 1
+acpi = 1
+apic = 1
+localtime = 0
+on_poweroff = "destroy"
+on_reboot = "restart"
+on_crash = "restart"
+device_model = "/usr/lib/xen/bin/qemu-dm"
+sdl = 0
+vnc = 1
+vncunused = 1
+vnclisten = "127.0.0.1"
+vncpasswd = "123poi"
+disk = [ "phy:/dev/HostVG/XenGuest2,hda,w", "file:/root/boot.iso,hdc:cdrom,r" ]
+vif = [ "mac=00:16:3e:66:92:9c,bridge=xenbr1,script=vif-bridge,model=e1000,type=ioemu" ]
+parallel = "none"
+serial = [ "/dev/ttyS0", "/dev/ttyS1" ]
diff --git a/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
new file mode 100644
index 0000000..e4d3f16
--- /dev/null
+++ b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
@@ -0,0 +1,55 @@
+<domain type='xen'>
+ <name>XenGuest2</name>
+ <uuid>c7a5fdb2-cdaf-9455-926a-d65c16db1809</uuid>
+ <memory>592896</memory>
+ <currentMemory>403456</currentMemory>
+ <vcpu>1</vcpu>
+ <os>
+ <type arch='i686' machine='xenfv'>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='cdrom'/>
+ </os>
+ <features>
+ <acpi/>
+ <apic/>
+ <pae/>
+ </features>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
+ <disk type='block' device='disk'>
+ <driver name='phy'/>
+ <source dev='/dev/HostVG/XenGuest2'/>
+ <target dev='hda' bus='ide'/>
+ </disk>
+ <disk type='file' device='cdrom'>
+ <driver name='file'/>
+ <source file='/root/boot.iso'/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='00:16:3e:66:92:9c'/>
+ <source bridge='xenbr1'/>
+ <script path='vif-bridge'/>
+ <model type='e1000'/>
+ </interface>
+ <serial type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </serial>
+ <serial type='dev'>
+ <source path='/dev/ttyS1'/>
+ <target port='1'/>
+ </serial>
+ <console type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </console>
+ <input type='mouse' bus='ps2'/>
+ <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='123poi'/>
+ </devices>
+</domain>
diff --git a/tests/xmconfigtest.c b/tests/xmconfigtest.c
index ea00747..0de890c 100644
--- a/tests/xmconfigtest.c
+++ b/tests/xmconfigtest.c
@@ -218,6 +218,7 @@ mymain(int argc, char **argv)
DO_TEST("fullvirt-usbtablet", 2);
DO_TEST("fullvirt-usbmouse", 2);
DO_TEST("fullvirt-serial-file", 2);
+ DO_TEST("fullvirt-serial-dev-2-ports", 2);
DO_TEST("fullvirt-serial-null", 2);
DO_TEST("fullvirt-serial-pipe", 2);
DO_TEST("fullvirt-serial-pty", 2);
diff --git a/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
new file mode 100644
index 0000000..2048159
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
@@ -0,0 +1 @@
+(vm (name 'fvtest')(memory 400)(maxmem 400)(vcpus 1)(uuid 'b5d70dd2-75cd-aca5-1776-9660b059d8bc')(on_poweroff 'destroy')(on_reboot 'restart')(on_crash 'restart')(image (hvm (kernel '/usr/lib/xen/boot/hvmloader')(vcpus 1)(boot c)(cdrom '/root/boot.iso')(acpi 1)(usb 1)(parallel none)(serial (/dev/ttyS0 /dev/ttyS1))(device_model '/usr/lib64/xen/bin/qemu-dm')(vnc 1)))(device (vbd (dev 'ioemu:hda')(uname 'file:/root/foo.img')(mode 'w')))(device (vif (mac '00:16:3e:1b:b1:47')(bridge 'xenbr0')(script 'vif-bridge')(model 'e1000')(type ioemu))))
\ No newline at end of file
diff --git a/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
new file mode 100644
index 0000000..e5d1817
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
@@ -0,0 +1,44 @@
+<domain type='xen'>
+ <name>fvtest</name>
+ <uuid>b5d70dd275cdaca517769660b059d8bc</uuid>
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ <memory>409600</memory>
+ <vcpu>1</vcpu>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <features>
+ <acpi/>
+ </features>
+ <devices>
+ <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
+ <interface type='bridge'>
+ <source bridge='xenbr0'/>
+ <mac address='00:16:3e:1b:b1:47'/>
+ <script path='vif-bridge'/>
+ <model type='e1000'/>
+ </interface>
+ <disk type='file' device='cdrom'>
+ <source file='/root/boot.iso'/>
+ <target dev='hdc'/>
+ <readonly/>
+ </disk>
+ <disk type='file'>
+ <source file='/root/foo.img'/>
+ <target dev='ioemu:hda'/>
+ </disk>
+ <serial type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </serial>
+ <serial type='dev'>
+ <source path='/dev/ttyS1'/>
+ <target port='1'/>
+ </serial>
+ <graphics type='vnc' port='5917' keymap='ja'/>
+ </devices>
+</domain>
diff --git a/tests/xml2sexprtest.c b/tests/xml2sexprtest.c
index 8a5d115..ed10dec 100644
--- a/tests/xml2sexprtest.c
+++ b/tests/xml2sexprtest.c
@@ -148,6 +148,7 @@ mymain(int argc, char **argv)
DO_TEST("fv-serial-null", "fv-serial-null", "fvtest", 1);
DO_TEST("fv-serial-file", "fv-serial-file", "fvtest", 1);
+ DO_TEST("fv-serial-dev-2-ports", "fv-serial-dev-2-ports", "fvtest", 1);
DO_TEST("fv-serial-stdio", "fv-serial-stdio", "fvtest", 1);
DO_TEST("fv-serial-pty", "fv-serial-pty", "fvtest", 1);
DO_TEST("fv-serial-pipe", "fv-serial-pipe", "fvtest", 1);
--
1.7.3.2
Also cfg.mk is tweaked to force this for all future changes to *.py
files.
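The sc_TAB_in_indentation rule extended below rejects any line whose leading whitespace contains a TAB. A rough Python equivalent of that check (the regex is an approximation of the cfg.mk `prohibit` pattern, not a verbatim copy):

```python
import re

# Flag lines whose indentation starts with optional spaces followed by
# a TAB -- the condition the cfg.mk syntax check prohibits for *.py files.
TAB_IN_INDENT = re.compile(r"^ *\t")

def offending_lines(text):
    """Return 1-based line numbers that use a TAB in their indentation."""
    return [n for n, line in enumerate(text.splitlines(), 1)
            if TAB_IN_INDENT.match(line)]

src = "def f():\n\tx = 1\n    y = 2\n"
print(offending_lines(src))  # -> [2]
```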
---
cfg.mk | 4 +-
docs/apibuild.py | 3194 ++++++++++++++++++++++++------------------------
docs/index.py | 834 +++++++-------
python/generator.py | 578 +++++-----
python/tests/create.py | 8 +-
5 files changed, 2309 insertions(+), 2309 deletions(-)
diff --git a/cfg.mk b/cfg.mk
index 4bbeb96..f870723 100644
--- a/cfg.mk
+++ b/cfg.mk
@@ -327,8 +327,8 @@ sc_prohibit_ctype_h:
# files in gnulib, since they're imported.
sc_TAB_in_indentation:
@prohibit='^ * ' \
- in_vc_files='(\.(rng|[ch](\.in)?|html.in)|(daemon|tools)/.*\.in)$$' \
- halt='use leading spaces, not TAB, in C, sh, html, and RNG schemas' \
+ in_vc_files='(\.(rng|[ch](\.in)?|html.in|py)|(daemon|tools)/.*\.in)$$' \
+ halt='use leading spaces, not TAB, in C, sh, html, py, and RNG schemas' \
$(_sc_search_regexp)
ctype_re = isalnum|isalpha|isascii|isblank|iscntrl|isdigit|isgraph|islower\
diff --git a/docs/apibuild.py b/docs/apibuild.py
index 895a313..506932d 100755
--- a/docs/apibuild.py
+++ b/docs/apibuild.py
@@ -72,34 +72,34 @@ class identifier:
def __init__(self, name, header=None, module=None, type=None, lineno = 0,
info=None, extra=None, conditionals = None):
self.name = name
- self.header = header
- self.module = module
- self.type = type
- self.info = info
- self.extra = extra
- self.lineno = lineno
- self.static = 0
- if conditionals == None or len(conditionals) == 0:
- self.conditionals = None
- else:
- self.conditionals = conditionals[:]
- if self.name == debugsym:
- print "=> define %s : %s" % (debugsym, (module, type, info,
- extra, conditionals))
+ self.header = header
+ self.module = module
+ self.type = type
+ self.info = info
+ self.extra = extra
+ self.lineno = lineno
+ self.static = 0
+ if conditionals == None or len(conditionals) == 0:
+ self.conditionals = None
+ else:
+ self.conditionals = conditionals[:]
+ if self.name == debugsym:
+ print "=> define %s : %s" % (debugsym, (module, type, info,
+ extra, conditionals))
def __repr__(self):
r = "%s %s:" % (self.type, self.name)
- if self.static:
- r = r + " static"
- if self.module != None:
- r = r + " from %s" % (self.module)
- if self.info != None:
- r = r + " " + `self.info`
- if self.extra != None:
- r = r + " " + `self.extra`
- if self.conditionals != None:
- r = r + " " + `self.conditionals`
- return r
+ if self.static:
+ r = r + " static"
+ if self.module != None:
+ r = r + " from %s" % (self.module)
+ if self.info != None:
+ r = r + " " + `self.info`
+ if self.extra != None:
+ r = r + " " + `self.extra`
+ if self.conditionals != None:
+ r = r + " " + `self.conditionals`
+ return r
def set_header(self, header):
@@ -117,10 +117,10 @@ class identifier:
def set_static(self, static):
self.static = static
def set_conditionals(self, conditionals):
- if conditionals == None or len(conditionals) == 0:
- self.conditionals = None
- else:
- self.conditionals = conditionals[:]
+ if conditionals == None or len(conditionals) == 0:
+ self.conditionals = None
+ else:
+ self.conditionals = conditionals[:]
def get_name(self):
return self.name
@@ -143,96 +143,96 @@ class identifier:
def update(self, header, module, type = None, info = None, extra=None,
conditionals=None):
- if self.name == debugsym:
- print "=> update %s : %s" % (debugsym, (module, type, info,
- extra, conditionals))
+ if self.name == debugsym:
+ print "=> update %s : %s" % (debugsym, (module, type, info,
+ extra, conditionals))
if header != None and self.header == None:
- self.set_header(module)
+ self.set_header(module)
if module != None and (self.module == None or self.header == self.module):
- self.set_module(module)
+ self.set_module(module)
if type != None and self.type == None:
- self.set_type(type)
+ self.set_type(type)
if info != None:
- self.set_info(info)
+ self.set_info(info)
if extra != None:
- self.set_extra(extra)
+ self.set_extra(extra)
if conditionals != None:
- self.set_conditionals(conditionals)
+ self.set_conditionals(conditionals)
class index:
def __init__(self, name = "noname"):
self.name = name
self.identifiers = {}
self.functions = {}
- self.variables = {}
- self.includes = {}
- self.structs = {}
- self.enums = {}
- self.typedefs = {}
- self.macros = {}
- self.references = {}
- self.info = {}
+ self.variables = {}
+ self.includes = {}
+ self.structs = {}
+ self.enums = {}
+ self.typedefs = {}
+ self.macros = {}
+ self.references = {}
+ self.info = {}
def add_ref(self, name, header, module, static, type, lineno, info=None, extra=None, conditionals = None):
if name[0:2] == '__':
- return None
+ return None
d = None
try:
- d = self.identifiers[name]
- d.update(header, module, type, lineno, info, extra, conditionals)
- except:
- d = identifier(name, header, module, type, lineno, info, extra, conditionals)
- self.identifiers[name] = d
+ d = self.identifiers[name]
+ d.update(header, module, type, lineno, info, extra, conditionals)
+ except:
+ d = identifier(name, header, module, type, lineno, info, extra, conditionals)
+ self.identifiers[name] = d
- if d != None and static == 1:
- d.set_static(1)
+ if d != None and static == 1:
+ d.set_static(1)
- if d != None and name != None and type != None:
- self.references[name] = d
+ if d != None and name != None and type != None:
+ self.references[name] = d
- if name == debugsym:
- print "New ref: %s" % (d)
+ if name == debugsym:
+ print "New ref: %s" % (d)
- return d
+ return d
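The `add_ref` flow above follows an insert-or-update pattern: look the symbol up, update it if known, otherwise create and register it, skipping reserved `__`-prefixed names. A minimal modern-Python sketch of that pattern (class and field names here are illustrative, not part of the patch):

```python
# Minimal sketch of the insert-or-update registry pattern used by
# index.add_ref above; names and record shape are illustrative only.
class Registry:
    def __init__(self):
        self.identifiers = {}

    def add_ref(self, name, header, module, static=False):
        # Symbols starting with '__' are reserved and skipped.
        if name.startswith('__'):
            return None
        entry = self.identifiers.get(name)
        if entry is None:
            # First sighting: create and register a fresh record.
            entry = {'name': name, 'header': header,
                     'module': module, 'static': static}
            self.identifiers[name] = entry
        else:
            # Already known: merge in newly supplied fields.
            entry['module'] = module or entry['module']
            entry['static'] = entry['static'] or static
        return entry

reg = Registry()
reg.add_ref('xmlParse', 'parser.h', 'parser')
reg.add_ref('xmlParse', 'parser.h', 'parser', static=True)
```

Using a `dict.get` lookup instead of the original's bare `try`/`except` makes the insert-or-update branch explicit and avoids masking unrelated exceptions.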
def add(self, name, header, module, static, type, lineno, info=None, extra=None, conditionals = None):
if name[0:2] == '__':
- return None
+ return None
d = None
try:
- d = self.identifiers[name]
- d.update(header, module, type, lineno, info, extra, conditionals)
- except:
- d = identifier(name, header, module, type, lineno, info, extra, conditionals)
- self.identifiers[name] = d
-
- if d != None and static == 1:
- d.set_static(1)
-
- if d != None and name != None and type != None:
- if type == "function":
- self.functions[name] = d
- elif type == "functype":
- self.functions[name] = d
- elif type == "variable":
- self.variables[name] = d
- elif type == "include":
- self.includes[name] = d
- elif type == "struct":
- self.structs[name] = d
- elif type == "enum":
- self.enums[name] = d
- elif type == "typedef":
- self.typedefs[name] = d
- elif type == "macro":
- self.macros[name] = d
- else:
- print "Unable to register type ", type
-
- if name == debugsym:
- print "New symbol: %s" % (d)
-
- return d
+ d = self.identifiers[name]
+ d.update(header, module, type, lineno, info, extra, conditionals)
+ except:
+ d = identifier(name, header, module, type, lineno, info, extra, conditionals)
+ self.identifiers[name] = d
+
+ if d != None and static == 1:
+ d.set_static(1)
+
+ if d != None and name != None and type != None:
+ if type == "function":
+ self.functions[name] = d
+ elif type == "functype":
+ self.functions[name] = d
+ elif type == "variable":
+ self.variables[name] = d
+ elif type == "include":
+ self.includes[name] = d
+ elif type == "struct":
+ self.structs[name] = d
+ elif type == "enum":
+ self.enums[name] = d
+ elif type == "typedef":
+ self.typedefs[name] = d
+ elif type == "macro":
+ self.macros[name] = d
+ else:
+ print "Unable to register type ", type
+
+ if name == debugsym:
+ print "New symbol: %s" % (d)
+
+ return d
def merge(self, idx):
for id in idx.functions.keys():
@@ -240,41 +240,41 @@ class index:
# macro might be used to override functions or variables
# definitions
#
- if self.macros.has_key(id):
- del self.macros[id]
- if self.functions.has_key(id):
- print "function %s from %s redeclared in %s" % (
- id, self.functions[id].header, idx.functions[id].header)
- else:
- self.functions[id] = idx.functions[id]
- self.identifiers[id] = idx.functions[id]
+ if self.macros.has_key(id):
+ del self.macros[id]
+ if self.functions.has_key(id):
+ print "function %s from %s redeclared in %s" % (
+ id, self.functions[id].header, idx.functions[id].header)
+ else:
+ self.functions[id] = idx.functions[id]
+ self.identifiers[id] = idx.functions[id]
for id in idx.variables.keys():
#
# macro might be used to override functions or variables
# definitions
#
- if self.macros.has_key(id):
- del self.macros[id]
- if self.variables.has_key(id):
- print "variable %s from %s redeclared in %s" % (
- id, self.variables[id].header, idx.variables[id].header)
- else:
- self.variables[id] = idx.variables[id]
- self.identifiers[id] = idx.variables[id]
+ if self.macros.has_key(id):
+ del self.macros[id]
+ if self.variables.has_key(id):
+ print "variable %s from %s redeclared in %s" % (
+ id, self.variables[id].header, idx.variables[id].header)
+ else:
+ self.variables[id] = idx.variables[id]
+ self.identifiers[id] = idx.variables[id]
for id in idx.structs.keys():
- if self.structs.has_key(id):
- print "struct %s from %s redeclared in %s" % (
- id, self.structs[id].header, idx.structs[id].header)
- else:
- self.structs[id] = idx.structs[id]
- self.identifiers[id] = idx.structs[id]
+ if self.structs.has_key(id):
+ print "struct %s from %s redeclared in %s" % (
+ id, self.structs[id].header, idx.structs[id].header)
+ else:
+ self.structs[id] = idx.structs[id]
+ self.identifiers[id] = idx.structs[id]
for id in idx.typedefs.keys():
- if self.typedefs.has_key(id):
- print "typedef %s from %s redeclared in %s" % (
- id, self.typedefs[id].header, idx.typedefs[id].header)
- else:
- self.typedefs[id] = idx.typedefs[id]
- self.identifiers[id] = idx.typedefs[id]
+ if self.typedefs.has_key(id):
+ print "typedef %s from %s redeclared in %s" % (
+ id, self.typedefs[id].header, idx.typedefs[id].header)
+ else:
+ self.typedefs[id] = idx.typedefs[id]
+ self.identifiers[id] = idx.typedefs[id]
for id in idx.macros.keys():
#
# macro might be used to override functions or variables
@@ -286,88 +286,88 @@ class index:
continue
if self.enums.has_key(id):
continue
- if self.macros.has_key(id):
- print "macro %s from %s redeclared in %s" % (
- id, self.macros[id].header, idx.macros[id].header)
- else:
- self.macros[id] = idx.macros[id]
- self.identifiers[id] = idx.macros[id]
+ if self.macros.has_key(id):
+ print "macro %s from %s redeclared in %s" % (
+ id, self.macros[id].header, idx.macros[id].header)
+ else:
+ self.macros[id] = idx.macros[id]
+ self.identifiers[id] = idx.macros[id]
for id in idx.enums.keys():
- if self.enums.has_key(id):
- print "enum %s from %s redeclared in %s" % (
- id, self.enums[id].header, idx.enums[id].header)
- else:
- self.enums[id] = idx.enums[id]
- self.identifiers[id] = idx.enums[id]
+ if self.enums.has_key(id):
+ print "enum %s from %s redeclared in %s" % (
+ id, self.enums[id].header, idx.enums[id].header)
+ else:
+ self.enums[id] = idx.enums[id]
+ self.identifiers[id] = idx.enums[id]
def merge_public(self, idx):
for id in idx.functions.keys():
- if self.functions.has_key(id):
- # check that function condition agrees with header
- if idx.functions[id].conditionals != \
- self.functions[id].conditionals:
- print "Header condition differs from Function for %s:" \
- % id
- print " H: %s" % self.functions[id].conditionals
- print " C: %s" % idx.functions[id].conditionals
- up = idx.functions[id]
- self.functions[id].update(None, up.module, up.type, up.info, up.extra)
- # else:
- # print "Function %s from %s is not declared in headers" % (
- # id, idx.functions[id].module)
- # TODO: do the same for variables.
+ if self.functions.has_key(id):
+ # check that function condition agrees with header
+ if idx.functions[id].conditionals != \
+ self.functions[id].conditionals:
+ print "Header condition differs from Function for %s:" \
+ % id
+ print " H: %s" % self.functions[id].conditionals
+ print " C: %s" % idx.functions[id].conditionals
+ up = idx.functions[id]
+ self.functions[id].update(None, up.module, up.type, up.info, up.extra)
+ # else:
+ # print "Function %s from %s is not declared in headers" % (
+ # id, idx.functions[id].module)
+ # TODO: do the same for variables.
def analyze_dict(self, type, dict):
count = 0
- public = 0
+ public = 0
for name in dict.keys():
- id = dict[name]
- count = count + 1
- if id.static == 0:
- public = public + 1
+ id = dict[name]
+ count = count + 1
+ if id.static == 0:
+ public = public + 1
if count != public:
- print " %d %s , %d public" % (count, type, public)
- elif count != 0:
- print " %d public %s" % (count, type)
+ print " %d %s , %d public" % (count, type, public)
+ elif count != 0:
+ print " %d public %s" % (count, type)
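The `analyze_dict` counting logic above tallies total entries against the public (non-static) subset. A sketch of the same computation in current Python, returning the report line instead of printing it (data shape is assumed, not taken from the patch):

```python
# Sketch of analyze_dict: count entries and how many are public
# (static == 0), and format the same summary line as the original.
def analyze_dict(type_name, entries):
    count = len(entries)
    public = sum(1 for e in entries.values() if e['static'] == 0)
    if count != public:
        return " %d %s , %d public" % (count, type_name, public)
    elif count != 0:
        return " %d public %s" % (count, type_name)
    return ""

funcs = {'a': {'static': 0}, 'b': {'static': 1}, 'c': {'static': 0}}
```

When every entry is public the shorter form is used, matching the original's `count != public` / `elif count != 0` branches.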
def analyze(self):
- self.analyze_dict("functions", self.functions)
- self.analyze_dict("variables", self.variables)
- self.analyze_dict("structs", self.structs)
- self.analyze_dict("typedefs", self.typedefs)
- self.analyze_dict("macros", self.macros)
+ self.analyze_dict("functions", self.functions)
+ self.analyze_dict("variables", self.variables)
+ self.analyze_dict("structs", self.structs)
+ self.analyze_dict("typedefs", self.typedefs)
+ self.analyze_dict("macros", self.macros)
class CLexer:
"""A lexer for the C language, tokenize the input by reading and
analyzing it line by line"""
def __init__(self, input):
self.input = input
- self.tokens = []
- self.line = ""
- self.lineno = 0
+ self.tokens = []
+ self.line = ""
+ self.lineno = 0
def getline(self):
line = ''
- while line == '':
- line = self.input.readline()
- if not line:
- return None
- self.lineno = self.lineno + 1
- line = string.lstrip(line)
- line = string.rstrip(line)
- if line == '':
- continue
- while line[-1] == '\\':
- line = line[:-1]
- n = self.input.readline()
- self.lineno = self.lineno + 1
- n = string.lstrip(n)
- n = string.rstrip(n)
- if not n:
- break
- else:
- line = line + n
+ while line == '':
+ line = self.input.readline()
+ if not line:
+ return None
+ self.lineno = self.lineno + 1
+ line = string.lstrip(line)
+ line = string.rstrip(line)
+ if line == '':
+ continue
+ while line[-1] == '\\':
+ line = line[:-1]
+ n = self.input.readline()
+ self.lineno = self.lineno + 1
+ n = string.lstrip(n)
+ n = string.rstrip(n)
+ if not n:
+ break
+ else:
+ line = line + n
return line
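The `getline` method above skips blank lines and joins backslash-continued lines, as needed for C preprocessor input. A self-contained Python 3 sketch of that behaviour (the original uses the Python 2 `string` module; this version uses string methods):

```python
# Sketch of CLexer.getline: strip whitespace, skip blank lines, and
# join lines ending in a backslash continuation into one logical line.
import io

def getline(stream):
    line = ''
    while line == '':
        line = stream.readline()
        if not line:
            return None  # end of input
        line = line.strip()
        if line == '':
            continue  # skip blank lines
        while line.endswith('\\'):
            line = line[:-1]  # drop the continuation backslash
            nxt = stream.readline().strip()
            if not nxt:
                break
            line = line + nxt
    return line
```

Note the original also stops joining when the continuation line is empty, which this sketch preserves with the `if not nxt: break` branch.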
def getlineno(self):
@@ -378,193 +378,193 @@ class CLexer:
def debug(self):
print "Last token: ", self.last
- print "Token queue: ", self.tokens
- print "Line %d end: " % (self.lineno), self.line
+ print "Token queue: ", self.tokens
+ print "Line %d end: " % (self.lineno), self.line
def token(self):
while self.tokens == []:
- if self.line == "":
- line = self.getline()
- else:
- line = self.line
- self.line = ""
- if line == None:
- return None
-
- if line[0] == '#':
- self.tokens = map((lambda x: ('preproc', x)),
- string.split(line))
- break;
- l = len(line)
- if line[0] == '"' or line[0] == "'":
- end = line[0]
- line = line[1:]
- found = 0
- tok = ""
- while found == 0:
- i = 0
- l = len(line)
- while i < l:
- if line[i] == end:
- self.line = line[i+1:]
- line = line[:i]
- l = i
- found = 1
- break
- if line[i] == '\\':
- i = i + 1
- i = i + 1
- tok = tok + line
- if found == 0:
- line = self.getline()
- if line == None:
- return None
- self.last = ('string', tok)
- return self.last
-
- if l >= 2 and line[0] == '/' and line[1] == '*':
- line = line[2:]
- found = 0
- tok = ""
- while found == 0:
- i = 0
- l = len(line)
- while i < l:
- if line[i] == '*' and i+1 < l and line[i+1] == '/':
- self.line = line[i+2:]
- line = line[:i-1]
- l = i
- found = 1
- break
- i = i + 1
- if tok != "":
- tok = tok + "\n"
- tok = tok + line
- if found == 0:
- line = self.getline()
- if line == None:
- return None
- self.last = ('comment', tok)
- return self.last
- if l >= 2 and line[0] == '/' and line[1] == '/':
- line = line[2:]
- self.last = ('comment', line)
- return self.last
- i = 0
- while i < l:
- if line[i] == '/' and i+1 < l and line[i+1] == '/':
- self.line = line[i:]
- line = line[:i]
- break
- if line[i] == '/' and i+1 < l and line[i+1] == '*':
- self.line = line[i:]
- line = line[:i]
- break
- if line[i] == '"' or line[i] == "'":
- self.line = line[i:]
- line = line[:i]
- break
- i = i + 1
- l = len(line)
- i = 0
- while i < l:
- if line[i] == ' ' or line[i] == '\t':
- i = i + 1
- continue
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57):
- s = i
- while i < l:
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57) or string.find(
- " \t(){}:;,+-*/%&!|[]=><", line[i]) == -1:
- i = i + 1
- else:
- break
- self.tokens.append(('name', line[s:i]))
- continue
- if string.find("(){}:;,[]", line[i]) != -1:
+ if self.line == "":
+ line = self.getline()
+ else:
+ line = self.line
+ self.line = ""
+ if line == None:
+ return None
+
+ if line[0] == '#':
+ self.tokens = map((lambda x: ('preproc', x)),
+ string.split(line))
+ break;
+ l = len(line)
+ if line[0] == '"' or line[0] == "'":
+ end = line[0]
+ line = line[1:]
+ found = 0
+ tok = ""
+ while found == 0:
+ i = 0
+ l = len(line)
+ while i < l:
+ if line[i] == end:
+ self.line = line[i+1:]
+ line = line[:i]
+ l = i
+ found = 1
+ break
+ if line[i] == '\\':
+ i = i + 1
+ i = i + 1
+ tok = tok + line
+ if found == 0:
+ line = self.getline()
+ if line == None:
+ return None
+ self.last = ('string', tok)
+ return self.last
+
+ if l >= 2 and line[0] == '/' and line[1] == '*':
+ line = line[2:]
+ found = 0
+ tok = ""
+ while found == 0:
+ i = 0
+ l = len(line)
+ while i < l:
+ if line[i] == '*' and i+1 < l and line[i+1] == '/':
+ self.line = line[i+2:]
+ line = line[:i-1]
+ l = i
+ found = 1
+ break
+ i = i + 1
+ if tok != "":
+ tok = tok + "\n"
+ tok = tok + line
+ if found == 0:
+ line = self.getline()
+ if line == None:
+ return None
+ self.last = ('comment', tok)
+ return self.last
+ if l >= 2 and line[0] == '/' and line[1] == '/':
+ line = line[2:]
+ self.last = ('comment', line)
+ return self.last
+ i = 0
+ while i < l:
+ if line[i] == '/' and i+1 < l and line[i+1] == '/':
+ self.line = line[i:]
+ line = line[:i]
+ break
+ if line[i] == '/' and i+1 < l and line[i+1] == '*':
+ self.line = line[i:]
+ line = line[:i]
+ break
+ if line[i] == '"' or line[i] == "'":
+ self.line = line[i:]
+ line = line[:i]
+ break
+ i = i + 1
+ l = len(line)
+ i = 0
+ while i < l:
+ if line[i] == ' ' or line[i] == '\t':
+ i = i + 1
+ continue
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57):
+ s = i
+ while i < l:
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57) or string.find(
+ " \t(){}:;,+-*/%&!|[]=><", line[i]) == -1:
+ i = i + 1
+ else:
+ break
+ self.tokens.append(('name', line[s:i]))
+ continue
+ if string.find("(){}:;,[]", line[i]) != -1:
# if line[i] == '(' or line[i] == ')' or line[i] == '{' or \
-# line[i] == '}' or line[i] == ':' or line[i] == ';' or \
-# line[i] == ',' or line[i] == '[' or line[i] == ']':
- self.tokens.append(('sep', line[i]))
- i = i + 1
- continue
- if string.find("+-*><=/%&!|.", line[i]) != -1:
+# line[i] == '}' or line[i] == ':' or line[i] == ';' or \
+# line[i] == ',' or line[i] == '[' or line[i] == ']':
+ self.tokens.append(('sep', line[i]))
+ i = i + 1
+ continue
+ if string.find("+-*><=/%&!|.", line[i]) != -1:
# if line[i] == '+' or line[i] == '-' or line[i] == '*' or \
-# line[i] == '>' or line[i] == '<' or line[i] == '=' or \
-# line[i] == '/' or line[i] == '%' or line[i] == '&' or \
-# line[i] == '!' or line[i] == '|' or line[i] == '.':
- if line[i] == '.' and i + 2 < l and \
- line[i+1] == '.' and line[i+2] == '.':
- self.tokens.append(('name', '...'))
- i = i + 3
- continue
-
- j = i + 1
- if j < l and (
- string.find("+-*><=/%&!|", line[j]) != -1):
-# line[j] == '+' or line[j] == '-' or line[j] == '*' or \
-# line[j] == '>' or line[j] == '<' or line[j] == '=' or \
-# line[j] == '/' or line[j] == '%' or line[j] == '&' or \
-# line[j] == '!' or line[j] == '|'):
- self.tokens.append(('op', line[i:j+1]))
- i = j + 1
- else:
- self.tokens.append(('op', line[i]))
- i = i + 1
- continue
- s = i
- while i < l:
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57) or (
- string.find(" \t(){}:;,+-*/%&!|[]=><", line[i]) == -1):
-# line[i] != ' ' and line[i] != '\t' and
-# line[i] != '(' and line[i] != ')' and
-# line[i] != '{' and line[i] != '}' and
-# line[i] != ':' and line[i] != ';' and
-# line[i] != ',' and line[i] != '+' and
-# line[i] != '-' and line[i] != '*' and
-# line[i] != '/' and line[i] != '%' and
-# line[i] != '&' and line[i] != '!' and
-# line[i] != '|' and line[i] != '[' and
-# line[i] != ']' and line[i] != '=' and
-# line[i] != '*' and line[i] != '>' and
-# line[i] != '<'):
- i = i + 1
- else:
- break
- self.tokens.append(('name', line[s:i]))
-
- tok = self.tokens[0]
- self.tokens = self.tokens[1:]
- self.last = tok
- return tok
+# line[i] == '>' or line[i] == '<' or line[i] == '=' or \
+# line[i] == '/' or line[i] == '%' or line[i] == '&' or \
+# line[i] == '!' or line[i] == '|' or line[i] == '.':
+ if line[i] == '.' and i + 2 < l and \
+ line[i+1] == '.' and line[i+2] == '.':
+ self.tokens.append(('name', '...'))
+ i = i + 3
+ continue
+
+ j = i + 1
+ if j < l and (
+ string.find("+-*><=/%&!|", line[j]) != -1):
+# line[j] == '+' or line[j] == '-' or line[j] == '*' or \
+# line[j] == '>' or line[j] == '<' or line[j] == '=' or \
+# line[j] == '/' or line[j] == '%' or line[j] == '&' or \
+# line[j] == '!' or line[j] == '|'):
+ self.tokens.append(('op', line[i:j+1]))
+ i = j + 1
+ else:
+ self.tokens.append(('op', line[i]))
+ i = i + 1
+ continue
+ s = i
+ while i < l:
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57) or (
+ string.find(" \t(){}:;,+-*/%&!|[]=><", line[i]) == -1):
+# line[i] != ' ' and line[i] != '\t' and
+# line[i] != '(' and line[i] != ')' and
+# line[i] != '{' and line[i] != '}' and
+# line[i] != ':' and line[i] != ';' and
+# line[i] != ',' and line[i] != '+' and
+# line[i] != '-' and line[i] != '*' and
+# line[i] != '/' and line[i] != '%' and
+# line[i] != '&' and line[i] != '!' and
+# line[i] != '|' and line[i] != '[' and
+# line[i] != ']' and line[i] != '=' and
+# line[i] != '*' and line[i] != '>' and
+# line[i] != '<'):
+ i = i + 1
+ else:
+ break
+ self.tokens.append(('name', line[s:i]))
+
+ tok = self.tokens[0]
+ self.tokens = self.tokens[1:]
+ self.last = tok
+ return tok
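The tokenizer above classifies each character by `ord` ranges and `string.find` membership tests against separator and operator sets. The same character classification can be sketched with plain membership operators (function name and the `'other'` bucket are illustrative):

```python
# Sketch of the CLexer character classes: the separator and operator
# sets match the string.find(...) tables used in token() above.
SEPS = "(){}:;,[]"
OPS = "+-*><=/%&!|."

def classify(ch):
    if ch in SEPS:
        return 'sep'
    if ch in OPS:
        return 'op'
    if ch.isalnum() or ch == '_':
        return 'name'  # identifier/number characters
    return 'other'
```

`ch in SEPS` is the idiomatic replacement for `string.find(SEPS, ch) != -1`, and `ch.isalnum()` covers the `ord` range checks for letters and digits.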
class CParser:
"""The C module parser"""
def __init__(self, filename, idx = None):
self.filename = filename
- if len(filename) > 2 and filename[-2:] == '.h':
- self.is_header = 1
- else:
- self.is_header = 0
+ if len(filename) > 2 and filename[-2:] == '.h':
+ self.is_header = 1
+ else:
+ self.is_header = 0
self.input = open(filename)
- self.lexer = CLexer(self.input)
- if idx == None:
- self.index = index()
- else:
- self.index = idx
- self.top_comment = ""
- self.last_comment = ""
- self.comment = None
- self.collect_ref = 0
- self.no_error = 0
- self.conditionals = []
- self.defines = []
+ self.lexer = CLexer(self.input)
+ if idx == None:
+ self.index = index()
+ else:
+ self.index = idx
+ self.top_comment = ""
+ self.last_comment = ""
+ self.comment = None
+ self.collect_ref = 0
+ self.no_error = 0
+ self.conditionals = []
+ self.defines = []
def collect_references(self):
self.collect_ref = 1
@@ -579,203 +579,203 @@ class CParser:
return self.lexer.getlineno()
def index_add(self, name, module, static, type, info=None, extra = None):
- if self.is_header == 1:
- self.index.add(name, module, module, static, type, self.lineno(),
- info, extra, self.conditionals)
- else:
- self.index.add(name, None, module, static, type, self.lineno(),
- info, extra, self.conditionals)
+ if self.is_header == 1:
+ self.index.add(name, module, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
+ else:
+ self.index.add(name, None, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
def index_add_ref(self, name, module, static, type, info=None,
extra = None):
- if self.is_header == 1:
- self.index.add_ref(name, module, module, static, type,
- self.lineno(), info, extra, self.conditionals)
- else:
- self.index.add_ref(name, None, module, static, type, self.lineno(),
- info, extra, self.conditionals)
+ if self.is_header == 1:
+ self.index.add_ref(name, module, module, static, type,
+ self.lineno(), info, extra, self.conditionals)
+ else:
+ self.index.add_ref(name, None, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
def warning(self, msg):
if self.no_error:
- return
- print msg
+ return
+ print msg
def error(self, msg, token=-1):
if self.no_error:
- return
+ return
print "Parse Error: " + msg
- if token != -1:
- print "Got token ", token
- self.lexer.debug()
- sys.exit(1)
+ if token != -1:
+ print "Got token ", token
+ self.lexer.debug()
+ sys.exit(1)
def debug(self, msg, token=-1):
print "Debug: " + msg
- if token != -1:
- print "Got token ", token
- self.lexer.debug()
+ if token != -1:
+ print "Got token ", token
+ self.lexer.debug()
def parseTopComment(self, comment):
- res = {}
- lines = string.split(comment, "\n")
- item = None
- for line in lines:
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- while line != "" and line[0] == '*':
- line = line[1:]
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- try:
- (it, line) = string.split(line, ":", 1)
- item = it
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- if res.has_key(item):
- res[item] = res[item] + " " + line
- else:
- res[item] = line
- except:
- if item != None:
- if res.has_key(item):
- res[item] = res[item] + " " + line
- else:
- res[item] = line
- self.index.info = res
+ res = {}
+ lines = string.split(comment, "\n")
+ item = None
+ for line in lines:
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ while line != "" and line[0] == '*':
+ line = line[1:]
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ try:
+ (it, line) = string.split(line, ":", 1)
+ item = it
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ if res.has_key(item):
+ res[item] = res[item] + " " + line
+ else:
+ res[item] = line
+ except:
+ if item != None:
+ if res.has_key(item):
+ res[item] = res[item] + " " + line
+ else:
+ res[item] = line
+ self.index.info = res
def parseComment(self, token):
if self.top_comment == "":
- self.top_comment = token[1]
- if self.comment == None or token[1][0] == '*':
- self.comment = token[1];
- else:
- self.comment = self.comment + token[1]
- token = self.lexer.token()
+ self.top_comment = token[1]
+ if self.comment == None or token[1][0] == '*':
+ self.comment = token[1];
+ else:
+ self.comment = self.comment + token[1]
+ token = self.lexer.token()
if string.find(self.comment, "DOC_DISABLE") != -1:
- self.stop_error()
+ self.stop_error()
if string.find(self.comment, "DOC_ENABLE") != -1:
- self.start_error()
+ self.start_error()
- return token
+ return token
#
# Parse a comment block associate to a typedef
#
def parseTypeComment(self, name, quiet = 0):
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
args = []
- desc = ""
+ desc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for type %s" % (name))
- return((args, desc))
+ if not quiet:
+ self.warning("Missing comment for type %s" % (name))
+ return((args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in type comment for %s" % (name))
- return((args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted type comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return((args, desc))
- del lines[0]
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = ""
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- desc = desc + " " + l
- del lines[0]
-
- desc = string.strip(desc)
-
- if quiet == 0:
- if desc == "":
- self.warning("Type comment for %s lack description of the macro" % (name))
-
- return(desc)
+ if not quiet:
+ self.warning("Missing * in type comment for %s" % (name))
+ return((args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted type comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return((args, desc))
+ del lines[0]
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = ""
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ desc = desc + " " + l
+ del lines[0]
+
+ desc = string.strip(desc)
+
+ if quiet == 0:
+ if desc == "":
+ self.warning("Type comment for %s lack description of the macro" % (name))
+
+ return(desc)
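`parseTypeComment` above validates that a doc comment starts with a `* name:` header and then gathers the remaining starred lines into a description. A condensed sketch of that shape (returning `None` for a misformatted header instead of emitting warnings):

```python
# Sketch of parseTypeComment: check the "* name:" header of a doc
# comment, then join the remaining lines into one description string.
def parse_type_comment(name, comment):
    lines = comment.split('\n')
    if lines and lines[0] == '*':
        del lines[0]
    if not lines or lines[0] != "* %s:" % name:
        return None  # misformatted header
    desc_parts = []
    for l in lines[1:]:
        l = l.lstrip('*').strip()
        if l:
            desc_parts.append(l)
    return ' '.join(desc_parts)
```

The original additionally warns when the description is empty; that reporting is omitted here to keep the parsing step itself visible.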
#
# Parse a comment block associate to a macro
#
def parseMacroComment(self, name, quiet = 0):
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
args = []
- desc = ""
+ desc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for macro %s" % (name))
- return((args, desc))
+ if not quiet:
+ self.warning("Missing comment for macro %s" % (name))
+ return((args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in macro comment for %s" % (name))
- return((args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted macro comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return((args, desc))
- del lines[0]
- while lines[0] == '*':
- del lines[0]
- while len(lines) > 0 and lines[0][0:3] == '* @':
- l = lines[0][3:]
- try:
- (arg, desc) = string.split(l, ':', 1)
- desc=string.strip(desc)
- arg=string.strip(arg)
+ if not quiet:
+ self.warning("Missing * in macro comment for %s" % (name))
+ return((args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted macro comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return((args, desc))
+ del lines[0]
+ while lines[0] == '*':
+ del lines[0]
+ while len(lines) > 0 and lines[0][0:3] == '* @':
+ l = lines[0][3:]
+ try:
+ (arg, desc) = string.split(l, ':', 1)
+ desc=string.strip(desc)
+ arg=string.strip(arg)
except:
- if not quiet:
- self.warning("Misformatted macro comment for %s" % (name))
- self.warning(" problem with '%s'" % (lines[0]))
- del lines[0]
- continue
- del lines[0]
- l = string.strip(lines[0])
- while len(l) > 2 and l[0:3] != '* @':
- while l[0] == '*':
- l = l[1:]
- desc = desc + ' ' + string.strip(l)
- del lines[0]
- if len(lines) == 0:
- break
- l = lines[0]
+ if not quiet:
+ self.warning("Misformatted macro comment for %s" % (name))
+ self.warning(" problem with '%s'" % (lines[0]))
+ del lines[0]
+ continue
+ del lines[0]
+ l = string.strip(lines[0])
+ while len(l) > 2 and l[0:3] != '* @':
+ while l[0] == '*':
+ l = l[1:]
+ desc = desc + ' ' + string.strip(l)
+ del lines[0]
+ if len(lines) == 0:
+ break
+ l = lines[0]
args.append((arg, desc))
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = ""
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- desc = desc + " " + l
- del lines[0]
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = ""
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ desc = desc + " " + l
+ del lines[0]
- desc = string.strip(desc)
+ desc = string.strip(desc)
- if quiet == 0:
- if desc == "":
- self.warning("Macro comment for %s lack description of the macro" % (name))
+ if quiet == 0:
+ if desc == "":
+ self.warning("Macro comment for %s lack description of the macro" % (name))
- return((args, desc))
+ return((args, desc))
#
# Parse a comment block and merge the information found in the
@@ -786,219 +786,219 @@ class CParser:
global ignored_functions
if name == 'main':
- quiet = 1
+ quiet = 1
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
if ignored_functions.has_key(name):
quiet = 1
- (ret, args) = description
- desc = ""
- retdesc = ""
+ (ret, args) = description
+ desc = ""
+ retdesc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for function %s" % (name))
- return(((ret[0], retdesc), args, desc))
+ if not quiet:
+ self.warning("Missing comment for function %s" % (name))
+ return(((ret[0], retdesc), args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in function comment for %s" % (name))
- return(((ret[0], retdesc), args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted function comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return(((ret[0], retdesc), args, desc))
- del lines[0]
- while lines[0] == '*':
- del lines[0]
- nbargs = len(args)
- while len(lines) > 0 and lines[0][0:3] == '* @':
- l = lines[0][3:]
- try:
- (arg, desc) = string.split(l, ':', 1)
- desc=string.strip(desc)
- arg=string.strip(arg)
+ if not quiet:
+ self.warning("Missing * in function comment for %s" % (name))
+ return(((ret[0], retdesc), args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted function comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return(((ret[0], retdesc), args, desc))
+ del lines[0]
+ while lines[0] == '*':
+ del lines[0]
+ nbargs = len(args)
+ while len(lines) > 0 and lines[0][0:3] == '* @':
+ l = lines[0][3:]
+ try:
+ (arg, desc) = string.split(l, ':', 1)
+ desc=string.strip(desc)
+ arg=string.strip(arg)
except:
- if not quiet:
- self.warning("Misformatted function comment for %s" % (name))
- self.warning(" problem with '%s'" % (lines[0]))
- del lines[0]
- continue
- del lines[0]
- l = string.strip(lines[0])
- while len(l) > 2 and l[0:3] != '* @':
- while l[0] == '*':
- l = l[1:]
- desc = desc + ' ' + string.strip(l)
- del lines[0]
- if len(lines) == 0:
- break
- l = lines[0]
- i = 0
- while i < nbargs:
- if args[i][1] == arg:
- args[i] = (args[i][0], arg, desc)
- break;
- i = i + 1
- if i >= nbargs:
- if not quiet:
- self.warning("Unable to find arg %s from function comment for %s" % (
- arg, name))
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = None
- while len(lines) > 0:
- l = lines[0]
- i = 0
- # Remove all leading '*', followed by at most one ' ' character
- # since we need to preserve correct identation of code examples
- while i < len(l) and l[i] == '*':
- i = i + 1
- if i > 0:
- if i < len(l) and l[i] == ' ':
- i = i + 1
- l = l[i:]
- if len(l) >= 6 and l[0:7] == "returns" or l[0:7] == "Returns":
- try:
- l = string.split(l, ' ', 1)[1]
- except:
- l = ""
- retdesc = string.strip(l)
- del lines[0]
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- retdesc = retdesc + " " + l
- del lines[0]
- else:
- if desc is not None:
- desc = desc + "\n" + l
- else:
- desc = l
- del lines[0]
-
- if desc is None:
- desc = ""
- retdesc = string.strip(retdesc)
- desc = string.strip(desc)
-
- if quiet == 0:
- #
- # report missing comments
- #
- i = 0
- while i < nbargs:
- if args[i][2] == None and args[i][0] != "void" and args[i][1] != None:
- self.warning("Function comment for %s lacks description of arg %s" % (name, args[i][1]))
- i = i + 1
- if retdesc == "" and ret[0] != "void":
- self.warning("Function comment for %s lacks description of return value" % (name))
- if desc == "":
- self.warning("Function comment for %s lacks description of the function" % (name))
-
-
- return(((ret[0], retdesc), args, desc))
+ if not quiet:
+ self.warning("Misformatted function comment for %s" % (name))
+ self.warning(" problem with '%s'" % (lines[0]))
+ del lines[0]
+ continue
+ del lines[0]
+ l = string.strip(lines[0])
+ while len(l) > 2 and l[0:3] != '* @':
+ while l[0] == '*':
+ l = l[1:]
+ desc = desc + ' ' + string.strip(l)
+ del lines[0]
+ if len(lines) == 0:
+ break
+ l = lines[0]
+ i = 0
+ while i < nbargs:
+ if args[i][1] == arg:
+ args[i] = (args[i][0], arg, desc)
+ break;
+ i = i + 1
+ if i >= nbargs:
+ if not quiet:
+ self.warning("Unable to find arg %s from function comment for %s" % (
+ arg, name))
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = None
+ while len(lines) > 0:
+ l = lines[0]
+ i = 0
+ # Remove all leading '*', followed by at most one ' ' character
+ # since we need to preserve correct indentation of code examples
+ while i < len(l) and l[i] == '*':
+ i = i + 1
+ if i > 0:
+ if i < len(l) and l[i] == ' ':
+ i = i + 1
+ l = l[i:]
+ if len(l) >= 7 and (l[0:7] == "returns" or l[0:7] == "Returns"):
+ try:
+ l = string.split(l, ' ', 1)[1]
+ except:
+ l = ""
+ retdesc = string.strip(l)
+ del lines[0]
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ retdesc = retdesc + " " + l
+ del lines[0]
+ else:
+ if desc is not None:
+ desc = desc + "\n" + l
+ else:
+ desc = l
+ del lines[0]
+
+ if desc is None:
+ desc = ""
+ retdesc = string.strip(retdesc)
+ desc = string.strip(desc)
+
+ if quiet == 0:
+ #
+ # report missing comments
+ #
+ i = 0
+ while i < nbargs:
+ if args[i][2] == None and args[i][0] != "void" and args[i][1] != None:
+ self.warning("Function comment for %s lacks description of arg %s" % (name, args[i][1]))
+ i = i + 1
+ if retdesc == "" and ret[0] != "void":
+ self.warning("Function comment for %s lacks description of return value" % (name))
+ if desc == "":
+ self.warning("Function comment for %s lacks description of the function" % (name))
+
+
+ return(((ret[0], retdesc), args, desc))
def parsePreproc(self, token):
- if debug:
- print "=> preproc ", token, self.lexer.tokens
+ if debug:
+ print "=> preproc ", token, self.lexer.tokens
name = token[1]
- if name == "#include":
- token = self.lexer.token()
- if token == None:
- return None
- if token[0] == 'preproc':
- self.index_add(token[1], self.filename, not self.is_header,
- "include")
- return self.lexer.token()
- return token
- if name == "#define":
- token = self.lexer.token()
- if token == None:
- return None
- if token[0] == 'preproc':
- # TODO macros with arguments
- name = token[1]
- lst = []
- token = self.lexer.token()
- while token != None and token[0] == 'preproc' and \
- token[1][0] != '#':
- lst.append(token[1])
- token = self.lexer.token()
+ if name == "#include":
+ token = self.lexer.token()
+ if token == None:
+ return None
+ if token[0] == 'preproc':
+ self.index_add(token[1], self.filename, not self.is_header,
+ "include")
+ return self.lexer.token()
+ return token
+ if name == "#define":
+ token = self.lexer.token()
+ if token == None:
+ return None
+ if token[0] == 'preproc':
+ # TODO macros with arguments
+ name = token[1]
+ lst = []
+ token = self.lexer.token()
+ while token != None and token[0] == 'preproc' and \
+ token[1][0] != '#':
+ lst.append(token[1])
+ token = self.lexer.token()
try:
- name = string.split(name, '(') [0]
+ name = string.split(name, '(') [0]
except:
pass
info = self.parseMacroComment(name, not self.is_header)
- self.index_add(name, self.filename, not self.is_header,
- "macro", info)
- return token
-
- #
- # Processing of conditionals modified by Bill 1/1/05
- #
- # We process conditionals (i.e. tokens from #ifdef, #ifndef,
- # #if, #else and #endif) for headers and mainline code,
- # store the ones from the header in libxml2-api.xml, and later
- # (in the routine merge_public) verify that the two (header and
- # mainline code) agree.
- #
- # There is a small problem with processing the headers. Some of
- # the variables are not concerned with enabling / disabling of
- # library functions (e.g. '__XML_PARSER_H__'), and we don't want
- # them to be included in libxml2-api.xml, or involved in
- # the check between the header and the mainline code. To
- # accomplish this, we ignore any conditional which doesn't include
- # the string 'ENABLED'
- #
- if name == "#ifdef":
- apstr = self.lexer.tokens[0][1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append("defined(%s)" % apstr)
- except:
- pass
- elif name == "#ifndef":
- apstr = self.lexer.tokens[0][1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append("!defined(%s)" % apstr)
- except:
- pass
- elif name == "#if":
- apstr = ""
- for tok in self.lexer.tokens:
- if apstr != "":
- apstr = apstr + " "
- apstr = apstr + tok[1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append(apstr)
- except:
- pass
- elif name == "#else":
- if self.conditionals != [] and \
- string.find(self.defines[-1], 'ENABLED') != -1:
- self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
- elif name == "#endif":
- if self.conditionals != [] and \
- string.find(self.defines[-1], 'ENABLED') != -1:
- self.conditionals = self.conditionals[:-1]
- self.defines = self.defines[:-1]
- token = self.lexer.token()
- while token != None and token[0] == 'preproc' and \
- token[1][0] != '#':
- token = self.lexer.token()
- return token
+ self.index_add(name, self.filename, not self.is_header,
+ "macro", info)
+ return token
+
+ #
+ # Processing of conditionals modified by Bill 1/1/05
+ #
+ # We process conditionals (i.e. tokens from #ifdef, #ifndef,
+ # #if, #else and #endif) for headers and mainline code,
+ # store the ones from the header in libxml2-api.xml, and later
+ # (in the routine merge_public) verify that the two (header and
+ # mainline code) agree.
+ #
+ # There is a small problem with processing the headers. Some of
+ # the variables are not concerned with enabling / disabling of
+ # library functions (e.g. '__XML_PARSER_H__'), and we don't want
+ # them to be included in libxml2-api.xml, or involved in
+ # the check between the header and the mainline code. To
+ # accomplish this, we ignore any conditional which doesn't include
+ # the string 'ENABLED'
+ #
+ if name == "#ifdef":
+ apstr = self.lexer.tokens[0][1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append("defined(%s)" % apstr)
+ except:
+ pass
+ elif name == "#ifndef":
+ apstr = self.lexer.tokens[0][1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append("!defined(%s)" % apstr)
+ except:
+ pass
+ elif name == "#if":
+ apstr = ""
+ for tok in self.lexer.tokens:
+ if apstr != "":
+ apstr = apstr + " "
+ apstr = apstr + tok[1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append(apstr)
+ except:
+ pass
+ elif name == "#else":
+ if self.conditionals != [] and \
+ string.find(self.defines[-1], 'ENABLED') != -1:
+ self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
+ elif name == "#endif":
+ if self.conditionals != [] and \
+ string.find(self.defines[-1], 'ENABLED') != -1:
+ self.conditionals = self.conditionals[:-1]
+ self.defines = self.defines[:-1]
+ token = self.lexer.token()
+ while token != None and token[0] == 'preproc' and \
+ token[1][0] != '#':
+ token = self.lexer.token()
+ return token
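The #ifdef/#ifndef/#else/#endif bookkeeping in parsePreproc above can be isolated into a small stand-alone sketch. This is a hypothetical, string-driven stand-in for the real token stream (ConditionalTracker is not a class in the patch), but the stack discipline is the same: every conditional symbol is pushed on defines for #endif pairing, while only the ones containing 'ENABLED' are recorded as API conditionals.

```python
# Minimal sketch of parsePreproc's conditional tracking: defines mirrors
# the full nesting, conditionals only the 'ENABLED' symbols exported to
# the API description.

class ConditionalTracker:
    def __init__(self):
        self.defines = []        # every conditional seen, for #endif pairing
        self.conditionals = []   # only the 'ENABLED' ones

    def directive(self, name, symbol=""):
        if name == "#ifdef":
            self.defines.append(symbol)
            if 'ENABLED' in symbol:
                self.conditionals.append("defined(%s)" % symbol)
        elif name == "#ifndef":
            self.defines.append(symbol)
            if 'ENABLED' in symbol:
                self.conditionals.append("!defined(%s)" % symbol)
        elif name == "#else":
            if self.conditionals and 'ENABLED' in self.defines[-1]:
                self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
        elif name == "#endif":
            if self.conditionals and 'ENABLED' in self.defines[-1]:
                self.conditionals = self.conditionals[:-1]
            self.defines = self.defines[:-1]

t = ConditionalTracker()
t.directive("#ifdef", "__XML_PARSER_H__")    # header guard: tracked, not exported
t.directive("#ifdef", "LIBXML_XPATH_ENABLED")
print(t.conditionals)                        # ['defined(LIBXML_XPATH_ENABLED)']
t.directive("#endif")
t.directive("#endif")
print(t.conditionals)                        # []
```

This is why a plain header guard like '__XML_PARSER_H__' never shows up in libxml2-api.xml: it only ever lives on the defines stack.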
#
# token acquisition on top of the lexer, it handle internally
@@ -1012,89 +1012,89 @@ class CParser:
global ignored_words
token = self.lexer.token()
- while token != None:
- if token[0] == 'comment':
- token = self.parseComment(token)
- continue
- elif token[0] == 'preproc':
- token = self.parsePreproc(token)
- continue
- elif token[0] == "name" and token[1] == "__const":
- token = ("name", "const")
- return token
- elif token[0] == "name" and token[1] == "__attribute":
- token = self.lexer.token()
- while token != None and token[1] != ";":
- token = self.lexer.token()
- return token
- elif token[0] == "name" and ignored_words.has_key(token[1]):
- (n, info) = ignored_words[token[1]]
- i = 0
- while i < n:
- token = self.lexer.token()
- i = i + 1
- token = self.lexer.token()
- continue
- else:
- if debug:
- print "=> ", token
- return token
- return None
+ while token != None:
+ if token[0] == 'comment':
+ token = self.parseComment(token)
+ continue
+ elif token[0] == 'preproc':
+ token = self.parsePreproc(token)
+ continue
+ elif token[0] == "name" and token[1] == "__const":
+ token = ("name", "const")
+ return token
+ elif token[0] == "name" and token[1] == "__attribute":
+ token = self.lexer.token()
+ while token != None and token[1] != ";":
+ token = self.lexer.token()
+ return token
+ elif token[0] == "name" and ignored_words.has_key(token[1]):
+ (n, info) = ignored_words[token[1]]
+ i = 0
+ while i < n:
+ token = self.lexer.token()
+ i = i + 1
+ token = self.lexer.token()
+ continue
+ else:
+ if debug:
+ print "=> ", token
+ return token
+ return None
#
# Parse a typedef, it records the type and its name.
#
def parseTypedef(self, token):
if token == None:
- return None
- token = self.parseType(token)
- if token == None:
- self.error("parsing typedef")
- return None
- base_type = self.type
- type = base_type
- #self.debug("end typedef type", token)
- while token != None:
- if token[0] == "name":
- name = token[1]
- signature = self.signature
- if signature != None:
- type = string.split(type, '(')[0]
- d = self.mergeFunctionComment(name,
- ((type, None), signature), 1)
- self.index_add(name, self.filename, not self.is_header,
- "functype", d)
- else:
- if base_type == "struct":
- self.index_add(name, self.filename, not self.is_header,
- "struct", type)
- base_type = "struct " + name
- else:
- # TODO report missing or misformatted comments
- info = self.parseTypeComment(name, 1)
- self.index_add(name, self.filename, not self.is_header,
- "typedef", type, info)
- token = self.token()
- else:
- self.error("parsing typedef: expecting a name")
- return token
- #self.debug("end typedef", token)
- if token != None and token[0] == 'sep' and token[1] == ',':
- type = base_type
- token = self.token()
- while token != None and token[0] == "op":
- type = type + token[1]
- token = self.token()
- elif token != None and token[0] == 'sep' and token[1] == ';':
- break;
- elif token != None and token[0] == 'name':
- type = base_type
- continue;
- else:
- self.error("parsing typedef: expecting ';'", token)
- return token
- token = self.token()
- return token
+ return None
+ token = self.parseType(token)
+ if token == None:
+ self.error("parsing typedef")
+ return None
+ base_type = self.type
+ type = base_type
+ #self.debug("end typedef type", token)
+ while token != None:
+ if token[0] == "name":
+ name = token[1]
+ signature = self.signature
+ if signature != None:
+ type = string.split(type, '(')[0]
+ d = self.mergeFunctionComment(name,
+ ((type, None), signature), 1)
+ self.index_add(name, self.filename, not self.is_header,
+ "functype", d)
+ else:
+ if base_type == "struct":
+ self.index_add(name, self.filename, not self.is_header,
+ "struct", type)
+ base_type = "struct " + name
+ else:
+ # TODO report missing or misformatted comments
+ info = self.parseTypeComment(name, 1)
+ self.index_add(name, self.filename, not self.is_header,
+ "typedef", type, info)
+ token = self.token()
+ else:
+ self.error("parsing typedef: expecting a name")
+ return token
+ #self.debug("end typedef", token)
+ if token != None and token[0] == 'sep' and token[1] == ',':
+ type = base_type
+ token = self.token()
+ while token != None and token[0] == "op":
+ type = type + token[1]
+ token = self.token()
+ elif token != None and token[0] == 'sep' and token[1] == ';':
+ break;
+ elif token != None and token[0] == 'name':
+ type = base_type
+ continue;
+ else:
+ self.error("parsing typedef: expecting ';'", token)
+ return token
+ token = self.token()
+ return token
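The ',' branch in parseTypedef above is what lets one typedef declare several names off a shared base type, re-attaching any leading '*' operators after each comma. A minimal string-based sketch of that fan-out (split_typedef and its flattened-declaration input are hypothetical; the real code works on the lexer's token stream):

```python
import re

def split_typedef(decl):
    """Sketch: 'unsigned int a, *b' -> [('unsigned int', 'a'), ('unsigned int *', 'b')]."""
    parts = [p.strip() for p in decl.split(',')]
    # peel the name and pointer ops off the first declarator to get the base type
    m = re.match(r'(?P<base>.+?)\s*(?P<ops>\**)\s*(?P<name>\w+)$', parts[0])
    base = m.group('base').strip()
    result = []
    for i, p in enumerate(parts):
        if i == 0:
            ops, name = m.group('ops'), m.group('name')
        else:
            # after ',' the type resets to the base; leading '*' ops re-attach
            m2 = re.match(r'(?P<ops>\**)\s*(?P<name>\w+)$', p)
            ops, name = m2.group('ops'), m2.group('name')
        result.append((base + ' ' + ops if ops else base, name))
    return result

print(split_typedef("unsigned int a, *b"))
# [('unsigned int', 'a'), ('unsigned int *', 'b')]
```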
#
# Parse a C code block, used for functions it parse till
@@ -1102,138 +1102,138 @@ class CParser:
#
def parseBlock(self, token):
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- self.comment = None
- token = self.token()
- return token
- else:
- if self.collect_ref == 1:
- oldtok = token
- token = self.token()
- if oldtok[0] == "name" and oldtok[1][0:3] == "vir":
- if token[0] == "sep" and token[1] == "(":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "function")
- token = self.token()
- elif token[0] == "name":
- token = self.token()
- if token[0] == "sep" and (token[1] == ";" or
- token[1] == "," or token[1] == "="):
- self.index_add_ref(oldtok[1], self.filename,
- 0, "type")
- elif oldtok[0] == "name" and oldtok[1][0:4] == "XEN_":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "typedef")
- elif oldtok[0] == "name" and oldtok[1][0:7] == "LIBXEN_":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "typedef")
-
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ self.comment = None
+ token = self.token()
+ return token
+ else:
+ if self.collect_ref == 1:
+ oldtok = token
+ token = self.token()
+ if oldtok[0] == "name" and oldtok[1][0:3] == "vir":
+ if token[0] == "sep" and token[1] == "(":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "function")
+ token = self.token()
+ elif token[0] == "name":
+ token = self.token()
+ if token[0] == "sep" and (token[1] == ";" or
+ token[1] == "," or token[1] == "="):
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "type")
+ elif oldtok[0] == "name" and oldtok[1][0:4] == "XEN_":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "typedef")
+ elif oldtok[0] == "name" and oldtok[1][0:7] == "LIBXEN_":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "typedef")
+
+ else:
+ token = self.token()
+ return token
#
# Parse a C struct definition till the balancing }
#
def parseStruct(self, token):
fields = []
- #self.debug("start parseStruct", token)
+ #self.debug("start parseStruct", token)
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- self.struct_fields = fields
- #self.debug("end parseStruct", token)
- #print fields
- token = self.token()
- return token
- else:
- base_type = self.type
- #self.debug("before parseType", token)
- token = self.parseType(token)
- #self.debug("after parseType", token)
- if token != None and token[0] == "name":
- fname = token[1]
- token = self.token()
- if token[0] == "sep" and token[1] == ";":
- self.comment = None
- token = self.token()
- fields.append((self.type, fname, self.comment))
- self.comment = None
- else:
- self.error("parseStruct: expecting ;", token)
- elif token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- if token != None and token[0] == "name":
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == ";":
- token = self.token()
- else:
- self.error("parseStruct: expecting ;", token)
- else:
- self.error("parseStruct: name", token)
- token = self.token()
- self.type = base_type;
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ self.struct_fields = fields
+ #self.debug("end parseStruct", token)
+ #print fields
+ token = self.token()
+ return token
+ else:
+ base_type = self.type
+ #self.debug("before parseType", token)
+ token = self.parseType(token)
+ #self.debug("after parseType", token)
+ if token != None and token[0] == "name":
+ fname = token[1]
+ token = self.token()
+ if token[0] == "sep" and token[1] == ";":
+ self.comment = None
+ token = self.token()
+ fields.append((self.type, fname, self.comment))
+ self.comment = None
+ else:
+ self.error("parseStruct: expecting ;", token)
+ elif token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ if token != None and token[0] == "name":
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == ";":
+ token = self.token()
+ else:
+ self.error("parseStruct: expecting ;", token)
+ else:
+ self.error("parseStruct: name", token)
+ token = self.token()
+ self.type = base_type;
self.struct_fields = fields
- #self.debug("end parseStruct", token)
- #print fields
- return token
+ #self.debug("end parseStruct", token)
+ #print fields
+ return token
#
# Parse a C enum block, parse till the balancing }
#
def parseEnumBlock(self, token):
self.enums = []
- name = None
- self.comment = None
- comment = ""
- value = "0"
+ name = None
+ self.comment = None
+ comment = ""
+ value = "0"
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- if name != None:
- if self.comment != None:
- comment = self.comment
- self.comment = None
- self.enums.append((name, value, comment))
- token = self.token()
- return token
- elif token[0] == "name":
- if name != None:
- if self.comment != None:
- comment = string.strip(self.comment)
- self.comment = None
- self.enums.append((name, value, comment))
- name = token[1]
- comment = ""
- token = self.token()
- if token[0] == "op" and token[1][0] == "=":
- value = ""
- if len(token[1]) > 1:
- value = token[1][1:]
- token = self.token()
- while token[0] != "sep" or (token[1] != ',' and
- token[1] != '}'):
- value = value + token[1]
- token = self.token()
- else:
- try:
- value = "%d" % (int(value) + 1)
- except:
- self.warning("Failed to compute value of enum %s" % (name))
- value=""
- if token[0] == "sep" and token[1] == ",":
- token = self.token()
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ if name != None:
+ if self.comment != None:
+ comment = self.comment
+ self.comment = None
+ self.enums.append((name, value, comment))
+ token = self.token()
+ return token
+ elif token[0] == "name":
+ if name != None:
+ if self.comment != None:
+ comment = string.strip(self.comment)
+ self.comment = None
+ self.enums.append((name, value, comment))
+ name = token[1]
+ comment = ""
+ token = self.token()
+ if token[0] == "op" and token[1][0] == "=":
+ value = ""
+ if len(token[1]) > 1:
+ value = token[1][1:]
+ token = self.token()
+ while token[0] != "sep" or (token[1] != ',' and
+ token[1] != '}'):
+ value = value + token[1]
+ token = self.token()
+ else:
+ try:
+ value = "%d" % (int(value) + 1)
+ except:
+ self.warning("Failed to compute value of enum %s" % (name))
+ value=""
+ if token[0] == "sep" and token[1] == ",":
+ token = self.token()
+ else:
+ token = self.token()
+ return token
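The value computation in parseEnumBlock above follows C's enum rules: an explicit '= expr' resets the counter, otherwise each constant gets the previous value plus one, and when the previous value is not a plain integer (an expression like a shift) the parser gives up and records an empty value. A small sketch of just that bookkeeping, as a hypothetical helper over (name, explicit) pairs rather than the real token stream, seeded at -1 so the first implicit constant comes out as C's default 0:

```python
def enum_values(entries):
    # entries: (name, explicit_value_or_None) pairs, e.g. from "enum { A, B = 10, C }"
    result = []
    value = "-1"   # so the first implicit constant becomes "0"
    for name, explicit in entries:
        if explicit is not None:
            value = explicit
        else:
            try:
                value = "%d" % (int(value) + 1)
            except ValueError:
                # previous value was an expression like '(1 << 4)': give up,
                # mirroring the "Failed to compute value of enum" warning above
                value = ""
        result.append((name, value))
    return result

print(enum_values([("A", None), ("B", "10"), ("C", None)]))
# [('A', '0'), ('B', '10'), ('C', '11')]
```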
#
# Parse a C definition block, used for structs it parse till
@@ -1241,15 +1241,15 @@ class CParser:
#
def parseTypeBlock(self, token):
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- token = self.token()
- return token
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ token = self.token()
+ return token
+ else:
+ token = self.token()
+ return token
#
# Parse a type: the fact that the type name can either occur after
@@ -1258,221 +1258,221 @@ class CParser:
#
def parseType(self, token):
self.type = ""
- self.struct_fields = []
+ self.struct_fields = []
self.signature = None
- if token == None:
- return token
-
- while token[0] == "name" and (
- token[1] == "const" or \
- token[1] == "unsigned" or \
- token[1] == "signed"):
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- token = self.token()
+ if token == None:
+ return token
+
+ while token[0] == "name" and (
+ token[1] == "const" or \
+ token[1] == "unsigned" or \
+ token[1] == "signed"):
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ token = self.token()
if token[0] == "name" and token[1] == "long":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
-
- # some read ahead for long long
- oldtmp = token
- token = self.token()
- if token[0] == "name" and token[1] == "long":
- self.type = self.type + " " + token[1]
- else:
- self.push(token)
- token = oldtmp
-
- if token[0] == "name" and token[1] == "int":
- if self.type == "":
- self.type = tmp[1]
- else:
- self.type = self.type + " " + tmp[1]
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+
+ # some read ahead for long long
+ oldtmp = token
+ token = self.token()
+ if token[0] == "name" and token[1] == "long":
+ self.type = self.type + " " + token[1]
+ else:
+ self.push(token)
+ token = oldtmp
+
+ if token[0] == "name" and token[1] == "int":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
elif token[0] == "name" and token[1] == "short":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- if token[0] == "name" and token[1] == "int":
- if self.type == "":
- self.type = tmp[1]
- else:
- self.type = self.type + " " + tmp[1]
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ if token[0] == "name" and token[1] == "int":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
elif token[0] == "name" and token[1] == "struct":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- token = self.token()
- nametok = None
- if token[0] == "name":
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseStruct(token)
- elif token != None and token[0] == "op" and token[1] == "*":
- self.type = self.type + " " + nametok[1] + " *"
- token = self.token()
- while token != None and token[0] == "op" and token[1] == "*":
- self.type = self.type + " *"
- token = self.token()
- if token[0] == "name":
- nametok = token
- token = self.token()
- else:
- self.error("struct : expecting name", token)
- return token
- elif token != None and token[0] == "name" and nametok != None:
- self.type = self.type + " " + nametok[1]
- return token
-
- if nametok != None:
- self.lexer.push(token)
- token = nametok
- return token
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ token = self.token()
+ nametok = None
+ if token[0] == "name":
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseStruct(token)
+ elif token != None and token[0] == "op" and token[1] == "*":
+ self.type = self.type + " " + nametok[1] + " *"
+ token = self.token()
+ while token != None and token[0] == "op" and token[1] == "*":
+ self.type = self.type + " *"
+ token = self.token()
+ if token[0] == "name":
+ nametok = token
+ token = self.token()
+ else:
+ self.error("struct : expecting name", token)
+ return token
+ elif token != None and token[0] == "name" and nametok != None:
+ self.type = self.type + " " + nametok[1]
+ return token
+
+ if nametok != None:
+ self.lexer.push(token)
+ token = nametok
+ return token
elif token[0] == "name" and token[1] == "enum":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- self.enums = []
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseEnumBlock(token)
- else:
- self.error("parsing enum: expecting '{'", token)
- enum_type = None
- if token != None and token[0] != "name":
- self.lexer.push(token)
- token = ("name", "enum")
- else:
- enum_type = token[1]
- for enum in self.enums:
- self.index_add(enum[0], self.filename,
- not self.is_header, "enum",
- (enum[1], enum[2], enum_type))
- return token
-
- elif token[0] == "name":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- else:
- self.error("parsing type %s: expecting a name" % (self.type),
- token)
- return token
- token = self.token()
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ self.enums = []
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseEnumBlock(token)
+ else:
+ self.error("parsing enum: expecting '{'", token)
+ enum_type = None
+ if token != None and token[0] != "name":
+ self.lexer.push(token)
+ token = ("name", "enum")
+ else:
+ enum_type = token[1]
+ for enum in self.enums:
+ self.index_add(enum[0], self.filename,
+ not self.is_header, "enum",
+ (enum[1], enum[2], enum_type))
+ return token
+
+ elif token[0] == "name":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ else:
+ self.error("parsing type %s: expecting a name" % (self.type),
+ token)
+ return token
+ token = self.token()
while token != None and (token[0] == "op" or
- token[0] == "name" and token[1] == "const"):
- self.type = self.type + " " + token[1]
- token = self.token()
-
- #
- # if there is a parenthesis here, this means a function type
- #
- if token != None and token[0] == "sep" and token[1] == '(':
- self.type = self.type + token[1]
- token = self.token()
- while token != None and token[0] == "op" and token[1] == '*':
- self.type = self.type + token[1]
- token = self.token()
- if token == None or token[0] != "name" :
- self.error("parsing function type, name expected", token);
- return token
- self.type = self.type + token[1]
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == ')':
- self.type = self.type + token[1]
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == '(':
- token = self.token()
- type = self.type;
- token = self.parseSignature(token);
- self.type = type;
- else:
- self.error("parsing function type, '(' expected", token);
- return token
- else:
- self.error("parsing function type, ')' expected", token);
- return token
- self.lexer.push(token)
- token = nametok
- return token
+ token[0] == "name" and token[1] == "const"):
+ self.type = self.type + " " + token[1]
+ token = self.token()
#
- # do some lookahead for arrays
- #
- if token != None and token[0] == "name":
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == '[':
- self.type = self.type + nametok[1]
- while token != None and token[0] == "sep" and token[1] == '[':
- self.type = self.type + token[1]
- token = self.token()
- while token != None and token[0] != 'sep' and \
- token[1] != ']' and token[1] != ';':
- self.type = self.type + token[1]
- token = self.token()
- if token != None and token[0] == 'sep' and token[1] == ']':
- self.type = self.type + token[1]
- token = self.token()
- else:
- self.error("parsing array type, ']' expected", token);
- return token
- elif token != None and token[0] == "sep" and token[1] == ':':
- # remove :12 in case it's a limited int size
- token = self.token()
- token = self.token()
- self.lexer.push(token)
- token = nametok
-
- return token
+ # if there is a parenthesis here, this means a function type
+ #
+ if token != None and token[0] == "sep" and token[1] == '(':
+ self.type = self.type + token[1]
+ token = self.token()
+ while token != None and token[0] == "op" and token[1] == '*':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token == None or token[0] != "name" :
+ self.error("parsing function type, name expected", token);
+ return token
+ self.type = self.type + token[1]
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == ')':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == '(':
+ token = self.token()
+ type = self.type;
+ token = self.parseSignature(token);
+ self.type = type;
+ else:
+ self.error("parsing function type, '(' expected", token);
+ return token
+ else:
+ self.error("parsing function type, ')' expected", token);
+ return token
+ self.lexer.push(token)
+ token = nametok
+ return token
+
+ #
+ # do some lookahead for arrays
+ #
+ if token != None and token[0] == "name":
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == '[':
+ self.type = self.type + nametok[1]
+ while token != None and token[0] == "sep" and token[1] == '[':
+ self.type = self.type + token[1]
+ token = self.token()
+ while token != None and token[0] != 'sep' and \
+ token[1] != ']' and token[1] != ';':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token != None and token[0] == 'sep' and token[1] == ']':
+ self.type = self.type + token[1]
+ token = self.token()
+ else:
+ self.error("parsing array type, ']' expected", token);
+ return token
+ elif token != None and token[0] == "sep" and token[1] == ':':
+ # remove :12 in case it's a limited int size
+ token = self.token()
+ token = self.token()
+ self.lexer.push(token)
+ token = nametok
+
+ return token
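The "long long" read-ahead in parseType above depends on being able to push an unwanted token back onto the lexer so the next caller sees it again. A self-contained sketch of that push-back pattern (PushbackLexer and read_long_type are hypothetical names, not classes from the patch):

```python
# One-token lookahead with push-back: read ahead, and if the next token
# does not extend the type, return it to the lexer.

class PushbackLexer:
    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pushed = []

    def token(self):
        if self.pushed:
            return self.pushed.pop()
        return self.tokens.pop(0) if self.tokens else None

    def push(self, tok):
        self.pushed.append(tok)

def read_long_type(lex):
    # assumes the first ("name", "long") token was just consumed
    typ = "long"
    tok = lex.token()               # read ahead for a second 'long'
    if tok == ("name", "long"):
        typ = typ + " long"
    elif tok is not None:
        lex.push(tok)               # not ours: give it back
    return typ

lex = PushbackLexer([("name", "long"), ("name", "x")])
lex.token()                 # consume the first 'long'
print(read_long_type(lex))  # long
print(lex.token())          # ('name', 'x') -- the pushed-back token
```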
#
# Parse a signature: '(' has been parsed and we scan the type definition
# up to the ')' included
def parseSignature(self, token):
signature = []
- if token != None and token[0] == "sep" and token[1] == ')':
- self.signature = []
- token = self.token()
- return token
- while token != None:
- token = self.parseType(token)
- if token != None and token[0] == "name":
- signature.append((self.type, token[1], None))
- token = self.token()
- elif token != None and token[0] == "sep" and token[1] == ',':
- token = self.token()
- continue
- elif token != None and token[0] == "sep" and token[1] == ')':
- # only the type was provided
- if self.type == "...":
- signature.append((self.type, "...", None))
- else:
- signature.append((self.type, None, None))
- if token != None and token[0] == "sep":
- if token[1] == ',':
- token = self.token()
- continue
- elif token[1] == ')':
- token = self.token()
- break
- self.signature = signature
- return token
+ if token != None and token[0] == "sep" and token[1] == ')':
+ self.signature = []
+ token = self.token()
+ return token
+ while token != None:
+ token = self.parseType(token)
+ if token != None and token[0] == "name":
+ signature.append((self.type, token[1], None))
+ token = self.token()
+ elif token != None and token[0] == "sep" and token[1] == ',':
+ token = self.token()
+ continue
+ elif token != None and token[0] == "sep" and token[1] == ')':
+ # only the type was provided
+ if self.type == "...":
+ signature.append((self.type, "...", None))
+ else:
+ signature.append((self.type, None, None))
+ if token != None and token[0] == "sep":
+ if token[1] == ',':
+ token = self.token()
+ continue
+ elif token[1] == ')':
+ token = self.token()
+ break
+ self.signature = signature
+ return token
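The triples parseSignature builds above are the parser's parameter records: (type, name, description), with the name left as None when only a type was given and "..." kept verbatim. A rough string-based stand-in (parse_signature is a hypothetical name; the real routine consumes tokens via parseType):

```python
import re

def parse_signature(params):
    # params: the text between '(' and ')', e.g. "int a, char *b, ..."
    if params.strip() == "":
        return []
    sig = []
    for p in [p.strip() for p in params.split(',')]:
        if p == "...":
            sig.append(("...", "...", None))
            continue
        # type must end in whitespace or '*' before the parameter name
        m = re.match(r'(?P<type>.+?[\s*])(?P<name>\w+)$', p)
        if m:
            sig.append((m.group('type').strip(), m.group('name'), None))
        else:
            sig.append((p, None, None))   # only the type was provided
    return sig

print(parse_signature("int a, char *b, ..."))
# [('int', 'a', None), ('char *', 'b', None), ('...', '...', None)]
```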
#
# Parse a global definition, be it a type, variable or function
@@ -1481,134 +1481,134 @@ class CParser:
def parseGlobal(self, token):
static = 0
if token[1] == 'extern':
- token = self.token()
- if token == None:
- return token
- if token[0] == 'string':
- if token[1] == 'C':
- token = self.token()
- if token == None:
- return token
- if token[0] == 'sep' and token[1] == "{":
- token = self.token()
-# print 'Entering extern "C line ', self.lineno()
- while token != None and (token[0] != 'sep' or
- token[1] != "}"):
- if token[0] == 'name':
- token = self.parseGlobal(token)
- else:
- self.error(
- "token %s %s unexpected at the top level" % (
- token[0], token[1]))
- token = self.parseGlobal(token)
-# print 'Exiting extern "C" line', self.lineno()
- token = self.token()
- return token
- else:
- return token
- elif token[1] == 'static':
- static = 1
- token = self.token()
- if token == None or token[0] != 'name':
- return token
-
- if token[1] == 'typedef':
- token = self.token()
- return self.parseTypedef(token)
- else:
- token = self.parseType(token)
- type_orig = self.type
- if token == None or token[0] != "name":
- return token
- type = type_orig
- self.name = token[1]
- token = self.token()
- while token != None and (token[0] == "sep" or token[0] == "op"):
- if token[0] == "sep":
- if token[1] == "[":
- type = type + token[1]
- token = self.token()
- while token != None and (token[0] != "sep" or \
- token[1] != ";"):
- type = type + token[1]
- token = self.token()
-
- if token != None and token[0] == "op" and token[1] == "=":
- #
- # Skip the initialization of the variable
- #
- token = self.token()
- if token[0] == 'sep' and token[1] == '{':
- token = self.token()
- token = self.parseBlock(token)
- else:
- self.comment = None
- while token != None and (token[0] != "sep" or \
- (token[1] != ';' and token[1] != ',')):
- token = self.token()
- self.comment = None
- if token == None or token[0] != "sep" or (token[1] != ';' and
- token[1] != ','):
- self.error("missing ';' or ',' after value")
-
- if token != None and token[0] == "sep":
- if token[1] == ";":
- self.comment = None
- token = self.token()
- if type == "struct":
- self.index_add(self.name, self.filename,
- not self.is_header, "struct", self.struct_fields)
- else:
- self.index_add(self.name, self.filename,
- not self.is_header, "variable", type)
- break
- elif token[1] == "(":
- token = self.token()
- token = self.parseSignature(token)
- if token == None:
- return None
- if token[0] == "sep" and token[1] == ";":
- d = self.mergeFunctionComment(self.name,
- ((type, None), self.signature), 1)
- self.index_add(self.name, self.filename, static,
- "function", d)
- token = self.token()
- elif token[0] == "sep" and token[1] == "{":
- d = self.mergeFunctionComment(self.name,
- ((type, None), self.signature), static)
- self.index_add(self.name, self.filename, static,
- "function", d)
- token = self.token()
- token = self.parseBlock(token);
- elif token[1] == ',':
- self.comment = None
- self.index_add(self.name, self.filename, static,
- "variable", type)
- type = type_orig
- token = self.token()
- while token != None and token[0] == "sep":
- type = type + token[1]
- token = self.token()
- if token != None and token[0] == "name":
- self.name = token[1]
- token = self.token()
- else:
- break
-
- return token
+ token = self.token()
+ if token == None:
+ return token
+ if token[0] == 'string':
+ if token[1] == 'C':
+ token = self.token()
+ if token == None:
+ return token
+ if token[0] == 'sep' and token[1] == "{":
+ token = self.token()
+# print 'Entering extern "C line ', self.lineno()
+ while token != None and (token[0] != 'sep' or
+ token[1] != "}"):
+ if token[0] == 'name':
+ token = self.parseGlobal(token)
+ else:
+ self.error(
+ "token %s %s unexpected at the top level" % (
+ token[0], token[1]))
+ token = self.parseGlobal(token)
+# print 'Exiting extern "C" line', self.lineno()
+ token = self.token()
+ return token
+ else:
+ return token
+ elif token[1] == 'static':
+ static = 1
+ token = self.token()
+ if token == None or token[0] != 'name':
+ return token
+
+ if token[1] == 'typedef':
+ token = self.token()
+ return self.parseTypedef(token)
+ else:
+ token = self.parseType(token)
+ type_orig = self.type
+ if token == None or token[0] != "name":
+ return token
+ type = type_orig
+ self.name = token[1]
+ token = self.token()
+ while token != None and (token[0] == "sep" or token[0] == "op"):
+ if token[0] == "sep":
+ if token[1] == "[":
+ type = type + token[1]
+ token = self.token()
+ while token != None and (token[0] != "sep" or \
+ token[1] != ";"):
+ type = type + token[1]
+ token = self.token()
+
+ if token != None and token[0] == "op" and token[1] == "=":
+ #
+ # Skip the initialization of the variable
+ #
+ token = self.token()
+ if token[0] == 'sep' and token[1] == '{':
+ token = self.token()
+ token = self.parseBlock(token)
+ else:
+ self.comment = None
+ while token != None and (token[0] != "sep" or \
+ (token[1] != ';' and token[1] != ',')):
+ token = self.token()
+ self.comment = None
+ if token == None or token[0] != "sep" or (token[1] != ';' and
+ token[1] != ','):
+ self.error("missing ';' or ',' after value")
+
+ if token != None and token[0] == "sep":
+ if token[1] == ";":
+ self.comment = None
+ token = self.token()
+ if type == "struct":
+ self.index_add(self.name, self.filename,
+ not self.is_header, "struct", self.struct_fields)
+ else:
+ self.index_add(self.name, self.filename,
+ not self.is_header, "variable", type)
+ break
+ elif token[1] == "(":
+ token = self.token()
+ token = self.parseSignature(token)
+ if token == None:
+ return None
+ if token[0] == "sep" and token[1] == ";":
+ d = self.mergeFunctionComment(self.name,
+ ((type, None), self.signature), 1)
+ self.index_add(self.name, self.filename, static,
+ "function", d)
+ token = self.token()
+ elif token[0] == "sep" and token[1] == "{":
+ d = self.mergeFunctionComment(self.name,
+ ((type, None), self.signature), static)
+ self.index_add(self.name, self.filename, static,
+ "function", d)
+ token = self.token()
+ token = self.parseBlock(token);
+ elif token[1] == ',':
+ self.comment = None
+ self.index_add(self.name, self.filename, static,
+ "variable", type)
+ type = type_orig
+ token = self.token()
+ while token != None and token[0] == "sep":
+ type = type + token[1]
+ token = self.token()
+ if token != None and token[0] == "name":
+ self.name = token[1]
+ token = self.token()
+ else:
+ break
+
+ return token
def parse(self):
self.warning("Parsing %s" % (self.filename))
token = self.token()
- while token != None:
+ while token != None:
if token[0] == 'name':
- token = self.parseGlobal(token)
+ token = self.parseGlobal(token)
else:
- self.error("token %s %s unexpected at the top level" % (
- token[0], token[1]))
- token = self.parseGlobal(token)
- return
- self.parseTopComment(self.top_comment)
+ self.error("token %s %s unexpected at the top level" % (
+ token[0], token[1]))
+ token = self.parseGlobal(token)
+ return
+ self.parseTopComment(self.top_comment)
return self.index
@@ -1618,449 +1618,449 @@ class docBuilder:
self.name = name
self.path = path
self.directories = directories
- self.includes = includes + included_files.keys()
- self.modules = {}
- self.headers = {}
- self.idx = index()
+ self.includes = includes + included_files.keys()
+ self.modules = {}
+ self.headers = {}
+ self.idx = index()
self.xref = {}
- self.index = {}
- self.basename = name
+ self.index = {}
+ self.basename = name
def indexString(self, id, str):
- if str == None:
- return
- str = string.replace(str, "'", ' ')
- str = string.replace(str, '"', ' ')
- str = string.replace(str, "/", ' ')
- str = string.replace(str, '*', ' ')
- str = string.replace(str, "[", ' ')
- str = string.replace(str, "]", ' ')
- str = string.replace(str, "(", ' ')
- str = string.replace(str, ")", ' ')
- str = string.replace(str, "<", ' ')
- str = string.replace(str, '>', ' ')
- str = string.replace(str, "&", ' ')
- str = string.replace(str, '#', ' ')
- str = string.replace(str, ",", ' ')
- str = string.replace(str, '.', ' ')
- str = string.replace(str, ';', ' ')
- tokens = string.split(str)
- for token in tokens:
- try:
- c = token[0]
- if string.find(string.letters, c) < 0:
- pass
- elif len(token) < 3:
- pass
- else:
- lower = string.lower(token)
- # TODO: generalize this a bit
- if lower == 'and' or lower == 'the':
- pass
- elif self.xref.has_key(token):
- self.xref[token].append(id)
- else:
- self.xref[token] = [id]
- except:
- pass
+ if str == None:
+ return
+ str = string.replace(str, "'", ' ')
+ str = string.replace(str, '"', ' ')
+ str = string.replace(str, "/", ' ')
+ str = string.replace(str, '*', ' ')
+ str = string.replace(str, "[", ' ')
+ str = string.replace(str, "]", ' ')
+ str = string.replace(str, "(", ' ')
+ str = string.replace(str, ")", ' ')
+ str = string.replace(str, "<", ' ')
+ str = string.replace(str, '>', ' ')
+ str = string.replace(str, "&", ' ')
+ str = string.replace(str, '#', ' ')
+ str = string.replace(str, ",", ' ')
+ str = string.replace(str, '.', ' ')
+ str = string.replace(str, ';', ' ')
+ tokens = string.split(str)
+ for token in tokens:
+ try:
+ c = token[0]
+ if string.find(string.letters, c) < 0:
+ pass
+ elif len(token) < 3:
+ pass
+ else:
+ lower = string.lower(token)
+ # TODO: generalize this a bit
+ if lower == 'and' or lower == 'the':
+ pass
+ elif self.xref.has_key(token):
+ self.xref[token].append(id)
+ else:
+ self.xref[token] = [id]
+ except:
+ pass
def analyze(self):
print "Project %s : %d headers, %d modules" % (self.name, len(self.headers.keys()), len(self.modules.keys()))
- self.idx.analyze()
+ self.idx.analyze()
def scanHeaders(self):
- for header in self.headers.keys():
- parser = CParser(header)
- idx = parser.parse()
- self.headers[header] = idx;
- self.idx.merge(idx)
+ for header in self.headers.keys():
+ parser = CParser(header)
+ idx = parser.parse()
+ self.headers[header] = idx;
+ self.idx.merge(idx)
def scanModules(self):
- for module in self.modules.keys():
- parser = CParser(module)
- idx = parser.parse()
- # idx.analyze()
- self.modules[module] = idx
- self.idx.merge_public(idx)
+ for module in self.modules.keys():
+ parser = CParser(module)
+ idx = parser.parse()
+ # idx.analyze()
+ self.modules[module] = idx
+ self.idx.merge_public(idx)
def scan(self):
for directory in self.directories:
- files = glob.glob(directory + "/*.c")
- for file in files:
- skip = 1
- for incl in self.includes:
- if string.find(file, incl) != -1:
- skip = 0;
- break
- if skip == 0:
- self.modules[file] = None;
- files = glob.glob(directory + "/*.h")
- for file in files:
- skip = 1
- for incl in self.includes:
- if string.find(file, incl) != -1:
- skip = 0;
- break
- if skip == 0:
- self.headers[file] = None;
- self.scanHeaders()
- self.scanModules()
+ files = glob.glob(directory + "/*.c")
+ for file in files:
+ skip = 1
+ for incl in self.includes:
+ if string.find(file, incl) != -1:
+ skip = 0;
+ break
+ if skip == 0:
+ self.modules[file] = None;
+ files = glob.glob(directory + "/*.h")
+ for file in files:
+ skip = 1
+ for incl in self.includes:
+ if string.find(file, incl) != -1:
+ skip = 0;
+ break
+ if skip == 0:
+ self.headers[file] = None;
+ self.scanHeaders()
+ self.scanModules()
def modulename_file(self, file):
module = os.path.basename(file)
- if module[-2:] == '.h':
- module = module[:-2]
- elif module[-2:] == '.c':
- module = module[:-2]
- return module
+ if module[-2:] == '.h':
+ module = module[:-2]
+ elif module[-2:] == '.c':
+ module = module[:-2]
+ return module
def serialize_enum(self, output, name):
id = self.idx.enums[name]
output.write(" <enum name='%s' file='%s'" % (name,
- self.modulename_file(id.header)))
- if id.info != None:
- info = id.info
- if info[0] != None and info[0] != '':
- try:
- val = eval(info[0])
- except:
- val = info[0]
- output.write(" value='%s'" % (val));
- if info[2] != None and info[2] != '':
- output.write(" type='%s'" % info[2]);
- if info[1] != None and info[1] != '':
- output.write(" info='%s'" % escape(info[1]));
+ self.modulename_file(id.header)))
+ if id.info != None:
+ info = id.info
+ if info[0] != None and info[0] != '':
+ try:
+ val = eval(info[0])
+ except:
+ val = info[0]
+ output.write(" value='%s'" % (val));
+ if info[2] != None and info[2] != '':
+ output.write(" type='%s'" % info[2]);
+ if info[1] != None and info[1] != '':
+ output.write(" info='%s'" % escape(info[1]));
output.write("/>\n")
def serialize_macro(self, output, name):
id = self.idx.macros[name]
output.write(" <macro name='%s' file='%s'>\n" % (name,
- self.modulename_file(id.header)))
- if id.info != None:
+ self.modulename_file(id.header)))
+ if id.info != None:
try:
- (args, desc) = id.info
- if desc != None and desc != "":
- output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
- self.indexString(name, desc)
- for arg in args:
- (name, desc) = arg
- if desc != None and desc != "":
- output.write(" <arg name='%s' info='%s'/>\n" % (
- name, escape(desc)))
- self.indexString(name, desc)
- else:
- output.write(" <arg name='%s'/>\n" % (name))
+ (args, desc) = id.info
+ if desc != None and desc != "":
+ output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
+ self.indexString(name, desc)
+ for arg in args:
+ (name, desc) = arg
+ if desc != None and desc != "":
+ output.write(" <arg name='%s' info='%s'/>\n" % (
+ name, escape(desc)))
+ self.indexString(name, desc)
+ else:
+ output.write(" <arg name='%s'/>\n" % (name))
except:
pass
output.write(" </macro>\n")
def serialize_typedef(self, output, name):
id = self.idx.typedefs[name]
- if id.info[0:7] == 'struct ':
- output.write(" <struct name='%s' file='%s' type='%s'" % (
- name, self.modulename_file(id.header), id.info))
- name = id.info[7:]
- if self.idx.structs.has_key(name) and ( \
- type(self.idx.structs[name].info) == type(()) or
- type(self.idx.structs[name].info) == type([])):
- output.write(">\n");
- try:
- for field in self.idx.structs[name].info:
- desc = field[2]
- self.indexString(name, desc)
- if desc == None:
- desc = ''
- else:
- desc = escape(desc)
- output.write(" <field name='%s' type='%s' info='%s'/>\n" % (field[1] , field[0], desc))
- except:
- print "Failed to serialize struct %s" % (name)
- output.write(" </struct>\n")
- else:
- output.write("/>\n");
- else :
- output.write(" <typedef name='%s' file='%s' type='%s'" % (
- name, self.modulename_file(id.header), id.info))
+ if id.info[0:7] == 'struct ':
+ output.write(" <struct name='%s' file='%s' type='%s'" % (
+ name, self.modulename_file(id.header), id.info))
+ name = id.info[7:]
+ if self.idx.structs.has_key(name) and ( \
+ type(self.idx.structs[name].info) == type(()) or
+ type(self.idx.structs[name].info) == type([])):
+ output.write(">\n");
+ try:
+ for field in self.idx.structs[name].info:
+ desc = field[2]
+ self.indexString(name, desc)
+ if desc == None:
+ desc = ''
+ else:
+ desc = escape(desc)
+ output.write(" <field name='%s' type='%s' info='%s'/>\n" % (field[1] , field[0], desc))
+ except:
+ print "Failed to serialize struct %s" % (name)
+ output.write(" </struct>\n")
+ else:
+ output.write("/>\n");
+ else :
+ output.write(" <typedef name='%s' file='%s' type='%s'" % (
+ name, self.modulename_file(id.header), id.info))
try:
- desc = id.extra
- if desc != None and desc != "":
- output.write(">\n <info><![CDATA[%s]]></info>\n" % (desc))
- output.write(" </typedef>\n")
- else:
- output.write("/>\n")
- except:
- output.write("/>\n")
+ desc = id.extra
+ if desc != None and desc != "":
+ output.write(">\n <info><![CDATA[%s]]></info>\n" % (desc))
+ output.write(" </typedef>\n")
+ else:
+ output.write("/>\n")
+ except:
+ output.write("/>\n")
def serialize_variable(self, output, name):
id = self.idx.variables[name]
- if id.info != None:
- output.write(" <variable name='%s' file='%s' type='%s'/>\n" % (
- name, self.modulename_file(id.header), id.info))
- else:
- output.write(" <variable name='%s' file='%s'/>\n" % (
- name, self.modulename_file(id.header)))
+ if id.info != None:
+ output.write(" <variable name='%s' file='%s' type='%s'/>\n" % (
+ name, self.modulename_file(id.header), id.info))
+ else:
+ output.write(" <variable name='%s' file='%s'/>\n" % (
+ name, self.modulename_file(id.header)))
def serialize_function(self, output, name):
id = self.idx.functions[name]
- if name == debugsym:
- print "=>", id
+ if name == debugsym:
+ print "=>", id
output.write(" <%s name='%s' file='%s' module='%s'>\n" % (id.type,
- name, self.modulename_file(id.header),
- self.modulename_file(id.module)))
- #
- # Processing of conditionals modified by Bill 1/1/05
- #
- if id.conditionals != None:
- apstr = ""
- for cond in id.conditionals:
- if apstr != "":
- apstr = apstr + " && "
- apstr = apstr + cond
- output.write(" <cond>%s</cond>\n"% (apstr));
- try:
- (ret, params, desc) = id.info
- output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
- self.indexString(name, desc)
- if ret[0] != None:
- if ret[0] == "void":
- output.write(" <return type='void'/>\n")
- else:
- output.write(" <return type='%s' info='%s'/>\n" % (
- ret[0], escape(ret[1])))
- self.indexString(name, ret[1])
- for param in params:
- if param[0] == 'void':
- continue
- if param[2] == None:
- output.write(" <arg name='%s' type='%s' info=''/>\n" % (param[1], param[0]))
- else:
- output.write(" <arg name='%s' type='%s' info='%s'/>\n" % (param[1], param[0], escape(param[2])))
- self.indexString(name, param[2])
- except:
- print "Failed to save function %s info: " % name, `id.info`
+ name, self.modulename_file(id.header),
+ self.modulename_file(id.module)))
+ #
+ # Processing of conditionals modified by Bill 1/1/05
+ #
+ if id.conditionals != None:
+ apstr = ""
+ for cond in id.conditionals:
+ if apstr != "":
+ apstr = apstr + " && "
+ apstr = apstr + cond
+ output.write(" <cond>%s</cond>\n"% (apstr));
+ try:
+ (ret, params, desc) = id.info
+ output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
+ self.indexString(name, desc)
+ if ret[0] != None:
+ if ret[0] == "void":
+ output.write(" <return type='void'/>\n")
+ else:
+ output.write(" <return type='%s' info='%s'/>\n" % (
+ ret[0], escape(ret[1])))
+ self.indexString(name, ret[1])
+ for param in params:
+ if param[0] == 'void':
+ continue
+ if param[2] == None:
+ output.write(" <arg name='%s' type='%s' info=''/>\n" % (param[1], param[0]))
+ else:
+ output.write(" <arg name='%s' type='%s' info='%s'/>\n" % (param[1], param[0], escape(param[2])))
+ self.indexString(name, param[2])
+ except:
+ print "Failed to save function %s info: " % name, `id.info`
output.write(" </%s>\n" % (id.type))
def serialize_exports(self, output, file):
module = self.modulename_file(file)
- output.write(" <file name='%s'>\n" % (module))
- dict = self.headers[file]
- if dict.info != None:
- for data in ('Summary', 'Description', 'Author'):
- try:
- output.write(" <%s>%s</%s>\n" % (
- string.lower(data),
- escape(dict.info[data]),
- string.lower(data)))
- except:
- print "Header %s lacks a %s description" % (module, data)
- if dict.info.has_key('Description'):
- desc = dict.info['Description']
- if string.find(desc, "DEPRECATED") != -1:
- output.write(" <deprecated/>\n")
+ output.write(" <file name='%s'>\n" % (module))
+ dict = self.headers[file]
+ if dict.info != None:
+ for data in ('Summary', 'Description', 'Author'):
+ try:
+ output.write(" <%s>%s</%s>\n" % (
+ string.lower(data),
+ escape(dict.info[data]),
+ string.lower(data)))
+ except:
+ print "Header %s lacks a %s description" % (module, data)
+ if dict.info.has_key('Description'):
+ desc = dict.info['Description']
+ if string.find(desc, "DEPRECATED") != -1:
+ output.write(" <deprecated/>\n")
ids = dict.macros.keys()
- ids.sort()
- for id in uniq(ids):
- # Macros are sometime used to masquerade other types.
- if dict.functions.has_key(id):
- continue
- if dict.variables.has_key(id):
- continue
- if dict.typedefs.has_key(id):
- continue
- if dict.structs.has_key(id):
- continue
- if dict.enums.has_key(id):
- continue
- output.write(" <exports symbol='%s' type='macro'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ # Macros are sometime used to masquerade other types.
+ if dict.functions.has_key(id):
+ continue
+ if dict.variables.has_key(id):
+ continue
+ if dict.typedefs.has_key(id):
+ continue
+ if dict.structs.has_key(id):
+ continue
+ if dict.enums.has_key(id):
+ continue
+ output.write(" <exports symbol='%s' type='macro'/>\n" % (id))
ids = dict.enums.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='enum'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='enum'/>\n" % (id))
ids = dict.typedefs.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='typedef'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='typedef'/>\n" % (id))
ids = dict.structs.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='struct'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='struct'/>\n" % (id))
ids = dict.variables.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='variable'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='variable'/>\n" % (id))
ids = dict.functions.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='function'/>\n" % (id))
- output.write(" </file>\n")
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='function'/>\n" % (id))
+ output.write(" </file>\n")
def serialize_xrefs_files(self, output):
headers = self.headers.keys()
headers.sort()
for file in headers:
- module = self.modulename_file(file)
- output.write(" <file name='%s'>\n" % (module))
- dict = self.headers[file]
- ids = uniq(dict.functions.keys() + dict.variables.keys() + \
- dict.macros.keys() + dict.typedefs.keys() + \
- dict.structs.keys() + dict.enums.keys())
- ids.sort()
- for id in ids:
- output.write(" <ref name='%s'/>\n" % (id))
- output.write(" </file>\n")
+ module = self.modulename_file(file)
+ output.write(" <file name='%s'>\n" % (module))
+ dict = self.headers[file]
+ ids = uniq(dict.functions.keys() + dict.variables.keys() + \
+ dict.macros.keys() + dict.typedefs.keys() + \
+ dict.structs.keys() + dict.enums.keys())
+ ids.sort()
+ for id in ids:
+ output.write(" <ref name='%s'/>\n" % (id))
+ output.write(" </file>\n")
pass
def serialize_xrefs_functions(self, output):
funcs = {}
- for name in self.idx.functions.keys():
- id = self.idx.functions[name]
- try:
- (ret, params, desc) = id.info
- for param in params:
- if param[0] == 'void':
- continue
- if funcs.has_key(param[0]):
- funcs[param[0]].append(name)
- else:
- funcs[param[0]] = [name]
- except:
- pass
- typ = funcs.keys()
- typ.sort()
- for type in typ:
- if type == '' or type == 'void' or type == "int" or \
- type == "char *" or type == "const char *" :
- continue
- output.write(" <type name='%s'>\n" % (type))
- ids = funcs[type]
- ids.sort()
- pid = '' # not sure why we have dups, but get rid of them!
- for id in ids:
- if id != pid:
- output.write(" <ref name='%s'/>\n" % (id))
- pid = id
- output.write(" </type>\n")
+ for name in self.idx.functions.keys():
+ id = self.idx.functions[name]
+ try:
+ (ret, params, desc) = id.info
+ for param in params:
+ if param[0] == 'void':
+ continue
+ if funcs.has_key(param[0]):
+ funcs[param[0]].append(name)
+ else:
+ funcs[param[0]] = [name]
+ except:
+ pass
+ typ = funcs.keys()
+ typ.sort()
+ for type in typ:
+ if type == '' or type == 'void' or type == "int" or \
+ type == "char *" or type == "const char *" :
+ continue
+ output.write(" <type name='%s'>\n" % (type))
+ ids = funcs[type]
+ ids.sort()
+ pid = '' # not sure why we have dups, but get rid of them!
+ for id in ids:
+ if id != pid:
+ output.write(" <ref name='%s'/>\n" % (id))
+ pid = id
+ output.write(" </type>\n")
def serialize_xrefs_constructors(self, output):
funcs = {}
- for name in self.idx.functions.keys():
- id = self.idx.functions[name]
- try:
- (ret, params, desc) = id.info
- if ret[0] == "void":
- continue
- if funcs.has_key(ret[0]):
- funcs[ret[0]].append(name)
- else:
- funcs[ret[0]] = [name]
- except:
- pass
- typ = funcs.keys()
- typ.sort()
- for type in typ:
- if type == '' or type == 'void' or type == "int" or \
- type == "char *" or type == "const char *" :
- continue
- output.write(" <type name='%s'>\n" % (type))
- ids = funcs[type]
- ids.sort()
- for id in ids:
- output.write(" <ref name='%s'/>\n" % (id))
- output.write(" </type>\n")
+ for name in self.idx.functions.keys():
+ id = self.idx.functions[name]
+ try:
+ (ret, params, desc) = id.info
+ if ret[0] == "void":
+ continue
+ if funcs.has_key(ret[0]):
+ funcs[ret[0]].append(name)
+ else:
+ funcs[ret[0]] = [name]
+ except:
+ pass
+ typ = funcs.keys()
+ typ.sort()
+ for type in typ:
+ if type == '' or type == 'void' or type == "int" or \
+ type == "char *" or type == "const char *" :
+ continue
+ output.write(" <type name='%s'>\n" % (type))
+ ids = funcs[type]
+ ids.sort()
+ for id in ids:
+ output.write(" <ref name='%s'/>\n" % (id))
+ output.write(" </type>\n")
def serialize_xrefs_alpha(self, output):
- letter = None
- ids = self.idx.identifiers.keys()
- ids.sort()
- for id in ids:
- if id[0] != letter:
- if letter != None:
- output.write(" </letter>\n")
- letter = id[0]
- output.write(" <letter name='%s'>\n" % (letter))
- output.write(" <ref name='%s'/>\n" % (id))
- if letter != None:
- output.write(" </letter>\n")
+ letter = None
+ ids = self.idx.identifiers.keys()
+ ids.sort()
+ for id in ids:
+ if id[0] != letter:
+ if letter != None:
+ output.write(" </letter>\n")
+ letter = id[0]
+ output.write(" <letter name='%s'>\n" % (letter))
+ output.write(" <ref name='%s'/>\n" % (id))
+ if letter != None:
+ output.write(" </letter>\n")
def serialize_xrefs_references(self, output):
typ = self.idx.identifiers.keys()
- typ.sort()
- for id in typ:
- idf = self.idx.identifiers[id]
- module = idf.header
- output.write(" <reference name='%s' href='%s'/>\n" % (id,
- 'html/' + self.basename + '-' +
- self.modulename_file(module) + '.html#' +
- id))
+ typ.sort()
+ for id in typ:
+ idf = self.idx.identifiers[id]
+ module = idf.header
+ output.write(" <reference name='%s' href='%s'/>\n" % (id,
+ 'html/' + self.basename + '-' +
+ self.modulename_file(module) + '.html#' +
+ id))
def serialize_xrefs_index(self, output):
index = self.xref
- typ = index.keys()
- typ.sort()
- letter = None
- count = 0
- chunk = 0
- chunks = []
- for id in typ:
- if len(index[id]) > 30:
- continue
- if id[0] != letter:
- if letter == None or count > 200:
- if letter != None:
- output.write(" </letter>\n")
- output.write(" </chunk>\n")
- count = 0
- chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
- output.write(" <chunk name='chunk%s'>\n" % (chunk))
- first_letter = id[0]
- chunk = chunk + 1
- elif letter != None:
- output.write(" </letter>\n")
- letter = id[0]
- output.write(" <letter name='%s'>\n" % (letter))
- output.write(" <word name='%s'>\n" % (id))
- tokens = index[id];
- tokens.sort()
- tok = None
- for token in tokens:
- if tok == token:
- continue
- tok = token
- output.write(" <ref name='%s'/>\n" % (token))
- count = count + 1
- output.write(" </word>\n")
- if letter != None:
- output.write(" </letter>\n")
- output.write(" </chunk>\n")
- if count != 0:
- chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
- output.write(" <chunks>\n")
- for ch in chunks:
- output.write(" <chunk name='%s' start='%s' end='%s'/>\n" % (
- ch[0], ch[1], ch[2]))
- output.write(" </chunks>\n")
+ typ = index.keys()
+ typ.sort()
+ letter = None
+ count = 0
+ chunk = 0
+ chunks = []
+ for id in typ:
+ if len(index[id]) > 30:
+ continue
+ if id[0] != letter:
+ if letter == None or count > 200:
+ if letter != None:
+ output.write(" </letter>\n")
+ output.write(" </chunk>\n")
+ count = 0
+ chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
+ output.write(" <chunk name='chunk%s'>\n" % (chunk))
+ first_letter = id[0]
+ chunk = chunk + 1
+ elif letter != None:
+ output.write(" </letter>\n")
+ letter = id[0]
+ output.write(" <letter name='%s'>\n" % (letter))
+ output.write(" <word name='%s'>\n" % (id))
+ tokens = index[id];
+ tokens.sort()
+ tok = None
+ for token in tokens:
+ if tok == token:
+ continue
+ tok = token
+ output.write(" <ref name='%s'/>\n" % (token))
+ count = count + 1
+ output.write(" </word>\n")
+ if letter != None:
+ output.write(" </letter>\n")
+ output.write(" </chunk>\n")
+ if count != 0:
+ chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
+ output.write(" <chunks>\n")
+ for ch in chunks:
+ output.write(" <chunk name='%s' start='%s' end='%s'/>\n" % (
+ ch[0], ch[1], ch[2]))
+ output.write(" </chunks>\n")
def serialize_xrefs(self, output):
- output.write(" <references>\n")
- self.serialize_xrefs_references(output)
- output.write(" </references>\n")
- output.write(" <alpha>\n")
- self.serialize_xrefs_alpha(output)
- output.write(" </alpha>\n")
- output.write(" <constructors>\n")
- self.serialize_xrefs_constructors(output)
- output.write(" </constructors>\n")
- output.write(" <functions>\n")
- self.serialize_xrefs_functions(output)
- output.write(" </functions>\n")
- output.write(" <files>\n")
- self.serialize_xrefs_files(output)
- output.write(" </files>\n")
- output.write(" <index>\n")
- self.serialize_xrefs_index(output)
- output.write(" </index>\n")
+ output.write(" <references>\n")
+ self.serialize_xrefs_references(output)
+ output.write(" </references>\n")
+ output.write(" <alpha>\n")
+ self.serialize_xrefs_alpha(output)
+ output.write(" </alpha>\n")
+ output.write(" <constructors>\n")
+ self.serialize_xrefs_constructors(output)
+ output.write(" </constructors>\n")
+ output.write(" <functions>\n")
+ self.serialize_xrefs_functions(output)
+ output.write(" </functions>\n")
+ output.write(" <files>\n")
+ self.serialize_xrefs_files(output)
+ output.write(" </files>\n")
+ output.write(" <index>\n")
+ self.serialize_xrefs_index(output)
+ output.write(" </index>\n")
def serialize(self):
filename = "%s/%s-api.xml" % (self.path, self.name)
@@ -2124,10 +2124,10 @@ def rebuild():
print "Rebuilding API description for libvirt"
builder = docBuilder("libvirt", srcdir,
["src", "src/util", "include/libvirt"],
- [])
+ [])
else:
print "rebuild() failed, unable to guess the module"
- return None
+ return None
builder.scan()
builder.analyze()
builder.serialize()
@@ -2146,4 +2146,4 @@ if __name__ == "__main__":
debug = 1
parse(sys.argv[1])
else:
- rebuild()
+ rebuild()
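The hunks above are whitespace-only: each removed line reappears with its indentation normalized from tabs to spaces, with no change in behavior. As a minimal sketch of why such a cleanup matters (Python 3 rejects inconsistent tab/space mixing with `TabError`, and `python -tt` did the same in Python 2), here is a small hypothetical checker, not part of the patch, that flags lines whose leading whitespace mixes both:

```python
# Detect lines that mix tabs and spaces in their indentation --
# the kind of inconsistency a whitespace-only re-indentation patch removes.
# (Hypothetical helper for illustration; not part of the patch itself.)

def mixed_indent_lines(source):
    """Return 1-based line numbers whose leading whitespace mixes tabs and spaces."""
    bad = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Leading whitespace is everything before the first non-space/tab char.
        indent = line[:len(line) - len(line.lstrip(" \t"))]
        if " " in indent and "\t" in indent:
            bad.append(lineno)
    return bad

sample = "def f():\n\t x = 1\n    return x\n"  # line 2 mixes a tab and a space
print(mixed_indent_lines(sample))  # → [2]
```

Running a check like this over `apibuild.py` before and after the patch would show the mixed-indentation lines disappearing, which is the whole point of the change.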
diff --git a/docs/index.py b/docs/index.py
index 17683c5..df6bd81 100755
--- a/docs/index.py
+++ b/docs/index.py
@@ -61,56 +61,56 @@ libxml2.registerErrorHandler(callback, None)
TABLES={
"symbols" : """CREATE TABLE symbols (
name varchar(255) BINARY NOT NULL,
- module varchar(255) BINARY NOT NULL,
+ module varchar(255) BINARY NOT NULL,
type varchar(25) NOT NULL,
- descr varchar(255),
- UNIQUE KEY name (name),
- KEY module (module))""",
+ descr varchar(255),
+ UNIQUE KEY name (name),
+ KEY module (module))""",
"words" : """CREATE TABLE words (
name varchar(50) BINARY NOT NULL,
- symbol varchar(255) BINARY NOT NULL,
+ symbol varchar(255) BINARY NOT NULL,
relevance int,
- KEY name (name),
- KEY symbol (symbol),
- UNIQUE KEY ID (name, symbol))""",
+ KEY name (name),
+ KEY symbol (symbol),
+ UNIQUE KEY ID (name, symbol))""",
"wordsHTML" : """CREATE TABLE wordsHTML (
name varchar(50) BINARY NOT NULL,
- resource varchar(255) BINARY NOT NULL,
- section varchar(255),
- id varchar(50),
+ resource varchar(255) BINARY NOT NULL,
+ section varchar(255),
+ id varchar(50),
relevance int,
- KEY name (name),
- KEY resource (resource),
- UNIQUE KEY ref (name, resource))""",
+ KEY name (name),
+ KEY resource (resource),
+ UNIQUE KEY ref (name, resource))""",
"wordsArchive" : """CREATE TABLE wordsArchive (
name varchar(50) BINARY NOT NULL,
- ID int(11) NOT NULL,
+ ID int(11) NOT NULL,
relevance int,
- KEY name (name),
- UNIQUE KEY ref (name, ID))""",
+ KEY name (name),
+ UNIQUE KEY ref (name, ID))""",
"pages" : """CREATE TABLE pages (
resource varchar(255) BINARY NOT NULL,
- title varchar(255) BINARY NOT NULL,
- UNIQUE KEY name (resource))""",
+ title varchar(255) BINARY NOT NULL,
+ UNIQUE KEY name (resource))""",
"archives" : """CREATE TABLE archives (
ID int(11) NOT NULL auto_increment,
resource varchar(255) BINARY NOT NULL,
- title varchar(255) BINARY NOT NULL,
- UNIQUE KEY id (ID,resource(255)),
- INDEX (ID),
- INDEX (resource))""",
+ title varchar(255) BINARY NOT NULL,
+ UNIQUE KEY id (ID,resource(255)),
+ INDEX (ID),
+ INDEX (resource))""",
"Queries" : """CREATE TABLE Queries (
ID int(11) NOT NULL auto_increment,
- Value varchar(50) NOT NULL,
- Count int(11) NOT NULL,
- UNIQUE KEY id (ID,Value(35)),
- INDEX (ID))""",
+ Value varchar(50) NOT NULL,
+ Count int(11) NOT NULL,
+ UNIQUE KEY id (ID,Value(35)),
+ INDEX (ID))""",
"AllQueries" : """CREATE TABLE AllQueries (
ID int(11) NOT NULL auto_increment,
- Value varchar(50) NOT NULL,
- Count int(11) NOT NULL,
- UNIQUE KEY id (ID,Value(35)),
- INDEX (ID))""",
+ Value varchar(50) NOT NULL,
+ Count int(11) NOT NULL,
+ UNIQUE KEY id (ID,Value(35)),
+ INDEX (ID))""",
}
#
@@ -120,9 +120,9 @@ API="libvirt-api.xml"
DB=None
#########################################################################
-# #
-# MySQL database interfaces #
-# #
+# #
+# MySQL database interfaces #
+# #
#########################################################################
def createTable(db, name):
global TABLES
@@ -141,7 +141,7 @@ def createTable(db, name):
ret = c.execute(TABLES[name])
except:
print "Failed to create table %s" % (name)
- return -1
+ return -1
return ret
def checkTables(db, verbose = 1):
@@ -152,38 +152,38 @@ def checkTables(db, verbose = 1):
c = db.cursor()
nbtables = c.execute("show tables")
if verbose:
- print "Found %d tables" % (nbtables)
+ print "Found %d tables" % (nbtables)
tables = {}
i = 0
while i < nbtables:
l = c.fetchone()
- name = l[0]
- tables[name] = {}
+ name = l[0]
+ tables[name] = {}
i = i + 1
for table in TABLES.keys():
if not tables.has_key(table):
- print "table %s missing" % (table)
- createTable(db, table)
- try:
- ret = c.execute("SELECT count(*) from %s" % table);
- row = c.fetchone()
- if verbose:
- print "Table %s contains %d records" % (table, row[0])
- except:
- print "Troubles with table %s : repairing" % (table)
- ret = c.execute("repair table %s" % table);
- print "repairing returned %d" % (ret)
- ret = c.execute("SELECT count(*) from %s" % table);
- row = c.fetchone()
- print "Table %s contains %d records" % (table, row[0])
+ print "table %s missing" % (table)
+ createTable(db, table)
+ try:
+ ret = c.execute("SELECT count(*) from %s" % table);
+ row = c.fetchone()
+ if verbose:
+ print "Table %s contains %d records" % (table, row[0])
+ except:
+ print "Troubles with table %s : repairing" % (table)
+ ret = c.execute("repair table %s" % table);
+ print "repairing returned %d" % (ret)
+ ret = c.execute("SELECT count(*) from %s" % table);
+ row = c.fetchone()
+ print "Table %s contains %d records" % (table, row[0])
if verbose:
- print "checkTables finished"
+ print "checkTables finished"
# make sure apache can access the tables read-only
try:
- ret = c.execute("GRANT SELECT ON libvir.* TO nobody@localhost")
- ret = c.execute("GRANT INSERT,SELECT,UPDATE ON libvir.Queries TO nobody@localhost")
+ ret = c.execute("GRANT SELECT ON libvir.* TO nobody@localhost")
+ ret = c.execute("GRANT INSERT,SELECT,UPDATE ON libvir.Queries TO nobody@localhost")
except:
pass
return 0
@@ -193,10 +193,10 @@ def openMySQL(db="libvir", passwd=None, verbose = 1):
if passwd == None:
try:
- passwd = os.environ["MySQL_PASS"]
- except:
- print "No password available, set environment MySQL_PASS"
- sys.exit(1)
+ passwd = os.environ["MySQL_PASS"]
+ except:
+ print "No password available, set environment MySQL_PASS"
+ sys.exit(1)
DB = MySQLdb.connect(passwd=passwd, db=db)
if DB == None:
@@ -218,19 +218,19 @@ def updateWord(name, symbol, relevance):
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO words (name, symbol, relevance) VALUES ('%s','%s', %d)""" %
- (name, symbol, relevance))
+ (name, symbol, relevance))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE words SET relevance = %d where name = '%s' and symbol = '%s'""" %
- (relevance, name, symbol))
- except:
- print "Update word (%s, %s, %s) failed command" % (name, symbol, relevance)
- print "UPDATE words SET relevance = %d where name = '%s' and symbol = '%s'" % (relevance, name, symbol)
- print sys.exc_type, sys.exc_value
- return -1
+ (relevance, name, symbol))
+ except:
+ print "Update word (%s, %s, %s) failed command" % (name, symbol, relevance)
+ print "UPDATE words SET relevance = %d where name = '%s' and symbol = '%s'" % (relevance, name, symbol)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -250,28 +250,28 @@ def updateSymbol(name, module, type, desc):
return -1
try:
- desc = string.replace(desc, "'", " ")
- l = string.split(desc, ".")
- desc = l[0]
- desc = desc[0:99]
+ desc = string.replace(desc, "'", " ")
+ l = string.split(desc, ".")
+ desc = l[0]
+ desc = desc[0:99]
except:
desc = ""
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO symbols (name, module, type, descr) VALUES ('%s','%s', '%s', '%s')""" %
(name, module, type, desc))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE symbols SET module='%s', type='%s', descr='%s' where name='%s'""" %
(module, type, desc, name))
except:
- print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
- print """UPDATE symbols SET module='%s', type='%s', descr='%s' where name='%s'""" % (module, type, desc, name)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
+ print """UPDATE symbols SET module='%s', type='%s', descr='%s' where name='%s'""" % (module, type, desc, name)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -308,19 +308,19 @@ def addPage(resource, title):
c = DB.cursor()
try:
- ret = c.execute(
- """INSERT INTO pages (resource, title) VALUES ('%s','%s')""" %
+ ret = c.execute(
+ """INSERT INTO pages (resource, title) VALUES ('%s','%s')""" %
(resource, title))
except:
try:
- ret = c.execute(
- """UPDATE pages SET title='%s' WHERE resource='%s'""" %
+ ret = c.execute(
+ """UPDATE pages SET title='%s' WHERE resource='%s'""" %
(title, resource))
except:
- print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
- print """UPDATE pages SET title='%s' WHERE resource='%s'""" % (title, resource)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
+ print """UPDATE pages SET title='%s' WHERE resource='%s'""" % (title, resource)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -340,27 +340,27 @@ def updateWordHTML(name, resource, desc, id, relevance):
if desc == None:
desc = ""
else:
- try:
- desc = string.replace(desc, "'", " ")
- desc = desc[0:99]
- except:
- desc = ""
+ try:
+ desc = string.replace(desc, "'", " ")
+ desc = desc[0:99]
+ except:
+ desc = ""
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO wordsHTML (name, resource, section, id, relevance) VALUES ('%s','%s', '%s', '%s', '%d')""" %
(name, resource, desc, id, relevance))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE wordsHTML SET section='%s', id='%s', relevance='%d' where name='%s' and resource='%s'""" %
(desc, id, relevance, name, resource))
except:
- print "Update symbol (%s, %s, %d) failed command" % (name, resource, relevance)
- print """UPDATE wordsHTML SET section='%s', id='%s', relevance='%d' where name='%s' and resource='%s'""" % (desc, id, relevance, name, resource)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update symbol (%s, %s, %d) failed command" % (name, resource, relevance)
+ print """UPDATE wordsHTML SET section='%s', id='%s', relevance='%d' where name='%s' and resource='%s'""" % (desc, id, relevance, name, resource)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -376,13 +376,13 @@ def checkXMLMsgArchive(url):
c = DB.cursor()
try:
- ret = c.execute(
- """SELECT ID FROM archives WHERE resource='%s'""" % (url))
- row = c.fetchone()
- if row == None:
- return -1
+ ret = c.execute(
+ """SELECT ID FROM archives WHERE resource='%s'""" % (url))
+ row = c.fetchone()
+ if row == None:
+ return -1
except:
- return -1
+ return -1
return row[0]
@@ -398,22 +398,22 @@ def addXMLMsgArchive(url, title):
if title == None:
title = ""
else:
- title = string.replace(title, "'", " ")
- title = title[0:99]
+ title = string.replace(title, "'", " ")
+ title = title[0:99]
c = DB.cursor()
try:
cmd = """INSERT INTO archives (resource, title) VALUES ('%s','%s')""" % (url, title)
ret = c.execute(cmd)
- cmd = """SELECT ID FROM archives WHERE resource='%s'""" % (url)
+ cmd = """SELECT ID FROM archives WHERE resource='%s'""" % (url)
ret = c.execute(cmd)
- row = c.fetchone()
- if row == None:
- print "addXMLMsgArchive failed to get the ID: %s" % (url)
- return -1
+ row = c.fetchone()
+ if row == None:
+ print "addXMLMsgArchive failed to get the ID: %s" % (url)
+ return -1
except:
print "addXMLMsgArchive failed command: %s" % (cmd)
- return -1
+ return -1
return((int)(row[0]))
@@ -431,26 +431,26 @@ def updateWordArchive(name, id, relevance):
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO wordsArchive (name, id, relevance) VALUES ('%s', '%d', '%d')""" %
(name, id, relevance))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE wordsArchive SET relevance='%d' where name='%s' and ID='%d'""" %
(relevance, name, id))
except:
- print "Update word archive (%s, %d, %d) failed command" % (name, id, relevance)
- print """UPDATE wordsArchive SET relevance='%d' where name='%s' and ID='%d'""" % (relevance, name, id)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update word archive (%s, %d, %d) failed command" % (name, id, relevance)
+ print """UPDATE wordsArchive SET relevance='%d' where name='%s' and ID='%d'""" % (relevance, name, id)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
#########################################################################
-# #
-# Word dictionary and analysis routines #
-# #
+# #
+# Word dictionary and analysis routines #
+# #
#########################################################################
#
@@ -516,18 +516,18 @@ def splitIdentifier(str):
ret = []
while str != "":
cur = string.lower(str[0])
- str = str[1:]
- if ((cur < 'a') or (cur > 'z')):
- continue
- while (str != "") and (str[0] >= 'A') and (str[0] <= 'Z'):
- cur = cur + string.lower(str[0])
- str = str[1:]
- while (str != "") and (str[0] >= 'a') and (str[0] <= 'z'):
- cur = cur + str[0]
- str = str[1:]
- while (str != "") and (str[0] >= '0') and (str[0] <= '9'):
- str = str[1:]
- ret.append(cur)
+ str = str[1:]
+ if ((cur < 'a') or (cur > 'z')):
+ continue
+ while (str != "") and (str[0] >= 'A') and (str[0] <= 'Z'):
+ cur = cur + string.lower(str[0])
+ str = str[1:]
+ while (str != "") and (str[0] >= 'a') and (str[0] <= 'z'):
+ cur = cur + str[0]
+ str = str[1:]
+ while (str != "") and (str[0] >= '0') and (str[0] <= '9'):
+ str = str[1:]
+ ret.append(cur)
return ret
def addWord(word, module, symbol, relevance):
@@ -544,15 +544,15 @@ def addWord(word, module, symbol, relevance):
if wordsDict.has_key(word):
d = wordsDict[word]
- if d == None:
- return 0
- if len(d) > 500:
- wordsDict[word] = None
- return 0
- try:
- relevance = relevance + d[(module, symbol)]
- except:
- pass
+ if d == None:
+ return 0
+ if len(d) > 500:
+ wordsDict[word] = None
+ return 0
+ try:
+ relevance = relevance + d[(module, symbol)]
+ except:
+ pass
else:
wordsDict[word] = {}
wordsDict[word][(module, symbol)] = relevance
@@ -565,8 +565,8 @@ def addString(str, module, symbol, relevance):
str = cleanupWordsString(str)
l = string.split(str)
for word in l:
- if len(word) > 2:
- ret = ret + addWord(word, module, symbol, 5)
+ if len(word) > 2:
+ ret = ret + addWord(word, module, symbol, 5)
return ret
@@ -586,18 +586,18 @@ def addWordHTML(word, resource, id, section, relevance):
if wordsDictHTML.has_key(word):
d = wordsDictHTML[word]
- if d == None:
- print "skipped %s" % (word)
- return 0
- try:
- (r,i,s) = d[resource]
- if i != None:
- id = i
- if s != None:
- section = s
- relevance = relevance + r
- except:
- pass
+ if d == None:
+ print "skipped %s" % (word)
+ return 0
+ try:
+ (r,i,s) = d[resource]
+ if i != None:
+ id = i
+ if s != None:
+ section = s
+ relevance = relevance + r
+ except:
+ pass
else:
wordsDictHTML[word] = {}
d = wordsDictHTML[word];
@@ -611,15 +611,15 @@ def addStringHTML(str, resource, id, section, relevance):
str = cleanupWordsString(str)
l = string.split(str)
for word in l:
- if len(word) > 2:
- try:
- r = addWordHTML(word, resource, id, section, relevance)
- if r < 0:
- print "addWordHTML failed: %s %s" % (word, resource)
- ret = ret + r
- except:
- print "addWordHTML failed: %s %s %d" % (word, resource, relevance)
- print sys.exc_type, sys.exc_value
+ if len(word) > 2:
+ try:
+ r = addWordHTML(word, resource, id, section, relevance)
+ if r < 0:
+ print "addWordHTML failed: %s %s" % (word, resource)
+ ret = ret + r
+ except:
+ print "addWordHTML failed: %s %s %d" % (word, resource, relevance)
+ print sys.exc_type, sys.exc_value
return ret
@@ -637,14 +637,14 @@ def addWordArchive(word, id, relevance):
if wordsDictArchive.has_key(word):
d = wordsDictArchive[word]
- if d == None:
- print "skipped %s" % (word)
- return 0
- try:
- r = d[id]
- relevance = relevance + r
- except:
- pass
+ if d == None:
+ print "skipped %s" % (word)
+ return 0
+ try:
+ r = d[id]
+ relevance = relevance + r
+ except:
+ pass
else:
wordsDictArchive[word] = {}
d = wordsDictArchive[word];
@@ -659,22 +659,22 @@ def addStringArchive(str, id, relevance):
l = string.split(str)
for word in l:
i = len(word)
- if i > 2:
- try:
- r = addWordArchive(word, id, relevance)
- if r < 0:
- print "addWordArchive failed: %s %s" % (word, id)
- else:
- ret = ret + r
- except:
- print "addWordArchive failed: %s %s %d" % (word, id, relevance)
- print sys.exc_type, sys.exc_value
+ if i > 2:
+ try:
+ r = addWordArchive(word, id, relevance)
+ if r < 0:
+ print "addWordArchive failed: %s %s" % (word, id)
+ else:
+ ret = ret + r
+ except:
+ print "addWordArchive failed: %s %s %d" % (word, id, relevance)
+ print sys.exc_type, sys.exc_value
return ret
#########################################################################
-# #
-# XML API description analysis #
-# #
+# #
+# XML API description analysis #
+# #
#########################################################################
def loadAPI(filename):
@@ -690,7 +690,7 @@ def foundExport(file, symbol):
addFunction(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
def analyzeAPIFile(top):
@@ -699,12 +699,12 @@ def analyzeAPIFile(top):
cur = top.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "exports":
- count = count + foundExport(name, cur.prop("symbol"))
- else:
- print "unexpected element %s in API doc <file name='%s'>" % (name)
+ cur = cur.next
+ continue
+ if cur.name == "exports":
+ count = count + foundExport(name, cur.prop("symbol"))
+ else:
+ print "unexpected element %s in API doc <file name='%s'>" % (name)
cur = cur.next
return count
@@ -714,12 +714,12 @@ def analyzeAPIFiles(top):
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "file":
- count = count + analyzeAPIFile(cur)
- else:
- print "unexpected element %s in API doc <files>" % (cur.name)
+ cur = cur.next
+ continue
+ if cur.name == "file":
+ count = count + analyzeAPIFile(cur)
+ else:
+ print "unexpected element %s in API doc <files>" % (cur.name)
cur = cur.next
return count
@@ -734,7 +734,7 @@ def analyzeAPIEnum(top):
addEnum(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
@@ -749,7 +749,7 @@ def analyzeAPIConst(top):
addConst(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
@@ -764,7 +764,7 @@ def analyzeAPIType(top):
addType(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
def analyzeAPIFunctype(top):
@@ -778,7 +778,7 @@ def analyzeAPIFunctype(top):
addFunctype(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
def analyzeAPIStruct(top):
@@ -792,16 +792,16 @@ def analyzeAPIStruct(top):
addStruct(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
info = top.prop("info")
if info != None:
- info = string.replace(info, "'", " ")
- info = string.strip(info)
- l = string.split(info)
- for word in l:
- if len(word) > 2:
- addWord(word, file, symbol, 5)
+ info = string.replace(info, "'", " ")
+ info = string.strip(info)
+ l = string.split(info)
+ for word in l:
+ if len(word) > 2:
+ addWord(word, file, symbol, 5)
return 1
def analyzeAPIMacro(top):
@@ -818,19 +818,19 @@ def analyzeAPIMacro(top):
cur = top.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "info":
- info = cur.content
- break
+ cur = cur.next
+ continue
+ if cur.name == "info":
+ info = cur.content
+ break
cur = cur.next
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
if info == None:
- addMacro(symbol, file)
+ addMacro(symbol, file)
print "Macro %s description has no <info>" % (symbol)
return 0
@@ -839,8 +839,8 @@ def analyzeAPIMacro(top):
addMacro(symbol, file, info)
l = string.split(info)
for word in l:
- if len(word) > 2:
- addWord(word, file, symbol, 5)
+ if len(word) > 2:
+ addWord(word, file, symbol, 5)
return 1
def analyzeAPIFunction(top):
@@ -857,40 +857,40 @@ def analyzeAPIFunction(top):
cur = top.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "info":
- info = cur.content
- elif cur.name == "return":
- rinfo = cur.prop("info")
- if rinfo != None:
- rinfo = string.replace(rinfo, "'", " ")
- rinfo = string.strip(rinfo)
- addString(rinfo, file, symbol, 7)
- elif cur.name == "arg":
- ainfo = cur.prop("info")
- if ainfo != None:
- ainfo = string.replace(ainfo, "'", " ")
- ainfo = string.strip(ainfo)
- addString(ainfo, file, symbol, 5)
- name = cur.prop("name")
- if name != None:
- name = string.replace(name, "'", " ")
- name = string.strip(name)
- addWord(name, file, symbol, 7)
+ cur = cur.next
+ continue
+ if cur.name == "info":
+ info = cur.content
+ elif cur.name == "return":
+ rinfo = cur.prop("info")
+ if rinfo != None:
+ rinfo = string.replace(rinfo, "'", " ")
+ rinfo = string.strip(rinfo)
+ addString(rinfo, file, symbol, 7)
+ elif cur.name == "arg":
+ ainfo = cur.prop("info")
+ if ainfo != None:
+ ainfo = string.replace(ainfo, "'", " ")
+ ainfo = string.strip(ainfo)
+ addString(ainfo, file, symbol, 5)
+ name = cur.prop("name")
+ if name != None:
+ name = string.replace(name, "'", " ")
+ name = string.strip(name)
+ addWord(name, file, symbol, 7)
cur = cur.next
if info == None:
print "Function %s description has no <info>" % (symbol)
- addFunction(symbol, file, "")
+ addFunction(symbol, file, "")
else:
info = string.replace(info, "'", " ")
- info = string.strip(info)
- addFunction(symbol, file, info)
+ info = string.strip(info)
+ addFunction(symbol, file, info)
addString(info, file, symbol, 5)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
@@ -900,24 +900,24 @@ def analyzeAPISymbols(top):
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "macro":
- count = count + analyzeAPIMacro(cur)
- elif cur.name == "function":
- count = count + analyzeAPIFunction(cur)
- elif cur.name == "const":
- count = count + analyzeAPIConst(cur)
- elif cur.name == "typedef":
- count = count + analyzeAPIType(cur)
- elif cur.name == "struct":
- count = count + analyzeAPIStruct(cur)
- elif cur.name == "enum":
- count = count + analyzeAPIEnum(cur)
- elif cur.name == "functype":
- count = count + analyzeAPIFunctype(cur)
- else:
- print "unexpected element %s in API doc <files>" % (cur.name)
+ cur = cur.next
+ continue
+ if cur.name == "macro":
+ count = count + analyzeAPIMacro(cur)
+ elif cur.name == "function":
+ count = count + analyzeAPIFunction(cur)
+ elif cur.name == "const":
+ count = count + analyzeAPIConst(cur)
+ elif cur.name == "typedef":
+ count = count + analyzeAPIType(cur)
+ elif cur.name == "struct":
+ count = count + analyzeAPIStruct(cur)
+ elif cur.name == "enum":
+ count = count + analyzeAPIEnum(cur)
+ elif cur.name == "functype":
+ count = count + analyzeAPIFunctype(cur)
+ else:
+ print "unexpected element %s in API doc <files>" % (cur.name)
cur = cur.next
return count
@@ -932,22 +932,22 @@ def analyzeAPI(doc):
cur = root.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "files":
- pass
-# count = count + analyzeAPIFiles(cur)
- elif cur.name == "symbols":
- count = count + analyzeAPISymbols(cur)
- else:
- print "unexpected element %s in API doc" % (cur.name)
+ cur = cur.next
+ continue
+ if cur.name == "files":
+ pass
+# count = count + analyzeAPIFiles(cur)
+ elif cur.name == "symbols":
+ count = count + analyzeAPISymbols(cur)
+ else:
+ print "unexpected element %s in API doc" % (cur.name)
cur = cur.next
return count
#########################################################################
-# #
-# Web pages parsing and analysis #
-# #
+# #
+# Web pages parsing and analysis #
+# #
#########################################################################
import glob
@@ -955,8 +955,8 @@ import glob
def analyzeHTMLText(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -964,8 +964,8 @@ def analyzeHTMLText(doc, resource, p, section, id):
def analyzeHTMLPara(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -973,8 +973,8 @@ def analyzeHTMLPara(doc, resource, p, section, id):
def analyzeHTMLPre(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -982,8 +982,8 @@ def analyzeHTMLPre(doc, resource, p, section, id):
def analyzeHTML(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -992,36 +992,36 @@ def analyzeHTML(doc, resource):
para = 0;
ctxt = doc.xpathNewContext()
try:
- res = ctxt.xpathEval("//head/title")
- title = res[0].content
+ res = ctxt.xpathEval("//head/title")
+ title = res[0].content
except:
title = "Page %s" % (resource)
addPage(resource, title)
try:
- items = ctxt.xpathEval("//h1 | //h2 | //h3 | //text()")
- section = title
- id = ""
- for item in items:
- if item.name == 'h1' or item.name == 'h2' or item.name == 'h3':
- section = item.content
- if item.prop("id"):
- id = item.prop("id")
- elif item.prop("name"):
- id = item.prop("name")
- elif item.type == 'text':
- analyzeHTMLText(doc, resource, item, section, id)
- para = para + 1
- elif item.name == 'p':
- analyzeHTMLPara(doc, resource, item, section, id)
- para = para + 1
- elif item.name == 'pre':
- analyzeHTMLPre(doc, resource, item, section, id)
- para = para + 1
- else:
- print "Page %s, unexpected %s element" % (resource, item.name)
+ items = ctxt.xpathEval("//h1 | //h2 | //h3 | //text()")
+ section = title
+ id = ""
+ for item in items:
+ if item.name == 'h1' or item.name == 'h2' or item.name == 'h3':
+ section = item.content
+ if item.prop("id"):
+ id = item.prop("id")
+ elif item.prop("name"):
+ id = item.prop("name")
+ elif item.type == 'text':
+ analyzeHTMLText(doc, resource, item, section, id)
+ para = para + 1
+ elif item.name == 'p':
+ analyzeHTMLPara(doc, resource, item, section, id)
+ para = para + 1
+ elif item.name == 'pre':
+ analyzeHTMLPre(doc, resource, item, section, id)
+ para = para + 1
+ else:
+ print "Page %s, unexpected %s element" % (resource, item.name)
except:
print "Page %s: problem analyzing" % (resource)
- print sys.exc_type, sys.exc_value
+ print sys.exc_type, sys.exc_value
return para
@@ -1029,35 +1029,35 @@ def analyzeHTMLPages():
ret = 0
HTMLfiles = glob.glob("*.html") + glob.glob("tutorial/*.html") + \
glob.glob("CIM/*.html") + glob.glob("ocaml/*.html") + \
- glob.glob("ruby/*.html")
+ glob.glob("ruby/*.html")
for html in HTMLfiles:
- if html[0:3] == "API":
- continue
- if html == "xml.html":
- continue
- try:
- doc = libxml2.parseFile(html)
- except:
- doc = libxml2.htmlParseFile(html, None)
- try:
- res = analyzeHTML(doc, html)
- print "Parsed %s : %d paragraphs" % (html, res)
- ret = ret + 1
- except:
- print "could not parse %s" % (html)
+ if html[0:3] == "API":
+ continue
+ if html == "xml.html":
+ continue
+ try:
+ doc = libxml2.parseFile(html)
+ except:
+ doc = libxml2.htmlParseFile(html, None)
+ try:
+ res = analyzeHTML(doc, html)
+ print "Parsed %s : %d paragraphs" % (html, res)
+ ret = ret + 1
+ except:
+ print "could not parse %s" % (html)
return ret
#########################################################################
-# #
-# Mail archives parsing and analysis #
-# #
+# #
+# Mail archives parsing and analysis #
+# #
#########################################################################
import time
def getXMLDateArchive(t = None):
if t == None:
- t = time.time()
+ t = time.time()
T = time.gmtime(t)
month = time.strftime("%B", T)
year = T[0]
@@ -1073,9 +1073,9 @@ def scanXMLMsgArchive(url, title, force = 0):
return 0
if ID == -1:
- ID = addXMLMsgArchive(url, title)
- if ID == -1:
- return 0
+ ID = addXMLMsgArchive(url, title)
+ if ID == -1:
+ return 0
try:
print "Loading %s" % (url)
@@ -1084,7 +1084,7 @@ def scanXMLMsgArchive(url, title, force = 0):
doc = None
if doc == None:
print "Failed to parse %s" % (url)
- return 0
+ return 0
addStringArchive(title, ID, 20)
ctxt = doc.xpathNewContext()
@@ -1102,41 +1102,41 @@ def scanXMLDateArchive(t = None, force = 0):
url = getXMLDateArchive(t)
print "loading %s" % (url)
try:
- doc = libxml2.htmlParseFile(url, None);
+ doc = libxml2.htmlParseFile(url, None);
except:
doc = None
if doc == None:
print "Failed to parse %s" % (url)
- return -1
+ return -1
ctxt = doc.xpathNewContext()
anchors = ctxt.xpathEval("//a[@href]")
links = 0
newmsg = 0
for anchor in anchors:
- href = anchor.prop("href")
- if href == None or href[0:3] != "msg":
- continue
+ href = anchor.prop("href")
+ if href == None or href[0:3] != "msg":
+ continue
try:
- links = links + 1
+ links = links + 1
- msg = libxml2.buildURI(href, url)
- title = anchor.content
- if title != None and title[0:4] == 'Re: ':
- title = title[4:]
- if title != None and title[0:6] == '[xml] ':
- title = title[6:]
- newmsg = newmsg + scanXMLMsgArchive(msg, title, force)
+ msg = libxml2.buildURI(href, url)
+ title = anchor.content
+ if title != None and title[0:4] == 'Re: ':
+ title = title[4:]
+ if title != None and title[0:6] == '[xml] ':
+ title = title[6:]
+ newmsg = newmsg + scanXMLMsgArchive(msg, title, force)
- except:
- pass
+ except:
+ pass
return newmsg
#########################################################################
-# #
-# Main code: open the DB, the API XML and analyze it #
-# #
+# #
+# Main code: open the DB, the API XML and analyze it #
+# #
#########################################################################
def analyzeArchives(t = None, force = 0):
global wordsDictArchive
@@ -1147,14 +1147,14 @@ def analyzeArchives(t = None, force = 0):
i = 0
skipped = 0
for word in wordsDictArchive.keys():
- refs = wordsDictArchive[word]
- if refs == None:
- skipped = skipped + 1
- continue;
- for id in refs.keys():
- relevance = refs[id]
- updateWordArchive(word, id, relevance)
- i = i + 1
+ refs = wordsDictArchive[word]
+ if refs == None:
+ skipped = skipped + 1
+ continue;
+ for id in refs.keys():
+ relevance = refs[id]
+ updateWordArchive(word, id, relevance)
+ i = i + 1
print "Found %d associations in HTML pages" % (i)
@@ -1167,14 +1167,14 @@ def analyzeHTMLTop():
i = 0
skipped = 0
for word in wordsDictHTML.keys():
- refs = wordsDictHTML[word]
- if refs == None:
- skipped = skipped + 1
- continue;
- for resource in refs.keys():
- (relevance, id, section) = refs[resource]
- updateWordHTML(word, resource, section, id, relevance)
- i = i + 1
+ refs = wordsDictHTML[word]
+ if refs == None:
+ skipped = skipped + 1
+ continue;
+ for resource in refs.keys():
+ (relevance, id, section) = refs[resource]
+ updateWordHTML(word, resource, section, id, relevance)
+ i = i + 1
print "Found %d associations in HTML pages" % (i)
@@ -1183,26 +1183,26 @@ def analyzeAPITop():
global API
try:
- doc = loadAPI(API)
- ret = analyzeAPI(doc)
- print "Analyzed %d blocs" % (ret)
- doc.freeDoc()
+ doc = loadAPI(API)
+ ret = analyzeAPI(doc)
+ print "Analyzed %d blocs" % (ret)
+ doc.freeDoc()
except:
- print "Failed to parse and analyze %s" % (API)
- print sys.exc_type, sys.exc_value
- sys.exit(1)
+ print "Failed to parse and analyze %s" % (API)
+ print sys.exc_type, sys.exc_value
+ sys.exit(1)
print "Indexed %d words" % (len(wordsDict))
i = 0
skipped = 0
for word in wordsDict.keys():
- refs = wordsDict[word]
- if refs == None:
- skipped = skipped + 1
- continue;
- for (module, symbol) in refs.keys():
- updateWord(word, symbol, refs[(module, symbol)])
- i = i + 1
+ refs = wordsDict[word]
+ if refs == None:
+ skipped = skipped + 1
+ continue;
+ for (module, symbol) in refs.keys():
+ updateWord(word, symbol, refs[(module, symbol)])
+ i = i + 1
print "Found %d associations, skipped %d words" % (i, skipped)
@@ -1212,53 +1212,53 @@ def usage():
def main():
try:
- openMySQL()
+ openMySQL()
except:
- print "Failed to open the database"
- print sys.exc_type, sys.exc_value
- sys.exit(1)
+ print "Failed to open the database"
+ print sys.exc_type, sys.exc_value
+ sys.exit(1)
args = sys.argv[1:]
force = 0
if args:
i = 0
- while i < len(args):
- if args[i] == '--force':
- force = 1
- elif args[i] == '--archive':
- analyzeArchives(None, force)
- elif args[i] == '--archive-year':
- i = i + 1;
- year = args[i]
- months = ["January" , "February", "March", "April", "May",
- "June", "July", "August", "September", "October",
- "November", "December"];
- for month in months:
- try:
- str = "%s-%s" % (year, month)
- T = time.strptime(str, "%Y-%B")
- t = time.mktime(T) + 3600 * 24 * 10;
- analyzeArchives(t, force)
- except:
- print "Failed to index month archive:"
- print sys.exc_type, sys.exc_value
- elif args[i] == '--archive-month':
- i = i + 1;
- month = args[i]
- try:
- T = time.strptime(month, "%Y-%B")
- t = time.mktime(T) + 3600 * 24 * 10;
- analyzeArchives(t, force)
- except:
- print "Failed to index month archive:"
- print sys.exc_type, sys.exc_value
- elif args[i] == '--API':
- analyzeAPITop()
- elif args[i] == '--docs':
- analyzeHTMLTop()
- else:
- usage()
- i = i + 1
+ while i < len(args):
+ if args[i] == '--force':
+ force = 1
+ elif args[i] == '--archive':
+ analyzeArchives(None, force)
+ elif args[i] == '--archive-year':
+ i = i + 1;
+ year = args[i]
+ months = ["January" , "February", "March", "April", "May",
+ "June", "July", "August", "September", "October",
+ "November", "December"];
+ for month in months:
+ try:
+ str = "%s-%s" % (year, month)
+ T = time.strptime(str, "%Y-%B")
+ t = time.mktime(T) + 3600 * 24 * 10;
+ analyzeArchives(t, force)
+ except:
+ print "Failed to index month archive:"
+ print sys.exc_type, sys.exc_value
+ elif args[i] == '--archive-month':
+ i = i + 1;
+ month = args[i]
+ try:
+ T = time.strptime(month, "%Y-%B")
+ t = time.mktime(T) + 3600 * 24 * 10;
+ analyzeArchives(t, force)
+ except:
+ print "Failed to index month archive:"
+ print sys.exc_type, sys.exc_value
+ elif args[i] == '--API':
+ analyzeAPITop()
+ elif args[i] == '--docs':
+ analyzeHTMLTop()
+ else:
+ usage()
+ i = i + 1
else:
usage()
diff --git a/python/generator.py b/python/generator.py
index 15751bd..ee9dfe4 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -90,8 +90,8 @@ class docParser(xml.sax.handler.ContentHandler):
self.function_arg_info = None
if attrs.has_key('name'):
self.function_arg_name = attrs['name']
- if self.function_arg_name == 'from':
- self.function_arg_name = 'frm'
+ if self.function_arg_name == 'from':
+ self.function_arg_name = 'frm'
if attrs.has_key('type'):
self.function_arg_type = attrs['type']
if attrs.has_key('info'):
@@ -409,8 +409,8 @@ def print_function_wrapper(name, output, export, include):
if name in skip_function:
return 0
if name in skip_impl:
- # Don't delete the function entry in the caller.
- return 1
+ # Don't delete the function entry in the caller.
+ return 1
c_call = "";
format=""
@@ -426,8 +426,8 @@ def print_function_wrapper(name, output, export, include):
c_args = c_args + " %s %s;\n" % (arg[1], arg[0])
if py_types.has_key(arg[1]):
(f, t, n, c) = py_types[arg[1]]
- if (f == 'z') and (name in foreign_encoding_args) and (num_bufs == 0):
- f = 't#'
+ if (f == 'z') and (name in foreign_encoding_args) and (num_bufs == 0):
+ f = 't#'
if f != None:
format = format + f
if t != None:
@@ -438,10 +438,10 @@ def print_function_wrapper(name, output, export, include):
arg[1], t, arg[0]);
else:
format_args = format_args + ", &%s" % (arg[0])
- if f == 't#':
- format_args = format_args + ", &py_buffsize%d" % num_bufs
- c_args = c_args + " int py_buffsize%d;\n" % num_bufs
- num_bufs = num_bufs + 1
+ if f == 't#':
+ format_args = format_args + ", &py_buffsize%d" % num_bufs
+ c_args = c_args + " int py_buffsize%d;\n" % num_bufs
+ num_bufs = num_bufs + 1
if c_call != "":
c_call = c_call + ", ";
c_call = c_call + "%s" % (arg[0])
@@ -459,14 +459,14 @@ def print_function_wrapper(name, output, export, include):
if ret[0] == 'void':
if file == "python_accessor":
- if args[1][1] == "char *":
- c_call = "\n free(%s->%s);\n" % (
- args[0][0], args[1][0], args[0][0], args[1][0])
- c_call = c_call + " %s->%s = (%s)strdup((const xmlChar *)%s);\n" % (args[0][0],
- args[1][0], args[1][1], args[1][0])
- else:
- c_call = "\n %s->%s = %s;\n" % (args[0][0], args[1][0],
- args[1][0])
+ if args[1][1] == "char *":
+ c_call = "\n free(%s->%s);\n" % (
+ args[0][0], args[1][0], args[0][0], args[1][0])
+ c_call = c_call + " %s->%s = (%s)strdup((const xmlChar *)%s);\n" % (args[0][0],
+ args[1][0], args[1][1], args[1][0])
+ else:
+ c_call = "\n %s->%s = %s;\n" % (args[0][0], args[1][0],
+ args[1][0])
else:
c_call = "\n %s(%s);\n" % (name, c_call);
ret_convert = " Py_INCREF(Py_None);\n return(Py_None);\n"
@@ -508,24 +508,24 @@ def print_function_wrapper(name, output, export, include):
if file == "python":
# Those have been manually generated
- if cond != None and cond != "":
- include.write("#endif\n");
- export.write("#endif\n");
- output.write("#endif\n");
+ if cond != None and cond != "":
+ include.write("#endif\n");
+ export.write("#endif\n");
+ output.write("#endif\n");
return 1
if file == "python_accessor" and ret[0] != "void" and ret[2] is None:
# Those have been manually generated
- if cond != None and cond != "":
- include.write("#endif\n");
- export.write("#endif\n");
- output.write("#endif\n");
+ if cond != None and cond != "":
+ include.write("#endif\n");
+ export.write("#endif\n");
+ output.write("#endif\n");
return 1
output.write("PyObject *\n")
output.write("libvirt_%s(PyObject *self ATTRIBUTE_UNUSED," % (name))
output.write(" PyObject *args")
if format == "":
- output.write(" ATTRIBUTE_UNUSED")
+ output.write(" ATTRIBUTE_UNUSED")
output.write(") {\n")
if ret[0] != 'void':
output.write(" PyObject *py_retval;\n")
@@ -557,38 +557,38 @@ def buildStubs():
global unknown_types
try:
- f = open(os.path.join(srcPref,"libvirt-api.xml"))
- data = f.read()
- (parser, target) = getparser()
- parser.feed(data)
- parser.close()
+ f = open(os.path.join(srcPref,"libvirt-api.xml"))
+ data = f.read()
+ (parser, target) = getparser()
+ parser.feed(data)
+ parser.close()
except IOError, msg:
- try:
- f = open(os.path.join(srcPref,"..","docs","libvirt-api.xml"))
- data = f.read()
- (parser, target) = getparser()
- parser.feed(data)
- parser.close()
- except IOError, msg:
- print file, ":", msg
- sys.exit(1)
+ try:
+ f = open(os.path.join(srcPref,"..","docs","libvirt-api.xml"))
+ data = f.read()
+ (parser, target) = getparser()
+ parser.feed(data)
+ parser.close()
+ except IOError, msg:
+ print file, ":", msg
+ sys.exit(1)
n = len(functions.keys())
print "Found %d functions in libvirt-api.xml" % (n)
py_types['pythonObject'] = ('O', "pythonObject", "pythonObject", "pythonObject")
try:
- f = open(os.path.join(srcPref,"libvirt-override-api.xml"))
- data = f.read()
- (parser, target) = getparser()
- parser.feed(data)
- parser.close()
+ f = open(os.path.join(srcPref,"libvirt-override-api.xml"))
+ data = f.read()
+ (parser, target) = getparser()
+ parser.feed(data)
+ parser.close()
except IOError, msg:
- print file, ":", msg
+ print file, ":", msg
print "Found %d functions in libvirt-override-api.xml" % (
- len(functions.keys()) - n)
+ len(functions.keys()) - n)
nb_wrap = 0
failed = 0
skipped = 0
@@ -604,17 +604,17 @@ def buildStubs():
wrapper.write("#include \"typewrappers.h\"\n")
wrapper.write("#include \"libvirt.h\"\n\n")
for function in functions.keys():
- ret = print_function_wrapper(function, wrapper, export, include)
- if ret < 0:
- failed = failed + 1
- functions_failed.append(function)
- del functions[function]
- if ret == 0:
- skipped = skipped + 1
- functions_skipped.append(function)
- del functions[function]
- if ret == 1:
- nb_wrap = nb_wrap + 1
+ ret = print_function_wrapper(function, wrapper, export, include)
+ if ret < 0:
+ failed = failed + 1
+ functions_failed.append(function)
+ del functions[function]
+ if ret == 0:
+ skipped = skipped + 1
+ functions_skipped.append(function)
+ del functions[function]
+ if ret == 1:
+ nb_wrap = nb_wrap + 1
include.close()
export.close()
wrapper.close()
@@ -623,7 +623,7 @@ def buildStubs():
print "Missing type converters: "
for type in unknown_types.keys():
- print "%s:%d " % (type, len(unknown_types[type])),
+ print "%s:%d " % (type, len(unknown_types[type])),
print
for f in functions_failed:
@@ -955,7 +955,7 @@ def buildWrappers():
global functions_noexcept
for type in classes_type.keys():
- function_classes[classes_type[type][2]] = []
+ function_classes[classes_type[type][2]] = []
#
# Build the list of C types to look for ordered to start
@@ -966,46 +966,46 @@ def buildWrappers():
ctypes_processed = {}
classes_processed = {}
for classe in primary_classes:
- classes_list.append(classe)
- classes_processed[classe] = ()
- for type in classes_type.keys():
- tinfo = classes_type[type]
- if tinfo[2] == classe:
- ctypes.append(type)
- ctypes_processed[type] = ()
+ classes_list.append(classe)
+ classes_processed[classe] = ()
+ for type in classes_type.keys():
+ tinfo = classes_type[type]
+ if tinfo[2] == classe:
+ ctypes.append(type)
+ ctypes_processed[type] = ()
for type in classes_type.keys():
- if ctypes_processed.has_key(type):
- continue
- tinfo = classes_type[type]
- if not classes_processed.has_key(tinfo[2]):
- classes_list.append(tinfo[2])
- classes_processed[tinfo[2]] = ()
+ if ctypes_processed.has_key(type):
+ continue
+ tinfo = classes_type[type]
+ if not classes_processed.has_key(tinfo[2]):
+ classes_list.append(tinfo[2])
+ classes_processed[tinfo[2]] = ()
- ctypes.append(type)
- ctypes_processed[type] = ()
+ ctypes.append(type)
+ ctypes_processed[type] = ()
for name in functions.keys():
- found = 0;
- (desc, ret, args, file, cond) = functions[name]
- for type in ctypes:
- classe = classes_type[type][2]
-
- if name[0:3] == "vir" and len(args) >= 1 and args[0][1] == type:
- found = 1
- func = nameFixup(name, classe, type, file)
- info = (0, func, name, ret, args, file)
- function_classes[classe].append(info)
- elif name[0:3] == "vir" and len(args) >= 2 and args[1][1] == type \
- and file != "python_accessor" and not name in function_skip_index_one:
- found = 1
- func = nameFixup(name, classe, type, file)
- info = (1, func, name, ret, args, file)
- function_classes[classe].append(info)
- if found == 1:
- continue
- func = nameFixup(name, "None", file, file)
- info = (0, func, name, ret, args, file)
- function_classes['None'].append(info)
+ found = 0;
+ (desc, ret, args, file, cond) = functions[name]
+ for type in ctypes:
+ classe = classes_type[type][2]
+
+ if name[0:3] == "vir" and len(args) >= 1 and args[0][1] == type:
+ found = 1
+ func = nameFixup(name, classe, type, file)
+ info = (0, func, name, ret, args, file)
+ function_classes[classe].append(info)
+ elif name[0:3] == "vir" and len(args) >= 2 and args[1][1] == type \
+ and file != "python_accessor" and not name in function_skip_index_one:
+ found = 1
+ func = nameFixup(name, classe, type, file)
+ info = (1, func, name, ret, args, file)
+ function_classes[classe].append(info)
+ if found == 1:
+ continue
+ func = nameFixup(name, "None", file, file)
+ info = (0, func, name, ret, args, file)
+ function_classes['None'].append(info)
classes = open("libvirt.py", "w")
@@ -1032,60 +1032,60 @@ def buildWrappers():
extra.close()
if function_classes.has_key("None"):
- flist = function_classes["None"]
- flist.sort(functionCompare)
- oldfile = ""
- for info in flist:
- (index, func, name, ret, args, file) = info
- if file != oldfile:
- classes.write("#\n# Functions from module %s\n#\n\n" % file)
- oldfile = file
- classes.write("def %s(" % func)
- n = 0
- for arg in args:
- if n != 0:
- classes.write(", ")
- classes.write("%s" % arg[0])
- n = n + 1
- classes.write("):\n")
- writeDoc(name, args, ' ', classes);
-
- for arg in args:
- if classes_type.has_key(arg[1]):
- classes.write(" if %s is None: %s__o = None\n" %
- (arg[0], arg[0]))
- classes.write(" else: %s__o = %s%s\n" %
- (arg[0], arg[0], classes_type[arg[1]][0]))
- if ret[0] != "void":
- classes.write(" ret = ");
- else:
- classes.write(" ");
- classes.write("libvirtmod.%s(" % name)
- n = 0
- for arg in args:
- if n != 0:
- classes.write(", ");
- classes.write("%s" % arg[0])
- if classes_type.has_key(arg[1]):
- classes.write("__o");
- n = n + 1
- classes.write(")\n");
+ flist = function_classes["None"]
+ flist.sort(functionCompare)
+ oldfile = ""
+ for info in flist:
+ (index, func, name, ret, args, file) = info
+ if file != oldfile:
+ classes.write("#\n# Functions from module %s\n#\n\n" % file)
+ oldfile = file
+ classes.write("def %s(" % func)
+ n = 0
+ for arg in args:
+ if n != 0:
+ classes.write(", ")
+ classes.write("%s" % arg[0])
+ n = n + 1
+ classes.write("):\n")
+ writeDoc(name, args, ' ', classes);
+
+ for arg in args:
+ if classes_type.has_key(arg[1]):
+ classes.write(" if %s is None: %s__o = None\n" %
+ (arg[0], arg[0]))
+ classes.write(" else: %s__o = %s%s\n" %
+ (arg[0], arg[0], classes_type[arg[1]][0]))
+ if ret[0] != "void":
+ classes.write(" ret = ");
+ else:
+ classes.write(" ");
+ classes.write("libvirtmod.%s(" % name)
+ n = 0
+ for arg in args:
+ if n != 0:
+ classes.write(", ");
+ classes.write("%s" % arg[0])
+ if classes_type.has_key(arg[1]):
+ classes.write("__o");
+ n = n + 1
+ classes.write(")\n");
if ret[0] != "void":
- if classes_type.has_key(ret[0]):
- #
- # Raise an exception
- #
- if functions_noexcept.has_key(name):
- classes.write(" if ret is None:return None\n");
- else:
- classes.write(
- " if ret is None:raise libvirtError('%s() failed')\n" %
- (name))
-
- classes.write(" return ");
- classes.write(classes_type[ret[0]][1] % ("ret"));
- classes.write("\n");
+ if classes_type.has_key(ret[0]):
+ #
+ # Raise an exception
+ #
+ if functions_noexcept.has_key(name):
+ classes.write(" if ret is None:return None\n");
+ else:
+ classes.write(
+ " if ret is None:raise libvirtError('%s() failed')\n" %
+ (name))
+
+ classes.write(" return ");
+ classes.write(classes_type[ret[0]][1] % ("ret"));
+ classes.write("\n");
# For functions returning an integral type there are
# several things that we can do, depending on the
@@ -1099,7 +1099,7 @@ def buildWrappers():
classes.write ((" if " + test +
": raise libvirtError ('%s() failed')\n") %
("ret", name))
- classes.write(" return ret\n")
+ classes.write(" return ret\n")
elif is_list_type (ret[0]):
if not functions_noexcept.has_key (name):
@@ -1110,30 +1110,30 @@ def buildWrappers():
classes.write ((" if " + test +
": raise libvirtError ('%s() failed')\n") %
("ret", name))
- classes.write(" return ret\n")
+ classes.write(" return ret\n")
- else:
- classes.write(" return ret\n")
+ else:
+ classes.write(" return ret\n")
- classes.write("\n");
+ classes.write("\n");
for classname in classes_list:
- if classname == "None":
- pass
- else:
- if classes_ancestor.has_key(classname):
- classes.write("class %s(%s):\n" % (classname,
- classes_ancestor[classname]))
- classes.write(" def __init__(self, _obj=None):\n")
- if reference_keepers.has_key(classname):
- rlist = reference_keepers[classname]
- for ref in rlist:
- classes.write(" self.%s = None\n" % ref[1])
- classes.write(" self._o = _obj\n")
- classes.write(" %s.__init__(self, _obj=_obj)\n\n" % (
- classes_ancestor[classname]))
- else:
- classes.write("class %s:\n" % (classname))
+ if classname == "None":
+ pass
+ else:
+ if classes_ancestor.has_key(classname):
+ classes.write("class %s(%s):\n" % (classname,
+ classes_ancestor[classname]))
+ classes.write(" def __init__(self, _obj=None):\n")
+ if reference_keepers.has_key(classname):
+ rlist = reference_keepers[classname]
+ for ref in rlist:
+ classes.write(" self.%s = None\n" % ref[1])
+ classes.write(" self._o = _obj\n")
+ classes.write(" %s.__init__(self, _obj=_obj)\n\n" % (
+ classes_ancestor[classname]))
+ else:
+ classes.write("class %s:\n" % (classname))
if classname in [ "virDomain", "virNetwork", "virInterface", "virStoragePool",
"virStorageVol", "virNodeDevice", "virSecret","virStream",
"virNWFilter" ]:
@@ -1142,10 +1142,10 @@ def buildWrappers():
classes.write(" def __init__(self, dom, _obj=None):\n")
else:
classes.write(" def __init__(self, _obj=None):\n")
- if reference_keepers.has_key(classname):
- list = reference_keepers[classname]
- for ref in list:
- classes.write(" self.%s = None\n" % ref[1])
+ if reference_keepers.has_key(classname):
+ list = reference_keepers[classname]
+ for ref in list:
+ classes.write(" self.%s = None\n" % ref[1])
if classname in [ "virDomain", "virNetwork", "virInterface",
"virNodeDevice", "virSecret", "virStream",
"virNWFilter" ]:
@@ -1156,16 +1156,16 @@ def buildWrappers():
" self._conn = conn._conn\n")
elif classname in [ "virDomainSnapshot" ]:
classes.write(" self._dom = dom\n")
- classes.write(" if _obj != None:self._o = _obj;return\n")
- classes.write(" self._o = None\n\n");
- destruct=None
- if classes_destructors.has_key(classname):
- classes.write(" def __del__(self):\n")
- classes.write(" if self._o != None:\n")
- classes.write(" libvirtmod.%s(self._o)\n" %
- classes_destructors[classname]);
- classes.write(" self._o = None\n\n");
- destruct=classes_destructors[classname]
+ classes.write(" if _obj != None:self._o = _obj;return\n")
+ classes.write(" self._o = None\n\n");
+ destruct=None
+ if classes_destructors.has_key(classname):
+ classes.write(" def __del__(self):\n")
+ classes.write(" if self._o != None:\n")
+ classes.write(" libvirtmod.%s(self._o)\n" %
+ classes_destructors[classname]);
+ classes.write(" self._o = None\n\n");
+ destruct=classes_destructors[classname]
if not class_skip_connect_impl.has_key(classname):
# Build python safe 'connect' method
@@ -1176,99 +1176,99 @@ def buildWrappers():
classes.write(" def domain(self):\n")
classes.write(" return self._dom\n\n")
- flist = function_classes[classname]
- flist.sort(functionCompare)
- oldfile = ""
- for info in flist:
- (index, func, name, ret, args, file) = info
- #
- # Do not provide as method the destructors for the class
- # to avoid double free
- #
- if name == destruct:
- continue;
- if file != oldfile:
- if file == "python_accessor":
- classes.write(" # accessors for %s\n" % (classname))
- else:
- classes.write(" #\n")
- classes.write(" # %s functions from module %s\n" % (
- classname, file))
- classes.write(" #\n\n")
- oldfile = file
- classes.write(" def %s(self" % func)
- n = 0
- for arg in args:
- if n != index:
- classes.write(", %s" % arg[0])
- n = n + 1
- classes.write("):\n")
- writeDoc(name, args, ' ', classes);
- n = 0
- for arg in args:
- if classes_type.has_key(arg[1]):
- if n != index:
- classes.write(" if %s is None: %s__o = None\n" %
- (arg[0], arg[0]))
- classes.write(" else: %s__o = %s%s\n" %
- (arg[0], arg[0], classes_type[arg[1]][0]))
- n = n + 1
- if ret[0] != "void":
- classes.write(" ret = ");
- else:
- classes.write(" ");
- classes.write("libvirtmod.%s(" % name)
- n = 0
- for arg in args:
- if n != 0:
- classes.write(", ");
- if n != index:
- classes.write("%s" % arg[0])
- if classes_type.has_key(arg[1]):
- classes.write("__o");
- else:
- classes.write("self");
- if classes_type.has_key(arg[1]):
- classes.write(classes_type[arg[1]][0])
- n = n + 1
- classes.write(")\n");
+ flist = function_classes[classname]
+ flist.sort(functionCompare)
+ oldfile = ""
+ for info in flist:
+ (index, func, name, ret, args, file) = info
+ #
+ # Do not provide as method the destructors for the class
+ # to avoid double free
+ #
+ if name == destruct:
+ continue;
+ if file != oldfile:
+ if file == "python_accessor":
+ classes.write(" # accessors for %s\n" % (classname))
+ else:
+ classes.write(" #\n")
+ classes.write(" # %s functions from module %s\n" % (
+ classname, file))
+ classes.write(" #\n\n")
+ oldfile = file
+ classes.write(" def %s(self" % func)
+ n = 0
+ for arg in args:
+ if n != index:
+ classes.write(", %s" % arg[0])
+ n = n + 1
+ classes.write("):\n")
+ writeDoc(name, args, ' ', classes);
+ n = 0
+ for arg in args:
+ if classes_type.has_key(arg[1]):
+ if n != index:
+ classes.write(" if %s is None: %s__o = None\n" %
+ (arg[0], arg[0]))
+ classes.write(" else: %s__o = %s%s\n" %
+ (arg[0], arg[0], classes_type[arg[1]][0]))
+ n = n + 1
+ if ret[0] != "void":
+ classes.write(" ret = ");
+ else:
+ classes.write(" ");
+ classes.write("libvirtmod.%s(" % name)
+ n = 0
+ for arg in args:
+ if n != 0:
+ classes.write(", ");
+ if n != index:
+ classes.write("%s" % arg[0])
+ if classes_type.has_key(arg[1]):
+ classes.write("__o");
+ else:
+ classes.write("self");
+ if classes_type.has_key(arg[1]):
+ classes.write(classes_type[arg[1]][0])
+ n = n + 1
+ classes.write(")\n");
if name == "virConnectClose":
classes.write(" self._o = None\n")
# For functions returning object types:
if ret[0] != "void":
- if classes_type.has_key(ret[0]):
- #
- # Raise an exception
- #
- if functions_noexcept.has_key(name):
- classes.write(
- " if ret is None:return None\n");
- else:
+ if classes_type.has_key(ret[0]):
+ #
+ # Raise an exception
+ #
+ if functions_noexcept.has_key(name):
+ classes.write(
+ " if ret is None:return None\n");
+ else:
if classname == "virConnect":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', conn=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', conn=self)\n" %
(name))
elif classname == "virDomain":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', dom=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', dom=self)\n" %
(name))
elif classname == "virNetwork":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
(name))
elif classname == "virInterface":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
(name))
elif classname == "virStoragePool":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', pool=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', pool=self)\n" %
(name))
elif classname == "virStorageVol":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', vol=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', vol=self)\n" %
(name))
elif classname == "virDomainSnapshot":
classes.write(
@@ -1276,54 +1276,54 @@ def buildWrappers():
(name))
else:
classes.write(
- " if ret is None:raise libvirtError('%s() failed')\n" %
+ " if ret is None:raise libvirtError('%s() failed')\n" %
(name))
- #
- # generate the returned class wrapper for the object
- #
- classes.write(" __tmp = ");
- classes.write(classes_type[ret[0]][1] % ("ret"));
- classes.write("\n");
+ #
+ # generate the returned class wrapper for the object
+ #
+ classes.write(" __tmp = ");
+ classes.write(classes_type[ret[0]][1] % ("ret"));
+ classes.write("\n");
#
- # Sometime one need to keep references of the source
- # class in the returned class object.
- # See reference_keepers for the list
- #
- tclass = classes_type[ret[0]][2]
- if reference_keepers.has_key(tclass):
- list = reference_keepers[tclass]
- for pref in list:
- if pref[0] == classname:
- classes.write(" __tmp.%s = self\n" %
- pref[1])
+ # Sometime one need to keep references of the source
+ # class in the returned class object.
+ # See reference_keepers for the list
+ #
+ tclass = classes_type[ret[0]][2]
+ if reference_keepers.has_key(tclass):
+ list = reference_keepers[tclass]
+ for pref in list:
+ if pref[0] == classname:
+ classes.write(" __tmp.%s = self\n" %
+ pref[1])
# Post-processing - just before we return.
if function_post.has_key(name):
classes.write(" %s\n" %
(function_post[name]));
- #
- # return the class
- #
- classes.write(" return __tmp\n");
- elif converter_type.has_key(ret[0]):
- #
- # Raise an exception
- #
- if functions_noexcept.has_key(name):
- classes.write(
- " if ret is None:return None");
+ #
+ # return the class
+ #
+ classes.write(" return __tmp\n");
+ elif converter_type.has_key(ret[0]):
+ #
+ # Raise an exception
+ #
+ if functions_noexcept.has_key(name):
+ classes.write(
+ " if ret is None:return None");
# Post-processing - just before we return.
if function_post.has_key(name):
classes.write(" %s\n" %
(function_post[name]));
- classes.write(" return ");
- classes.write(converter_type[ret[0]] % ("ret"));
- classes.write("\n");
+ classes.write(" return ");
+ classes.write(converter_type[ret[0]] % ("ret"));
+ classes.write("\n");
# For functions returning an integral type there
# are several things that we can do, depending on
@@ -1412,15 +1412,15 @@ def buildWrappers():
classes.write (" return ret\n")
- else:
+ else:
# Post-processing - just before we return.
if function_post.has_key(name):
classes.write(" %s\n" %
(function_post[name]));
- classes.write(" return ret\n");
+ classes.write(" return ret\n");
- classes.write("\n");
+ classes.write("\n");
# Append "<classname>.py" to class def, iff it exists
try:
extra = open(os.path.join(srcPref,"libvirt-override-" + classname + ".py"), "r")
diff --git a/python/tests/create.py b/python/tests/create.py
index d2c434e..4e5f644 100755
--- a/python/tests/create.py
+++ b/python/tests/create.py
@@ -23,7 +23,7 @@ osroot = None
for root in osroots:
if os.access(root, os.R_OK):
osroot = root
- break
+ break
if osroot == None:
print "Could not find a guest OS root, edit to add the path in osroots"
@@ -124,11 +124,11 @@ while i < 30:
time.sleep(1)
i = i + 1
try:
- t = dom.info()[4]
+ t = dom.info()[4]
except:
okay = 0
- t = -1
- break;
+ t = -1
+ break;
if t == 0:
break
--
1.7.4.1
[libvirt] [PATCH v2] virsh: freecell --all getting wrong NUMA nodes count
by Michal Privoznik 18 Feb '11
Virsh freecell --all was not only getting a wrong NUMA nodes count, but
even wrong NUMA node ids. These do not have to be contiguous, as I
found out while testing this. Hence also a modification of the
nodeGetCellsFreeMemory() error message.
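For illustration, the idea the patch implements — read the (possibly non-contiguous) cell ids out of the capabilities XML instead of assuming 0..n-1 — can be sketched in Python. The XML snippet and function name below are made up for the example; they are not libvirt API:

```python
import xml.etree.ElementTree as ET

# Hypothetical capabilities document: note the cell ids are NOT contiguous.
CAP_XML = """
<capabilities>
  <host>
    <topology>
      <cells num="2">
        <cell id="0"/>
        <cell id="8"/>
      </cells>
    </topology>
  </host>
</capabilities>
"""

def numa_cell_ids(cap_xml):
    """Return the NUMA cell ids advertised in a capabilities document."""
    root = ET.fromstring(cap_xml)
    return [int(cell.get("id"))
            for cell in root.findall("./host/topology/cells/cell")]

print(numa_cell_ids(CAP_XML))  # -> [0, 8]
```

Iterating over these ids, rather than over a 0-based range of the node count, is what lets freecell --all query each real node.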
---
src/nodeinfo.c | 3 +-
tools/virsh.c | 70 ++++++++++++++++++++++++++++++++++++++++++++------------
2 files changed, 57 insertions(+), 16 deletions(-)
diff --git a/src/nodeinfo.c b/src/nodeinfo.c
index 22d53e5..ba2f793 100644
--- a/src/nodeinfo.c
+++ b/src/nodeinfo.c
@@ -468,7 +468,8 @@ nodeGetCellsFreeMemory(virConnectPtr conn ATTRIBUTE_UNUSED,
long long mem;
if (numa_node_size64(n, &mem) < 0) {
nodeReportError(VIR_ERR_INTERNAL_ERROR,
- "%s", _("Failed to query NUMA free memory"));
+ "%s: %d", _("Failed to query NUMA free memory "
+ "for node"), n);
goto cleanup;
}
freeMems[numCells++] = mem;
diff --git a/tools/virsh.c b/tools/virsh.c
index 50d5e33..165a791 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -2283,9 +2283,15 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
int ret;
int cell, cell_given;
unsigned long long memory;
- unsigned long long *nodes = NULL;
+ xmlNodePtr *nodes = NULL;
+ unsigned long nodes_cnt;
+ unsigned long *nodes_id = NULL;
+ unsigned long long *nodes_free = NULL;
int all_given;
- virNodeInfo info;
+ int i;
+ char *cap_xml = NULL;
+ xmlDocPtr xml = NULL;
+ xmlXPathContextPtr ctxt = NULL;
if (!vshConnectionUsability(ctl, ctl->conn))
@@ -2301,32 +2307,60 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
}
if (all_given) {
- if (virNodeGetInfo(ctl->conn, &info) < 0) {
- vshError(ctl, "%s", _("failed to get NUMA nodes count"));
+ cap_xml = virConnectGetCapabilities(ctl->conn);
+ if (!cap_xml) {
+ vshError(ctl, "%s", _("unable to get node capabilities"));
goto cleanup;
}
- if (!info.nodes) {
- vshError(ctl, "%s", _("no NUMA nodes present"));
+ xml = xmlReadDoc((const xmlChar *) cap_xml, "node.xml", NULL,
+ XML_PARSE_NOENT | XML_PARSE_NONET |
+ XML_PARSE_NOWARNING);
+
+ if (!xml) {
+ vshError(ctl, "%s", _("unable to get node capabilities"));
goto cleanup;
}
- if (VIR_ALLOC_N(nodes, info.nodes) < 0) {
- vshError(ctl, "%s", _("could not allocate memory"));
+ ctxt = xmlXPathNewContext(xml);
+ nodes_cnt = virXPathNodeSet("/capabilities/host/topology/cells/cell",
+ ctxt, &nodes);
+
+ if (nodes_cnt == -1) {
+ vshError(ctl, "%s", _("could not get information about "
+ "NUMA topology"));
goto cleanup;
}
- ret = virNodeGetCellsFreeMemory(ctl->conn, nodes, 0, info.nodes);
- if (ret != info.nodes) {
- vshError(ctl, "%s", _("could not get information about "
- "all NUMA nodes"));
+ if ((VIR_ALLOC_N(nodes_free, nodes_cnt) < 0) ||
+ (VIR_ALLOC_N(nodes_id, nodes_cnt) < 0)){
+ vshError(ctl, "%s", _("could not allocate memory"));
goto cleanup;
}
+ for (i = 0; i < nodes_cnt; i++) {
+ unsigned long id;
+ char *val = virXMLPropString(nodes[i], "id");
+ char *conv = NULL;
+ id = strtoul(val, &conv, 10);
+ if (*conv !='\0') {
+ vshError(ctl, "%s", _("conversion from string failed"));
+ goto cleanup;
+ }
+ nodes_id[i]=id;
+ ret = virNodeGetCellsFreeMemory(ctl->conn, &(nodes_free[i]), id, 1);
+ if (ret != 1) {
+ vshError(ctl, "%s %lu", _("failed to get free memory for "
+ "NUMA node number:"), id);
+ goto cleanup;
+ }
+ }
+
memory = 0;
- for (cell = 0; cell < info.nodes; cell++) {
- vshPrint(ctl, "%5d: %10llu kB\n", cell, (nodes[cell]/1024));
- memory += nodes[cell];
+ for (cell = 0; cell < nodes_cnt; cell++) {
+ vshPrint(ctl, "%5lu: %10llu kB\n", nodes_id[cell],
+ (nodes_free[cell]/1024));
+ memory += nodes_free[cell];
}
vshPrintExtra(ctl, "--------------------\n");
@@ -2351,7 +2385,13 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
func_ret = TRUE;
cleanup:
+ xmlXPathFreeContext(ctxt);
+ if (xml)
+ xmlFreeDoc(xml);
VIR_FREE(nodes);
+ VIR_FREE(nodes_free);
+ VIR_FREE(nodes_id);
+ VIR_FREE(cap_xml);
return func_ret;
}
--
1.7.3.5
Since virDomainDefParseString already reports an error when it fails,
the redundant error-reporting code would override the error reported
by virDomainDefParseString with less clear messages; remove it.
* src/qemu/qemu_driver.c
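The pattern being applied can be sketched in Python with hypothetical names (not libvirt's actual API): the parsing routine raises its own descriptive error, and the caller simply propagates it rather than wrapping it in a vaguer one.

```python
def parse_domain_def(xml):
    """Parse a domain definition; on failure it raises (reports) a
    descriptive error itself, so callers need not add their own."""
    if not xml.lstrip().startswith("<domain"):
        raise ValueError("expected a <domain> root element")
    return {"xml": xml}

def define_domain(xml):
    # Caller: no second, vaguer "failed to parse XML" wrapper here;
    # whatever parse_domain_def reported is what the user sees.
    return parse_domain_def(xml)

print(define_domain("<domain/>"))
```

The design point is that only the function closest to the failure knows the specific cause, so layering a generic message on top loses information.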
---
src/qemu/qemu_driver.c | 33 +++++++++++++--------------------
1 files changed, 13 insertions(+), 20 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ab664a0..fd8e401 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3600,8 +3600,10 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
virCheckFlags(VIR_DOMAIN_START_PAUSED, NULL);
qemuDriverLock(driver);
- if (!(def = virDomainDefParseString(driver->caps, xml,
- VIR_DOMAIN_XML_INACTIVE)))
+
+ def = virDomainDefParseString(driver->caps, xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
+ /* virDomainDefParseString reports the error. */
goto cleanup;
if (virSecurityManagerVerify(driver->securityManager, def) < 0)
@@ -5746,12 +5748,9 @@ qemudDomainSaveImageOpen(struct qemud_driver *driver,
}
/* Create a domain from this XML */
- if (!(def = virDomainDefParseString(driver->caps, xml,
- VIR_DOMAIN_XML_INACTIVE))) {
- qemuReportError(VIR_ERR_OPERATION_FAILED,
- "%s", _("failed to parse XML"));
+ def = virDomainDefParseString(driver->caps, xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto error;
- }
VIR_FREE(xml);
@@ -6412,8 +6411,9 @@ static virDomainPtr qemudDomainDefine(virConnectPtr conn, const char *xml) {
int dupVM;
qemuDriverLock(driver);
- if (!(def = virDomainDefParseString(driver->caps, xml,
- VIR_DOMAIN_XML_INACTIVE)))
+
+ def = virDomainDefParseString(driver->caps, xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto cleanup;
if (virSecurityManagerVerify(driver->securityManager, def) < 0)
@@ -8046,13 +8046,9 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
}
/* Parse the domain XML. */
- if (!(def = virDomainDefParseString(driver->caps, dom_xml,
- VIR_DOMAIN_XML_INACTIVE))) {
- qemuReportError(VIR_ERR_OPERATION_FAILED,
- "%s", _("failed to parse XML, libvirt version may be "
- "different between source and destination host"));
+ def = virDomainDefParseString(driver->caps, dom_xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto cleanup;
- }
if (!qemuDomainIsMigratable(def))
goto cleanup;
@@ -8320,12 +8316,9 @@ qemudDomainMigratePrepare2 (virConnectPtr dconn,
VIR_DEBUG("Generated uri_out=%s", *uri_out);
/* Parse the domain XML. */
- if (!(def = virDomainDefParseString(driver->caps, dom_xml,
- VIR_DOMAIN_XML_INACTIVE))) {
- qemuReportError(VIR_ERR_OPERATION_FAILED,
- "%s", _("failed to parse XML"));
+ def = virDomainDefParseString(driver->caps, dom_xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto cleanup;
- }
if (!qemuDomainIsMigratable(def))
goto cleanup;
--
1.7.4
[libvirt] [PATCH 0/2] virsh: freecell --all getting wrong NUMA nodes count
by Michal Privoznik 18 Feb '11
This is a fix for the virsh freecell command. I was using virNodeGetInfo()
instead of virConnectGetCapabilities(), which returns XML containing the
right numbers. Therefore we need an XML/XPath function to pick node values
out of XML kept in a buffer. Moreover, this function might be used to
replace bad practice in cmdVcpucount().
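The helper described in the first patch — pull one node out of an in-memory XML document via XPath and convert it to an unsigned long, rejecting trailing garbage — can be sketched in Python (function name and document are illustrative, not the virsh code):

```python
import xml.etree.ElementTree as ET

def xpath_ulong(doc, path, attr):
    """Find the first node matching `path`, read attribute `attr`, and
    convert it to an unsigned integer, rejecting trailing garbage
    (the role strtoul's end-pointer check plays in the C version)."""
    node = ET.fromstring(doc).find(path)
    if node is None:
        raise ValueError("no node matches %r" % path)
    val = node.get(attr)
    if val is None or not val.isdigit():
        raise ValueError("%r is not an unsigned integer" % val)
    return int(val)

DOC = '<cells num="2"><cell id="0"/><cell id="8"/></cells>'
print(xpath_ulong(DOC, "./cell[2]", "id"))  # -> 8
```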
Michal Privoznik (2):
virsh.c: new xPathULong() to parse XML returning ulong value of
selected node.
virsh: fix wrong NUMA nodes count getting
tools/virsh.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++--------
1 files changed, 65 insertions(+), 11 deletions(-)
--
1.7.3.5
[libvirt] [PATCH] Do not add drive 'boot=on' param when a kernel is specified
by Jim Fehlig 18 Feb '11
libvirt-tck was failing several domain tests [1] with qemu 0.14, which
is now less tolerant of specifying two boot ROMs with the same boot index [2].
Drop the 'boot=on' param if a kernel has been specified.
[1] https://www.redhat.com/archives/libvir-list/2011-February/msg00559.html
[2] http://lists.nongnu.org/archive/html/qemu-devel/2011-02/msg01892.html
---
src/qemu/qemu_command.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 618d3a9..a9cc23b 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -3173,7 +3173,7 @@ qemuBuildCommandLine(virConnectPtr conn,
int bootCD = 0, bootFloppy = 0, bootDisk = 0;
/* If QEMU supports boot=on for -drive param... */
- if (qemuCmdFlags & QEMUD_CMD_FLAG_DRIVE_BOOT) {
+ if (qemuCmdFlags & QEMUD_CMD_FLAG_DRIVE_BOOT && !def->os.kernel) {
for (i = 0 ; i < def->os.nBootDevs ; i++) {
switch (def->os.bootDevs[i]) {
case VIR_DOMAIN_BOOT_CDROM:
--
1.7.3.1
Hi,
I refactored the XM and SEXPR parsing routines from the xen-unified
driver into a separate directory (xenxs; x for xm and s for sexpr).
This way xen drivers other than xen-unified are able to use the
parsing functionality, for example the upcoming
XenLight (libxl) driver.
To use the XM parsing functions, include "xen_xm.h"; for SEXPR parsing, "xen_sxpr.h".
The patch is rather big, but most of it is code movement.
Some parsing functions required a driver object to fetch the tty path and vncport.
I removed all references to a specific driver and added
additional parameters as a replacement.
The tests sexpr2xml, xmconfig and xml2sexpr have been adapted and show no errors.
Thanks in advance for your comments about this.
Cheers,
Markus
Markus Groß (2):
Moved Xen SEXPR and XM parsing functionality to separate directory.
Added to authors to please make syntax-check
AUTHORS | 1 +
configure.ac | 4 +
include/libvirt/virterror.h | 2 +-
src/Makefile.am | 15 +-
src/util/virterror.c | 4 +-
src/xen/sexpr.c | 568 ----
src/xen/sexpr.h | 55 -
src/xen/xen_driver.c | 19 +-
src/xen/xend_internal.c | 6673 ++++++++++++++-----------------------------
src/xen/xend_internal.h | 30 -
src/xen/xm_internal.c | 1706 +-----------
src/xen/xm_internal.h | 3 -
src/xenxs/sexpr.c | 647 +++++
src/xenxs/sexpr.h | 63 +
src/xenxs/xen_sxpr.c | 2149 ++++++++++++++
src/xenxs/xen_sxpr.h | 62 +
src/xenxs/xen_xm.c | 1716 +++++++++++
src/xenxs/xen_xm.h | 41 +
src/xenxs/xenxs_private.h | 63 +
tests/sexpr2xmltest.c | 12 +-
tests/xmconfigtest.c | 5 +-
tests/xml2sexprtest.c | 4 +-
22 files changed, 7012 insertions(+), 6830 deletions(-)
delete mode 100644 src/xen/sexpr.c
delete mode 100644 src/xen/sexpr.h
create mode 100644 src/xenxs/sexpr.c
create mode 100644 src/xenxs/sexpr.h
create mode 100644 src/xenxs/xen_sxpr.c
create mode 100644 src/xenxs/xen_sxpr.h
create mode 100644 src/xenxs/xen_xm.c
create mode 100644 src/xenxs/xen_xm.h
create mode 100644 src/xenxs/xenxs_private.h
--
1.7.4.1
Hi all,
This used to work on libvirt-0.8.7:
<console type='pty'>
<target type='virtio'/>
</console>
Using the same configuration with libvirt-0.8.8:
[root@ev004 ~]# virsh create /data/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355.xml
error: Failed to create domain from
/data/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355.xml
error: internal error no assigned pty for device console0
I'm using qemu-kvm-0.14.0-0.1.201102107aa8c46.
Here's the log from starting a guest with 0.8.7:
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
/usr/bin/qemu-kvm -S -M pc-0.14 -cpu
core2duo,+lahf_lm,+popcnt,+sse4.2,+sse4.1,+cx16,-monitor,-vme
-enable-kvm -m 512 -smp 2,sockets=2,cores=1,threads=1 -name
4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355 -uuid
4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355 -nodefconfig -nodefaults -chardev
socket,id=monitor,path=/var/lib/libvirt/qemu/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355.monitor,server,nowait
-mon chardev=monitor,mode=control -rtc base=utc -boot c -device
virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/dev/vgdata/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355-root,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none
-device virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
-drive file=/dev/vgdata/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355-swap,if=none,id=drive-virtio-disk1,format=raw,cache=none
-device virtio-blk-pci,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=64,id=hostnet0,vhost=on,vhostfd=65 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:50:a4:55,bus=pci.0,addr=0x3
-netdev tap,fd=66,id=hostnet1,vhost=on,vhostfd=67 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=00:16:3e:71:c1:49,bus=pci.0,addr=0x4
-chardev pty,id=console0 -device virtconsole,chardev=console0 -usb
-device usb-tablet,id=input0 -vnc 0.0.0.0:1,password -vga std
char device redirected to /dev/pts/14
And the same guest with 0.8.8:
LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin QEMU_AUDIO_DRV=none
/usr/bin/qemu-kvm -S -M pc-0.14 -cpu
core2duo,+lahf_lm,+popcnt,+sse4.2,+sse4.1,+cx16,-monitor,-vme
-enable-kvm -m 512 -smp 2,sockets=2,cores=1,threads=1 -name
4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355 -uuid
4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355 -nodefconfig -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc -boot c
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/dev/vgdata/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355-root,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none
-device virtio-blk-pci,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0
-drive file=/dev/vgdata/4d5d8a24-bb70-4eff-b1b5-3d8e5bd5c355-swap,if=none,id=drive-virtio-disk1,format=raw,cache=none
-device virtio-blk-pci,bus=pci.0,addr=0x7,drive=drive-virtio-disk1,id=virtio-disk1
-netdev tap,fd=33,id=hostnet0,vhost=on,vhostfd=34 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:16:3e:50:a4:55,bus=pci.0,addr=0x3
-netdev tap,fd=35,id=hostnet1,vhost=on,vhostfd=36 -device
virtio-net-pci,netdev=hostnet1,id=net1,mac=00:16:3e:71:c1:49,bus=pci.0,addr=0x4
-chardev pty,id=charconsole0 -device
virtconsole,chardev=charconsole0,id=console0 -usb -device
usb-tablet,id=input0 -vnc 0.0.0.0:1,password -vga std
char device redirected to /dev/pts/14
Kind regards,
Ruben Kerkhof
Thanks everybody for the testing feedback and fixes; in retrospect
I should really have done this in previous releases! So the third
rc tarball is out; it's likely to be the last one before the release
(within 48 hours):
ftp://libvirt.org/libvirt/libvirt-0.8.8-rc3.tar.gz
give it a try !
thanks,
Daniel
--
Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
daniel(a)veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library http://libvirt.org/
17 Feb '11
When we attach a disk but specify a wrong disk image format, the
qemu monitor command drive_add will fail, but libvirt does not detect
this error.
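The check added by the patch below amounts to scanning the text-monitor reply for qemu's known failure string. A standalone sketch of that logic (the reply strings here are illustrative, not captured qemu output):

```python
def check_drive_add_reply(reply):
    """Mirror the added check in qemuMonitorTextAddDrive(): the text
    monitor reports success and failure alike as free-form text, so
    the only way to detect a failed drive_add is to match the known
    error message in the reply. Returns 0 on success, -1 on failure."""
    if "could not open disk image" in reply:
        return -1
    return 0

assert check_drive_add_reply("OK") == 0
assert check_drive_add_reply(
    "could not open disk image /tmp/x.img: Invalid argument") == -1
```

This fragility of string-matching replies is one reason the JSON monitor (QMP), which returns structured errors, is preferred where available.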
Signed-off-by: Wen Congyang <wency(a)cn.fujitsu.com>
---
src/qemu/qemu_monitor_text.c | 6 ++++++
1 files changed, 6 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_monitor_text.c b/src/qemu/qemu_monitor_text.c
index 6d0ba4c..0fd7546 100644
--- a/src/qemu/qemu_monitor_text.c
+++ b/src/qemu/qemu_monitor_text.c
@@ -2453,6 +2453,12 @@ int qemuMonitorTextAddDrive(qemuMonitorPtr mon,
goto cleanup;
}
+ if (strstr(reply, "could not open disk image")) {
+ qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
+ _("open disk image file failed"));
+ goto cleanup;
+ }
+
ret = 0;
cleanup:
--
1.7.1
* src/util/cgroup.c (virCgroupSetValueStr, virCgroupGetValueStr)
(virCgroupRemoveRecursively): VIR_DEBUG can clobber errno.
(virCgroupRemove): Use VIR_DEBUG rather than DEBUG.
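The bug being fixed is one of ordering: VIR_DEBUG may itself perform I/O and overwrite errno before the caller snapshots it into rc. A toy model of the hazard (the fake errno variable and function names are illustrative; in C this is the thread-local errno):

```python
import errno

_errno = 0  # toy stand-in for C's thread-local errno

def failing_write():
    """Simulate virFileWriteStr() failing: return -1 and set errno."""
    global _errno
    _errno = errno.EACCES
    return -1

def debug_log(msg):
    """Simulate VIR_DEBUG: logging may do I/O that resets errno."""
    global _errno
    _errno = 0

# Pre-patch order: log first, then snapshot errno -- the code is lost.
failing_write()
debug_log("Failed to write value")
rc_wrong = -_errno   # 0: EACCES was clobbered by the log call

# Post-patch order: snapshot errno first, then log.
failing_write()
rc_right = -_errno   # -EACCES, preserved
debug_log("Failed to write value")

print(rc_wrong, rc_right)
```

This is why the patch simply moves the `rc = -errno;` assignment above each logging call.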
---
src/util/cgroup.c | 12 ++++++------
1 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/src/util/cgroup.c b/src/util/cgroup.c
index 47c4633..b71eef9 100644
--- a/src/util/cgroup.c
+++ b/src/util/cgroup.c
@@ -290,8 +290,8 @@ static int virCgroupSetValueStr(virCgroupPtr group,
VIR_DEBUG("Set value '%s' to '%s'", keypath, value);
rc = virFileWriteStr(keypath, value, 0);
if (rc < 0) {
- DEBUG("Failed to write value '%s': %m", value);
rc = -errno;
+ VIR_DEBUG("Failed to write value '%s': %m", value);
} else {
rc = 0;
}
@@ -313,7 +313,7 @@ static int virCgroupGetValueStr(virCgroupPtr group,
rc = virCgroupPathOfController(group, controller, key, &keypath);
if (rc != 0) {
- DEBUG("No path of %s, %s", group->path, key);
+ VIR_DEBUG("No path of %s, %s", group->path, key);
return rc;
}
@@ -321,8 +321,8 @@ static int virCgroupGetValueStr(virCgroupPtr group,
rc = virFileReadAll(keypath, 1024, value);
if (rc < 0) {
- DEBUG("Failed to read %s: %m\n", keypath);
rc = -errno;
+ VIR_DEBUG("Failed to read %s: %m\n", keypath);
} else {
/* Terminated with '\n' has sometimes harmful effects to the caller */
char *p = strchr(*value, '\n');
@@ -635,8 +635,8 @@ static int virCgroupRemoveRecursively(char *grppath)
if (grpdir == NULL) {
if (errno == ENOENT)
return 0;
- VIR_ERROR(_("Unable to open %s (%d)"), grppath, errno);
rc = -errno;
+ VIR_ERROR(_("Unable to open %s (%d)"), grppath, errno);
return rc;
}
@@ -665,7 +665,7 @@ static int virCgroupRemoveRecursively(char *grppath)
}
closedir(grpdir);
- DEBUG("Removing cgroup %s", grppath);
+ VIR_DEBUG("Removing cgroup %s", grppath);
if (rmdir(grppath) != 0 && errno != ENOENT) {
rc = -errno;
VIR_ERROR(_("Unable to remove %s (%d)"), grppath, errno);
@@ -710,7 +710,7 @@ int virCgroupRemove(virCgroupPtr group)
&grppath) != 0)
continue;
- DEBUG("Removing cgroup %s and all child cgroups", grppath);
+ VIR_DEBUG("Removing cgroup %s and all child cgroups", grppath);
rc = virCgroupRemoveRecursively(grppath);
VIR_FREE(grppath);
}
--
1.7.4
As scheduled the release is out, and available from the site:
ftp://libvirt.org/libvirt/
Thanks everybody for the earlier testing on the release candidate series,
which was I think quite useful! So this release again mostly consists of
a very large batch of small improvements and bug fixes, plus a few new
features:
Features:
- sysinfo: expose new API (Eric Blake)
- cgroup blkio weight support. (Gui Jianfeng)
- smartcard device support (Eric Blake)
- qemu: Support per-device boot ordering (Jiri Denemark)
Documentation:
- docs: fix typos (Eric Blake)
- docs: added link for nimbus to apps page (Justin Clift)
- Update src/README (Matthias Bolte)
- docs: Add information about libvirt-php new location (Michal Novotny)
- Add libvirt-php information page (Michal Novotny)
- cgroup: Add documentation for blkiotune elements. (Gui Jianfeng)
- docs/index.html.in: update KVM url (Niels de Vos)
- docs/index.html.in: update QEMU url (Alon Levy)
- docs: more on qemu locking patterns (Eric Blake)
- docs: renamed hudson project link to jenkins, matching project rename (Justin Clift)
- docs: Update docs for cpu_shares setting (Osier Yang)
- docs: replace CRLF with LF (Juerg Haefliger)
- docs: Add docs for new extra parameter pkipath (Osier Yang)
- docs: expand the man page text for virsh setmaxmem (Justin Clift)
- docs: fix incorrect XML element mentioned by setmem text (Justin Clift)
- docs: add a link to the bindings page under the downloads menu item (Justin Clift)
- docs: document <controller> element (Eric Blake)
- docs: move the apps page to the top level as its good promo (Justin Clift)
- docs: added new entries to apps page, plus adjusted a few existing (Justin Clift)
- docs: document <sysinfo> and <smbios> elements (Eric Blake)
- datatypes: Fix outdated function names in the documentation (Matthias Bolte)
- Add documentation for VIR_DOMAIN_MEMORY_PARAM_UNLIMITED (Matthias Bolte)
- docs: Move the "Network Filtering" page one level up in the hierarchy (Matthias Bolte)
- docs: add buildbot to the apps page (Justin Clift)
- docs: add new conversion heading to the apps listing (Justin Clift)
- docs: updated windows page for new 0.8.7 installer (Justin Clift)
- docs: clarify virsh setvcpus and setmem usage with active domains (Justin Clift)
- Document HAP domain feature (Jim Fehlig)
- docs: fix trivial typos in currentMemory description (Justin Clift)
- doc: improve the documentation of desturi (Wen Congyang)
- docs: reorder apps page alphabetically, plus add libguestfs entries (Justin Clift)
- docs: add entry for archipel to the apps page (Justin Clift)
- docs: use xml entity encoding for extended character last name (Justin Clift)
- docs: updated memtune info again in virsh command reference (Justin Clift)
- docs: updated release of virsh cmd reference, with memtune info (Justin Clift)
- maint: document dislike of mismatched if/else bracing (Eric Blake)
- docs: added libvirt-announce to contact page (Justin Clift)
Portability:
- qemu: ignore failure of qemu -M ? on older qemu (Eric Blake)
- virsh: avoid mingw compiler warnings (Eric Blake)
- build: avoid problems with autogen.sh runs from tarball (Eric Blake)
- build: fix cygwin strerror_r failure (Eric Blake)
- Avoid pthread_sigmask on Win32 platforms (Daniel P. Berrange)
- Fix compilation when building without sasl (Daniel Veillard)
- build: fix parted detection at configure time (Eric Blake)
- Fix setup of lib directory with autogen.sh --system (Daniel P. Berrange)
- build: fix 'make check' with older git (Eric Blake)
- maint: support --no-git option during autogen.sh (Eric Blake)
- libvirt-guests: remove bashisms (Laurent Léonard)
- build: restore mingw build (Eric Blake)
- commandtest: avoid printing loader-control variables from commandhelper (Diego Elio Pettenò)
Bug Fixes:
- cgroup: preserve correct errno on failure (Eric Blake)
- qemu: Fix command line generation with faked host CPU (Jiri Denemark)
- tests: Fake host capabilities properly (Jiri Denemark)
- build: address clang reports about virCommand (Eric Blake)
- qemu: don't mask real error with oom report (Eric Blake)
- qemu: avoid NULL derefs (Eric Blake)
- virDomainMemoryStats: avoid null dereference (Eric Blake)
- Fix leak of mutex attributes in POSIX threads impl (Daniel P. Berrange)
- Fix leak in SCSI storage backend (Daniel P. Berrange)
- storage: Create enough volumes for mpath pool (Osier Yang)
- qemu: avoid NULL deref on error (Eric Blake)
- conf: Fix XML generation for smartcards (Jiri Denemark)
- Fix cleanup on VM state after failed QEMU startup (Daniel P. Berrange)
- libvirt-qemu: Fix enum type declaration (Jiri Denemark)
- xen: Prevent updating device when attaching a device (Osier Yang)
- qemu: Fix escape_monitor(escape_shell(command)) (Philipp Hahn)
- qemu: fix attach-interface regression (Wen Congyang)
- Fix typo in parsing of spice 'auth' data (Michal Privoznik)
- Reset logging filter function when forking (Daniel P. Berrange)
- Block SIGPIPE around virExec hook functions (Daniel P. Berrange)
- Only initialize/cleanup libpciaccess once (Daniel P. Berrange)
- macvtap: fix 2 nla_put expressions (non-serious bug) (Stefan Berger)
- qemu: avoid double shutdown (Eric Blake)
- Fix conflicts with glibc globals (Davidlohr Bueso)
- qemuBuildDeviceAddressStr() checks for QEMUD_CMD_FLAG_PCI_MULTIBUS (Niels de Vos)
- Don't sleep in poll() if there is existing SASL decoded data (Daniel P. Berrange)
- Initialization error of controller in QEmu SCSI hotplug (Wen Congyang)
- esx: Ensure max-memory has 4 megabyte granularity (Matthias Bolte)
- Remove double close of qemu monitor (Daniel P. Berrange)
- Prevent overfilling of self-pipe in python event loop (Daniel P. Berrange)
- avoid vm to be deleted if qemuConnectMonitor failed (Wen Congyang)
- tests: Fix virtio channel tests (Jiri Denemark)
- event: fix event-handling allocation crash (Eric Blake)
- storage: Round up capacity for LVM volume creation (Osier Yang)
- Do not use virtio-serial port 0 for generic ports (David Allan)
- Manually kill gzip if restore fails before starting qemu (Laine Stump)
- Set SELinux context label of pipes used for qemu migration (Laine Stump)
- virsh: require --mac to avoid detach-interface ambiguity (Michal Privoznik)
- dispatch error before return (Wen Congyang)
- event: fix event-handling data race (Eric Blake)
- qemu: Retry JSON monitor cont cmd on MigrationExpected error (Jim Fehlig)
- Fix startup with VNC password expiry on old QEMU (Daniel P. Berrange)
- Fix error reporting when machine type probe fails (Daniel P. Berrange)
- Avoid crash in security driver if model is NULL (Daniel P. Berrange)
- qemu: Fix a possible deadlock in p2p migration (Wen Congyang)
- qemu: Avoid sending STOPPED event twice (Jiri Denemark)
- spec: Start libvirt-guests only if it's on in current runlevel (Jiri Denemark)
- Increase size of driver table to make UML work again (Daniel P. Berrange)
- qemu: don't fail capabilities check on 0.12.x (Eric Blake)
- Fix 'make check' after commit 04197350 (Jim Fehlig)
- esx: Fix memory leak in HostSystem managed object free function (Matthias Bolte)
- qemu: Watchdog IB700 is not a PCI device (RHBZ#667091). (Richard W.M. Jones)
- cpu: plug memory leak (Eric Blake)
- network: plug memory leak (Eric Blake)
- network: plug unininitialized read found by valgrind (Eric Blake)
- remote: Don't lose track of events when callbacks are slow (Cole Robinson)
- conf: Report error if invalid type specified for character device (Osier Yang)
- daemon: Fix core dumps if unix_sock_group is set (Jiri Denemark)
- vbox: Use correct VRAM size unit (Matthias Bolte)
- bridge: Fix generation of dnsmasq's --dhcp-hostsfile option (Kay Schubert)
- qemu: Fix bogus warning about uninitialized saveptr (Jiri Denemark)
- Don't chown qemu saved image back to root after save if dynamic_ownership=0 (Laine Stump)
Improvements:
- maint: delete unused 'make install' step (Eric Blake)
- Update czech localization (Zdenek Styblik)
- Avoid empty strings when --with-packager(-version) is not specified (Matthias Bolte)
- Output commandline on status != 0 in virCommandWait (Matthias Bolte)
- add missing error handling to virGetDomain (Christophe Fergeau)
- call virReportOOMError when appropriate in hash.c (Christophe Fergeau)
- xml: avoid compiler warning (Eric Blake)
- nwfilter: reorder match extensions relative to state match (Stefan Berger)
- fix OOM handling in hash routines (Christophe Fergeau)
- docs: Distribute XSLT files to generate HACKING (Matthias Bolte)
- qemu: Report a more informative error for missing cgroup controllers (Matthias Bolte)
- Imprint all logs with version + package build information (Daniel P. Berrange)
- Reduce log level when cgroups aren't mounted (Daniel P. Berrange)
- Avoid warnings from nwfilter driver when run non-root (Daniel P. Berrange)
- build: distribute 'make syntax-check' tweaks (Eric Blake)
- Adjust some log levels in udev driver (Daniel P. Berrange)
- Add check for binary existing in machine type probe (Daniel P. Berrange)
- Add a little more debugging for async events (Daniel P. Berrange)
- Move connection driver modules directory (Daniel P. Berrange)
- Support SCSI RAID type & lower log level for unknown types (Daniel P. Berrange)
- Don't use CLONE_NEWUSER for now (Serge E. Hallyn)
- sysinfo: implement qemu support (Eric Blake)
- sysinfo: refactor xml formatting (Eric Blake)
- sysinfo: implement virsh support (Eric Blake)
- sysinfo: implement the remote protocol (Eric Blake)
- sysinfo: implement the public API (Eric Blake)
- sysinfo: define internal driver API (Eric Blake)
- LXC: LXC Blkio weight configuration support. (Gui Jianfeng)
- qemu: Implement blkio tunable XML configuration and parsing. (Gui Jianfeng)
- cgroup: Update XML Schema for new entries. (Gui Jianfeng)
- cgroup: Implement blkio.weight tuning API. (Gui Jianfeng)
- cgroup: Enable cgroup hierarchy for blkio cgroup (Gui Jianfeng)
- Update Dutch and Polish localizations (Daniel Veillard)
- Vietnamese translations for libvirt (Hero Phương)
- spicevmc: support older -device spicevmc of qemu 0.13.0 (Eric Blake)
- smartcard: add spicevmc support (Eric Blake)
- spicevmc: support new qemu chardev (Daniel P. Berrange)
- smartcard: turn on qemu support (Eric Blake)
- smartcard: enable SELinux support (Eric Blake)
- smartcard: check for qemu capability (Eric Blake)
- smartcard: add domain conf support (Eric Blake)
- smartcard: add XML support for <smartcard> device (Eric Blake)
- qemu: Support booting from hostdev PCI devices (Jiri Denemark)
- Support booting from hostdev devices (Jiri Denemark)
- qemu: Add shortcut for HMP pass through (Jiri Denemark)
- macvtap: fix variable in debugging output (Stefan Berger)
- qemu: Build command line for incoming tunneled migration (Osier Yang)
- bridge_driver: handle DNS over IPv6 (Paweł Krześniak)
- tests: handle backspace-newline pairs in test input files (Juerg Haefliger)
- qemu: More clear error parsing domain def failure of tunneled migration (Osier Yang)
- maint: reject raw close, popen in 'make syntax-check' (Eric Blake)
- build: avoid close, system (Eric Blake)
- Add VIR_DIV_UP to divide memory or storage request sizes with round up (Matthias Bolte)
- qemu: fix augeas support for vnc_auto_unix_socket (Eric Blake)
- virsh: added --all flag to freecell command (Michal Privoznik)
- esx: Don't try to change max-memory of an active domain (Matthias Bolte)
- qemu aio: enable support (Eric Blake)
- qemu aio: parse aio support from qemu -help (Matthias Dahl)
- qemu aio: add XML parsing (Matthias Dahl)
- Remove bogus log warning lines when launching QEMU (Daniel P. Berrange)
- qemu: fix error messages (Eric Blake)
- qemu: Report more accurate error on failure to attach device. (Hu Tao)
- Force guest suspend at timeout (Wen Congyang)
- Show migration progress. (Wen Congyang)
- Cancel migration if user presses Ctrl-C when migration is in progress (Hu Tao)
- qemu: use separate alias for chardev and associated device (Eric Blake)
- remote: Add extra parameter pkipath for URI (Osier Yang)
- Update localization files from Fedora i10n (Daniel Veillard)
- Add check for poll error events in monitor (Daniel P. Berrange)
- Filter out certain expected error messages from libvirtd (Daniel P. Berrange)
- Add a function to the security driver API that sets the label of an open fd. (Laine Stump)
- qemu: Error prompt when managed save a shutoff domain (Osier Yang)
- build: avoid corrupted gnulib/tests/Makefile (Eric Blake)
- qemu: sound: Support intel 'ich6' model (Cole Robinson)
- vmx: Use VIR_ERR_CONFIG_UNSUPPORTED when appropriated (Matthias Bolte)
- Push unapplied fixups for previous patch (Cole Robinson)
- qemu: Add conf option to auto setup VNC unix sockets (Cole Robinson)
- qemu: Allow serving VNC over a unix domain socket (Cole Robinson)
- qemu: Set domain def transient at beginning of startup process (Cole Robinson)
- qemu: report more proper error for unsupported graphics (Osier Yang)
- qemu: Fail if per-device boot is used but deviceboot is not supported (Jiri Denemark)
- Turn libvirt.c error reporting functions into macros (Daniel P. Berrange)
- build: use more gnulib modules for simpler code (Eric Blake)
- Remove two unused PATH_MAX-sized char arrays from the stack (Matthias Bolte)
- Use VIR_ERR_OPERATION_INVALID when appropriated (Matthias Bolte)
- Fix misuse of VIR_ERR_INVALID_* error code (Matthias Bolte)
- Simplify "NWFilterPool" to "NWFilter" (Matthias Bolte)
- datatypes: Get virSecretFreeName in sync with the other free functions (Matthias Bolte)
- qemu: use -incoming fd:n to avoid qemu holding fd indefinitely (Eric Blake)
- tests: Add tests for per-device boot elements (Jiri Denemark)
- Introduce per-device boot element (Jiri Denemark)
- conf: Move boot parsing into a separate function (Jiri Denemark)
- build: let xgettext see strings in libvirt-guests (Eric Blake)
- A couple of fixes for the search PHP code (Daniel Veillard)
- virsh: Use WITH_SECDRIVER_APPARMOR to detect AppArmor support (Matthias Bolte)
- memtune: Let virsh know the unlimited value for memory tunables (Nikunj A. Dadhania)
- maint: improve sc_prohibit_strncmp syntax check (Eric Blake)
- Enable tuning of qemu network tap device "sndbuf" size (Laine Stump)
- Add XML config switch to enable/disable vhost-net support (Laine Stump)
- Use the new set_password monitor command to set password. (Marc-André Lureau)
- qemu: add set_password and expire_password monitor commands (Marc-André Lureau)
- qemu: move monitor device out of domain_conf common code (Eric Blake)
- domain_conf: split source data out from ChrDef (Eric Blake)
- cpu: Add support for Westmere CPU model (Jiri Denemark)
- qemu: improve device flag parsing (Eric Blake)
- util: add missing string->integer conversion functions (Eric Blake)
- qemu: convert capabilities to use virCommand (Eric Blake)
- virsh: ensure --maximum flag used only with --config for setvcpus (Justin Clift)
- Add HAP to xen hypervisor capabilities (Jim Fehlig)
- Add support for HAP feature to xen drivers (Jim Fehlig)
- Add HAP to virDomainFeature enum (Jim Fehlig)
- tests: virsh is no longer in builddir/src (Eric Blake)
- virFindFileInPath: only find executable non-directory (Eric Blake)
- Fix old PHP syntax in the search online form (Daniel Veillard)
- report error when specifying wrong desturi (Wen Congyang)
- qemu: Reject SDL graphic if it's not supported by qemu (Osier Yang)
- vbox: Silently ignore missing registry key on Windows (Matthias Bolte)
- python: Use PyCapsule API if available (Cole Robinson)
- event-test: Simplify debug on/off (Cole Robinson)
- Refactor the security drivers to simplify usage (Daniel P. Berrange)
- Add AM_MAINTAINER_MODE (Guido Günther)
- esx: Move occurrence check into esxVI_LookupObjectContentByType (Matthias Bolte)
- esx: Add domain autostart support (Matthias Bolte)
- vmx: Add support for video device VRAM size (Matthias Bolte)
- API: Improve log for domain related APIs (Osier Yang)
- schema: tighten <serial><protocol type=...> relaxNG (Eric Blake)
- Log an error on attempts to add a NAT rule for non-IPv4 addresses (Laine Stump)
- Improve error reporting when parsing dhcp info for virtual networks (Laine Stump)
- qemu driver: fix positioning to end of log file (Stefan Berger)
- build: satisfy 'make syntax-check' regarding year change (Eric Blake)
Cleanups:
- build: silence some clang warnings (Eric Blake)
- maint: kill dead assignments (Eric Blake)
- build: silence false positive clang report (Eric Blake)
- maint: whitespace cleanup (Eric Blake)
- maint: update AUTHORS (Eric Blake)
- Prefer C style comments over C++ ones (Matthias Bolte)
- Revert all previous error log priority hacks (Daniel P. Berrange)
- Cleanup code style in logging APIs (Daniel P. Berrange)
- Remove redundant brackets around return values (Daniel P. Berrange)
- tests: Remove obsolete secaatest (Matthias Bolte)
- datatypes: avoid redundant __FUNCTION__ (Eric Blake)
Thanks everybody for the patches, reports and documentation improvements !
Daniel
--
Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
daniel(a)veillard.com | Rpmfind RPM search engine http://rpmfind.net/
http://veillard.com/ | virtualization library http://libvirt.org/
[libvirt] [PATCH 0/2] Add tx_alg attribute to interface XML for virtio backend
by Laine Stump 17 Feb '11
These two patches provide the ability to configure which of two
algorithms is used on the TX side of the virtio-net-pci
device. Details are in the comments for PATCH 2/2.
There are also at least a couple of open questions about the patch,
which are posed in the comments of 2/2. It's highly unlikely this
patch will be pushed as-is - I'm sending it mostly to solicit comments
on those questions.
17 Feb '11
---
docs/apibuild.py | 3194 +++++++++++++++++++++++++++---------------------------
1 files changed, 1597 insertions(+), 1597 deletions(-)
diff --git a/docs/apibuild.py b/docs/apibuild.py
index 895a313..506932d 100755
--- a/docs/apibuild.py
+++ b/docs/apibuild.py
@@ -72,34 +72,34 @@ class identifier:
def __init__(self, name, header=None, module=None, type=None, lineno = 0,
info=None, extra=None, conditionals = None):
self.name = name
- self.header = header
- self.module = module
- self.type = type
- self.info = info
- self.extra = extra
- self.lineno = lineno
- self.static = 0
- if conditionals == None or len(conditionals) == 0:
- self.conditionals = None
- else:
- self.conditionals = conditionals[:]
- if self.name == debugsym:
- print "=> define %s : %s" % (debugsym, (module, type, info,
- extra, conditionals))
+ self.header = header
+ self.module = module
+ self.type = type
+ self.info = info
+ self.extra = extra
+ self.lineno = lineno
+ self.static = 0
+ if conditionals == None or len(conditionals) == 0:
+ self.conditionals = None
+ else:
+ self.conditionals = conditionals[:]
+ if self.name == debugsym:
+ print "=> define %s : %s" % (debugsym, (module, type, info,
+ extra, conditionals))
def __repr__(self):
r = "%s %s:" % (self.type, self.name)
- if self.static:
- r = r + " static"
- if self.module != None:
- r = r + " from %s" % (self.module)
- if self.info != None:
- r = r + " " + `self.info`
- if self.extra != None:
- r = r + " " + `self.extra`
- if self.conditionals != None:
- r = r + " " + `self.conditionals`
- return r
+ if self.static:
+ r = r + " static"
+ if self.module != None:
+ r = r + " from %s" % (self.module)
+ if self.info != None:
+ r = r + " " + `self.info`
+ if self.extra != None:
+ r = r + " " + `self.extra`
+ if self.conditionals != None:
+ r = r + " " + `self.conditionals`
+ return r
def set_header(self, header):
@@ -117,10 +117,10 @@ class identifier:
def set_static(self, static):
self.static = static
def set_conditionals(self, conditionals):
- if conditionals == None or len(conditionals) == 0:
- self.conditionals = None
- else:
- self.conditionals = conditionals[:]
+ if conditionals == None or len(conditionals) == 0:
+ self.conditionals = None
+ else:
+ self.conditionals = conditionals[:]
def get_name(self):
return self.name
@@ -143,96 +143,96 @@ class identifier:
def update(self, header, module, type = None, info = None, extra=None,
conditionals=None):
- if self.name == debugsym:
- print "=> update %s : %s" % (debugsym, (module, type, info,
- extra, conditionals))
+ if self.name == debugsym:
+ print "=> update %s : %s" % (debugsym, (module, type, info,
+ extra, conditionals))
if header != None and self.header == None:
- self.set_header(module)
+ self.set_header(module)
if module != None and (self.module == None or self.header == self.module):
- self.set_module(module)
+ self.set_module(module)
if type != None and self.type == None:
- self.set_type(type)
+ self.set_type(type)
if info != None:
- self.set_info(info)
+ self.set_info(info)
if extra != None:
- self.set_extra(extra)
+ self.set_extra(extra)
if conditionals != None:
- self.set_conditionals(conditionals)
+ self.set_conditionals(conditionals)
class index:
def __init__(self, name = "noname"):
self.name = name
self.identifiers = {}
self.functions = {}
- self.variables = {}
- self.includes = {}
- self.structs = {}
- self.enums = {}
- self.typedefs = {}
- self.macros = {}
- self.references = {}
- self.info = {}
+ self.variables = {}
+ self.includes = {}
+ self.structs = {}
+ self.enums = {}
+ self.typedefs = {}
+ self.macros = {}
+ self.references = {}
+ self.info = {}
def add_ref(self, name, header, module, static, type, lineno, info=None, extra=None, conditionals = None):
if name[0:2] == '__':
- return None
+ return None
d = None
try:
- d = self.identifiers[name]
- d.update(header, module, type, lineno, info, extra, conditionals)
- except:
- d = identifier(name, header, module, type, lineno, info, extra, conditionals)
- self.identifiers[name] = d
+ d = self.identifiers[name]
+ d.update(header, module, type, lineno, info, extra, conditionals)
+ except:
+ d = identifier(name, header, module, type, lineno, info, extra, conditionals)
+ self.identifiers[name] = d
- if d != None and static == 1:
- d.set_static(1)
+ if d != None and static == 1:
+ d.set_static(1)
- if d != None and name != None and type != None:
- self.references[name] = d
+ if d != None and name != None and type != None:
+ self.references[name] = d
- if name == debugsym:
- print "New ref: %s" % (d)
+ if name == debugsym:
+ print "New ref: %s" % (d)
- return d
+ return d
def add(self, name, header, module, static, type, lineno, info=None, extra=None, conditionals = None):
if name[0:2] == '__':
- return None
+ return None
d = None
try:
- d = self.identifiers[name]
- d.update(header, module, type, lineno, info, extra, conditionals)
- except:
- d = identifier(name, header, module, type, lineno, info, extra, conditionals)
- self.identifiers[name] = d
-
- if d != None and static == 1:
- d.set_static(1)
-
- if d != None and name != None and type != None:
- if type == "function":
- self.functions[name] = d
- elif type == "functype":
- self.functions[name] = d
- elif type == "variable":
- self.variables[name] = d
- elif type == "include":
- self.includes[name] = d
- elif type == "struct":
- self.structs[name] = d
- elif type == "enum":
- self.enums[name] = d
- elif type == "typedef":
- self.typedefs[name] = d
- elif type == "macro":
- self.macros[name] = d
- else:
- print "Unable to register type ", type
-
- if name == debugsym:
- print "New symbol: %s" % (d)
-
- return d
+ d = self.identifiers[name]
+ d.update(header, module, type, lineno, info, extra, conditionals)
+ except:
+ d = identifier(name, header, module, type, lineno, info, extra, conditionals)
+ self.identifiers[name] = d
+
+ if d != None and static == 1:
+ d.set_static(1)
+
+ if d != None and name != None and type != None:
+ if type == "function":
+ self.functions[name] = d
+ elif type == "functype":
+ self.functions[name] = d
+ elif type == "variable":
+ self.variables[name] = d
+ elif type == "include":
+ self.includes[name] = d
+ elif type == "struct":
+ self.structs[name] = d
+ elif type == "enum":
+ self.enums[name] = d
+ elif type == "typedef":
+ self.typedefs[name] = d
+ elif type == "macro":
+ self.macros[name] = d
+ else:
+ print "Unable to register type ", type
+
+ if name == debugsym:
+ print "New symbol: %s" % (d)
+
+ return d
def merge(self, idx):
for id in idx.functions.keys():
@@ -240,41 +240,41 @@ class index:
# macro might be used to override functions or variables
# definitions
#
- if self.macros.has_key(id):
- del self.macros[id]
- if self.functions.has_key(id):
- print "function %s from %s redeclared in %s" % (
- id, self.functions[id].header, idx.functions[id].header)
- else:
- self.functions[id] = idx.functions[id]
- self.identifiers[id] = idx.functions[id]
+ if self.macros.has_key(id):
+ del self.macros[id]
+ if self.functions.has_key(id):
+ print "function %s from %s redeclared in %s" % (
+ id, self.functions[id].header, idx.functions[id].header)
+ else:
+ self.functions[id] = idx.functions[id]
+ self.identifiers[id] = idx.functions[id]
for id in idx.variables.keys():
#
# macro might be used to override functions or variables
# definitions
#
- if self.macros.has_key(id):
- del self.macros[id]
- if self.variables.has_key(id):
- print "variable %s from %s redeclared in %s" % (
- id, self.variables[id].header, idx.variables[id].header)
- else:
- self.variables[id] = idx.variables[id]
- self.identifiers[id] = idx.variables[id]
+ if self.macros.has_key(id):
+ del self.macros[id]
+ if self.variables.has_key(id):
+ print "variable %s from %s redeclared in %s" % (
+ id, self.variables[id].header, idx.variables[id].header)
+ else:
+ self.variables[id] = idx.variables[id]
+ self.identifiers[id] = idx.variables[id]
for id in idx.structs.keys():
- if self.structs.has_key(id):
- print "struct %s from %s redeclared in %s" % (
- id, self.structs[id].header, idx.structs[id].header)
- else:
- self.structs[id] = idx.structs[id]
- self.identifiers[id] = idx.structs[id]
+ if self.structs.has_key(id):
+ print "struct %s from %s redeclared in %s" % (
+ id, self.structs[id].header, idx.structs[id].header)
+ else:
+ self.structs[id] = idx.structs[id]
+ self.identifiers[id] = idx.structs[id]
for id in idx.typedefs.keys():
- if self.typedefs.has_key(id):
- print "typedef %s from %s redeclared in %s" % (
- id, self.typedefs[id].header, idx.typedefs[id].header)
- else:
- self.typedefs[id] = idx.typedefs[id]
- self.identifiers[id] = idx.typedefs[id]
+ if self.typedefs.has_key(id):
+ print "typedef %s from %s redeclared in %s" % (
+ id, self.typedefs[id].header, idx.typedefs[id].header)
+ else:
+ self.typedefs[id] = idx.typedefs[id]
+ self.identifiers[id] = idx.typedefs[id]
for id in idx.macros.keys():
#
# macro might be used to override functions or variables
@@ -286,88 +286,88 @@ class index:
continue
if self.enums.has_key(id):
continue
- if self.macros.has_key(id):
- print "macro %s from %s redeclared in %s" % (
- id, self.macros[id].header, idx.macros[id].header)
- else:
- self.macros[id] = idx.macros[id]
- self.identifiers[id] = idx.macros[id]
+ if self.macros.has_key(id):
+ print "macro %s from %s redeclared in %s" % (
+ id, self.macros[id].header, idx.macros[id].header)
+ else:
+ self.macros[id] = idx.macros[id]
+ self.identifiers[id] = idx.macros[id]
for id in idx.enums.keys():
- if self.enums.has_key(id):
- print "enum %s from %s redeclared in %s" % (
- id, self.enums[id].header, idx.enums[id].header)
- else:
- self.enums[id] = idx.enums[id]
- self.identifiers[id] = idx.enums[id]
+ if self.enums.has_key(id):
+ print "enum %s from %s redeclared in %s" % (
+ id, self.enums[id].header, idx.enums[id].header)
+ else:
+ self.enums[id] = idx.enums[id]
+ self.identifiers[id] = idx.enums[id]
def merge_public(self, idx):
for id in idx.functions.keys():
- if self.functions.has_key(id):
- # check that function condition agrees with header
- if idx.functions[id].conditionals != \
- self.functions[id].conditionals:
- print "Header condition differs from Function for %s:" \
- % id
- print " H: %s" % self.functions[id].conditionals
- print " C: %s" % idx.functions[id].conditionals
- up = idx.functions[id]
- self.functions[id].update(None, up.module, up.type, up.info, up.extra)
- # else:
- # print "Function %s from %s is not declared in headers" % (
- # id, idx.functions[id].module)
- # TODO: do the same for variables.
+ if self.functions.has_key(id):
+ # check that function condition agrees with header
+ if idx.functions[id].conditionals != \
+ self.functions[id].conditionals:
+ print "Header condition differs from Function for %s:" \
+ % id
+ print " H: %s" % self.functions[id].conditionals
+ print " C: %s" % idx.functions[id].conditionals
+ up = idx.functions[id]
+ self.functions[id].update(None, up.module, up.type, up.info, up.extra)
+ # else:
+ # print "Function %s from %s is not declared in headers" % (
+ # id, idx.functions[id].module)
+ # TODO: do the same for variables.
def analyze_dict(self, type, dict):
count = 0
- public = 0
+ public = 0
for name in dict.keys():
- id = dict[name]
- count = count + 1
- if id.static == 0:
- public = public + 1
+ id = dict[name]
+ count = count + 1
+ if id.static == 0:
+ public = public + 1
if count != public:
- print " %d %s , %d public" % (count, type, public)
- elif count != 0:
- print " %d public %s" % (count, type)
+ print " %d %s , %d public" % (count, type, public)
+ elif count != 0:
+ print " %d public %s" % (count, type)
def analyze(self):
- self.analyze_dict("functions", self.functions)
- self.analyze_dict("variables", self.variables)
- self.analyze_dict("structs", self.structs)
- self.analyze_dict("typedefs", self.typedefs)
- self.analyze_dict("macros", self.macros)
+ self.analyze_dict("functions", self.functions)
+ self.analyze_dict("variables", self.variables)
+ self.analyze_dict("structs", self.structs)
+ self.analyze_dict("typedefs", self.typedefs)
+ self.analyze_dict("macros", self.macros)
class CLexer:
"""A lexer for the C language, tokenize the input by reading and
analyzing it line by line"""
def __init__(self, input):
self.input = input
- self.tokens = []
- self.line = ""
- self.lineno = 0
+ self.tokens = []
+ self.line = ""
+ self.lineno = 0
def getline(self):
line = ''
- while line == '':
- line = self.input.readline()
- if not line:
- return None
- self.lineno = self.lineno + 1
- line = string.lstrip(line)
- line = string.rstrip(line)
- if line == '':
- continue
- while line[-1] == '\\':
- line = line[:-1]
- n = self.input.readline()
- self.lineno = self.lineno + 1
- n = string.lstrip(n)
- n = string.rstrip(n)
- if not n:
- break
- else:
- line = line + n
+ while line == '':
+ line = self.input.readline()
+ if not line:
+ return None
+ self.lineno = self.lineno + 1
+ line = string.lstrip(line)
+ line = string.rstrip(line)
+ if line == '':
+ continue
+ while line[-1] == '\\':
+ line = line[:-1]
+ n = self.input.readline()
+ self.lineno = self.lineno + 1
+ n = string.lstrip(n)
+ n = string.rstrip(n)
+ if not n:
+ break
+ else:
+ line = line + n
return line
def getlineno(self):
@@ -378,193 +378,193 @@ class CLexer:
def debug(self):
print "Last token: ", self.last
- print "Token queue: ", self.tokens
- print "Line %d end: " % (self.lineno), self.line
+ print "Token queue: ", self.tokens
+ print "Line %d end: " % (self.lineno), self.line
def token(self):
while self.tokens == []:
- if self.line == "":
- line = self.getline()
- else:
- line = self.line
- self.line = ""
- if line == None:
- return None
-
- if line[0] == '#':
- self.tokens = map((lambda x: ('preproc', x)),
- string.split(line))
- break;
- l = len(line)
- if line[0] == '"' or line[0] == "'":
- end = line[0]
- line = line[1:]
- found = 0
- tok = ""
- while found == 0:
- i = 0
- l = len(line)
- while i < l:
- if line[i] == end:
- self.line = line[i+1:]
- line = line[:i]
- l = i
- found = 1
- break
- if line[i] == '\\':
- i = i + 1
- i = i + 1
- tok = tok + line
- if found == 0:
- line = self.getline()
- if line == None:
- return None
- self.last = ('string', tok)
- return self.last
-
- if l >= 2 and line[0] == '/' and line[1] == '*':
- line = line[2:]
- found = 0
- tok = ""
- while found == 0:
- i = 0
- l = len(line)
- while i < l:
- if line[i] == '*' and i+1 < l and line[i+1] == '/':
- self.line = line[i+2:]
- line = line[:i-1]
- l = i
- found = 1
- break
- i = i + 1
- if tok != "":
- tok = tok + "\n"
- tok = tok + line
- if found == 0:
- line = self.getline()
- if line == None:
- return None
- self.last = ('comment', tok)
- return self.last
- if l >= 2 and line[0] == '/' and line[1] == '/':
- line = line[2:]
- self.last = ('comment', line)
- return self.last
- i = 0
- while i < l:
- if line[i] == '/' and i+1 < l and line[i+1] == '/':
- self.line = line[i:]
- line = line[:i]
- break
- if line[i] == '/' and i+1 < l and line[i+1] == '*':
- self.line = line[i:]
- line = line[:i]
- break
- if line[i] == '"' or line[i] == "'":
- self.line = line[i:]
- line = line[:i]
- break
- i = i + 1
- l = len(line)
- i = 0
- while i < l:
- if line[i] == ' ' or line[i] == '\t':
- i = i + 1
- continue
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57):
- s = i
- while i < l:
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57) or string.find(
- " \t(){}:;,+-*/%&!|[]=><", line[i]) == -1:
- i = i + 1
- else:
- break
- self.tokens.append(('name', line[s:i]))
- continue
- if string.find("(){}:;,[]", line[i]) != -1:
+ if self.line == "":
+ line = self.getline()
+ else:
+ line = self.line
+ self.line = ""
+ if line == None:
+ return None
+
+ if line[0] == '#':
+ self.tokens = map((lambda x: ('preproc', x)),
+ string.split(line))
+ break;
+ l = len(line)
+ if line[0] == '"' or line[0] == "'":
+ end = line[0]
+ line = line[1:]
+ found = 0
+ tok = ""
+ while found == 0:
+ i = 0
+ l = len(line)
+ while i < l:
+ if line[i] == end:
+ self.line = line[i+1:]
+ line = line[:i]
+ l = i
+ found = 1
+ break
+ if line[i] == '\\':
+ i = i + 1
+ i = i + 1
+ tok = tok + line
+ if found == 0:
+ line = self.getline()
+ if line == None:
+ return None
+ self.last = ('string', tok)
+ return self.last
+
+ if l >= 2 and line[0] == '/' and line[1] == '*':
+ line = line[2:]
+ found = 0
+ tok = ""
+ while found == 0:
+ i = 0
+ l = len(line)
+ while i < l:
+ if line[i] == '*' and i+1 < l and line[i+1] == '/':
+ self.line = line[i+2:]
+ line = line[:i-1]
+ l = i
+ found = 1
+ break
+ i = i + 1
+ if tok != "":
+ tok = tok + "\n"
+ tok = tok + line
+ if found == 0:
+ line = self.getline()
+ if line == None:
+ return None
+ self.last = ('comment', tok)
+ return self.last
+ if l >= 2 and line[0] == '/' and line[1] == '/':
+ line = line[2:]
+ self.last = ('comment', line)
+ return self.last
+ i = 0
+ while i < l:
+ if line[i] == '/' and i+1 < l and line[i+1] == '/':
+ self.line = line[i:]
+ line = line[:i]
+ break
+ if line[i] == '/' and i+1 < l and line[i+1] == '*':
+ self.line = line[i:]
+ line = line[:i]
+ break
+ if line[i] == '"' or line[i] == "'":
+ self.line = line[i:]
+ line = line[:i]
+ break
+ i = i + 1
+ l = len(line)
+ i = 0
+ while i < l:
+ if line[i] == ' ' or line[i] == '\t':
+ i = i + 1
+ continue
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57):
+ s = i
+ while i < l:
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57) or string.find(
+ " \t(){}:;,+-*/%&!|[]=><", line[i]) == -1:
+ i = i + 1
+ else:
+ break
+ self.tokens.append(('name', line[s:i]))
+ continue
+ if string.find("(){}:;,[]", line[i]) != -1:
# if line[i] == '(' or line[i] == ')' or line[i] == '{' or \
-# line[i] == '}' or line[i] == ':' or line[i] == ';' or \
-# line[i] == ',' or line[i] == '[' or line[i] == ']':
- self.tokens.append(('sep', line[i]))
- i = i + 1
- continue
- if string.find("+-*><=/%&!|.", line[i]) != -1:
+# line[i] == '}' or line[i] == ':' or line[i] == ';' or \
+# line[i] == ',' or line[i] == '[' or line[i] == ']':
+ self.tokens.append(('sep', line[i]))
+ i = i + 1
+ continue
+ if string.find("+-*><=/%&!|.", line[i]) != -1:
# if line[i] == '+' or line[i] == '-' or line[i] == '*' or \
-# line[i] == '>' or line[i] == '<' or line[i] == '=' or \
-# line[i] == '/' or line[i] == '%' or line[i] == '&' or \
-# line[i] == '!' or line[i] == '|' or line[i] == '.':
- if line[i] == '.' and i + 2 < l and \
- line[i+1] == '.' and line[i+2] == '.':
- self.tokens.append(('name', '...'))
- i = i + 3
- continue
-
- j = i + 1
- if j < l and (
- string.find("+-*><=/%&!|", line[j]) != -1):
-# line[j] == '+' or line[j] == '-' or line[j] == '*' or \
-# line[j] == '>' or line[j] == '<' or line[j] == '=' or \
-# line[j] == '/' or line[j] == '%' or line[j] == '&' or \
-# line[j] == '!' or line[j] == '|'):
- self.tokens.append(('op', line[i:j+1]))
- i = j + 1
- else:
- self.tokens.append(('op', line[i]))
- i = i + 1
- continue
- s = i
- while i < l:
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57) or (
- string.find(" \t(){}:;,+-*/%&!|[]=><", line[i]) == -1):
-# line[i] != ' ' and line[i] != '\t' and
-# line[i] != '(' and line[i] != ')' and
-# line[i] != '{' and line[i] != '}' and
-# line[i] != ':' and line[i] != ';' and
-# line[i] != ',' and line[i] != '+' and
-# line[i] != '-' and line[i] != '*' and
-# line[i] != '/' and line[i] != '%' and
-# line[i] != '&' and line[i] != '!' and
-# line[i] != '|' and line[i] != '[' and
-# line[i] != ']' and line[i] != '=' and
-# line[i] != '*' and line[i] != '>' and
-# line[i] != '<'):
- i = i + 1
- else:
- break
- self.tokens.append(('name', line[s:i]))
-
- tok = self.tokens[0]
- self.tokens = self.tokens[1:]
- self.last = tok
- return tok
+# line[i] == '>' or line[i] == '<' or line[i] == '=' or \
+# line[i] == '/' or line[i] == '%' or line[i] == '&' or \
+# line[i] == '!' or line[i] == '|' or line[i] == '.':
+ if line[i] == '.' and i + 2 < l and \
+ line[i+1] == '.' and line[i+2] == '.':
+ self.tokens.append(('name', '...'))
+ i = i + 3
+ continue
+
+ j = i + 1
+ if j < l and (
+ string.find("+-*><=/%&!|", line[j]) != -1):
+# line[j] == '+' or line[j] == '-' or line[j] == '*' or \
+# line[j] == '>' or line[j] == '<' or line[j] == '=' or \
+# line[j] == '/' or line[j] == '%' or line[j] == '&' or \
+# line[j] == '!' or line[j] == '|'):
+ self.tokens.append(('op', line[i:j+1]))
+ i = j + 1
+ else:
+ self.tokens.append(('op', line[i]))
+ i = i + 1
+ continue
+ s = i
+ while i < l:
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57) or (
+ string.find(" \t(){}:;,+-*/%&!|[]=><", line[i]) == -1):
+# line[i] != ' ' and line[i] != '\t' and
+# line[i] != '(' and line[i] != ')' and
+# line[i] != '{' and line[i] != '}' and
+# line[i] != ':' and line[i] != ';' and
+# line[i] != ',' and line[i] != '+' and
+# line[i] != '-' and line[i] != '*' and
+# line[i] != '/' and line[i] != '%' and
+# line[i] != '&' and line[i] != '!' and
+# line[i] != '|' and line[i] != '[' and
+# line[i] != ']' and line[i] != '=' and
+# line[i] != '*' and line[i] != '>' and
+# line[i] != '<'):
+ i = i + 1
+ else:
+ break
+ self.tokens.append(('name', line[s:i]))
+
+ tok = self.tokens[0]
+ self.tokens = self.tokens[1:]
+ self.last = tok
+ return tok
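Aside: the trickiest part of `CLexer.token()` above is the quoted-string scan: backslash escapes are skipped, and when no closing quote is found the literal continues on the next input line. That inner loop can be sketched standalone (hypothetical Python 3 sketch, not the patch's code):

```python
def scan_string(line, quote):
    # Scan a literal starting just after the opening quote character.
    # Returns (body, rest_of_line) when the closing quote is found,
    # or (line, None) when the literal spills onto the next line.
    i = 0
    while i < len(line):
        if line[i] == quote:
            return line[:i], line[i + 1:]
        if line[i] == '\\':
            i += 1  # skip the escaped character, e.g. \" or \'
        i += 1
    return line, None  # unterminated: caller must read another line
```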
class CParser:
"""The C module parser"""
def __init__(self, filename, idx = None):
self.filename = filename
- if len(filename) > 2 and filename[-2:] == '.h':
- self.is_header = 1
- else:
- self.is_header = 0
+ if len(filename) > 2 and filename[-2:] == '.h':
+ self.is_header = 1
+ else:
+ self.is_header = 0
self.input = open(filename)
- self.lexer = CLexer(self.input)
- if idx == None:
- self.index = index()
- else:
- self.index = idx
- self.top_comment = ""
- self.last_comment = ""
- self.comment = None
- self.collect_ref = 0
- self.no_error = 0
- self.conditionals = []
- self.defines = []
+ self.lexer = CLexer(self.input)
+ if idx == None:
+ self.index = index()
+ else:
+ self.index = idx
+ self.top_comment = ""
+ self.last_comment = ""
+ self.comment = None
+ self.collect_ref = 0
+ self.no_error = 0
+ self.conditionals = []
+ self.defines = []
def collect_references(self):
self.collect_ref = 1
@@ -579,203 +579,203 @@ class CParser:
return self.lexer.getlineno()
def index_add(self, name, module, static, type, info=None, extra = None):
- if self.is_header == 1:
- self.index.add(name, module, module, static, type, self.lineno(),
- info, extra, self.conditionals)
- else:
- self.index.add(name, None, module, static, type, self.lineno(),
- info, extra, self.conditionals)
+ if self.is_header == 1:
+ self.index.add(name, module, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
+ else:
+ self.index.add(name, None, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
def index_add_ref(self, name, module, static, type, info=None,
extra = None):
- if self.is_header == 1:
- self.index.add_ref(name, module, module, static, type,
- self.lineno(), info, extra, self.conditionals)
- else:
- self.index.add_ref(name, None, module, static, type, self.lineno(),
- info, extra, self.conditionals)
+ if self.is_header == 1:
+ self.index.add_ref(name, module, module, static, type,
+ self.lineno(), info, extra, self.conditionals)
+ else:
+ self.index.add_ref(name, None, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
def warning(self, msg):
if self.no_error:
- return
- print msg
+ return
+ print msg
def error(self, msg, token=-1):
if self.no_error:
- return
+ return
print "Parse Error: " + msg
- if token != -1:
- print "Got token ", token
- self.lexer.debug()
- sys.exit(1)
+ if token != -1:
+ print "Got token ", token
+ self.lexer.debug()
+ sys.exit(1)
def debug(self, msg, token=-1):
print "Debug: " + msg
- if token != -1:
- print "Got token ", token
- self.lexer.debug()
+ if token != -1:
+ print "Got token ", token
+ self.lexer.debug()
def parseTopComment(self, comment):
- res = {}
- lines = string.split(comment, "\n")
- item = None
- for line in lines:
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- while line != "" and line[0] == '*':
- line = line[1:]
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- try:
- (it, line) = string.split(line, ":", 1)
- item = it
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- if res.has_key(item):
- res[item] = res[item] + " " + line
- else:
- res[item] = line
- except:
- if item != None:
- if res.has_key(item):
- res[item] = res[item] + " " + line
- else:
- res[item] = line
- self.index.info = res
+ res = {}
+ lines = string.split(comment, "\n")
+ item = None
+ for line in lines:
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ while line != "" and line[0] == '*':
+ line = line[1:]
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ try:
+ (it, line) = string.split(line, ":", 1)
+ item = it
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ if res.has_key(item):
+ res[item] = res[item] + " " + line
+ else:
+ res[item] = line
+ except:
+ if item != None:
+ if res.has_key(item):
+ res[item] = res[item] + " " + line
+ else:
+ res[item] = line
+ self.index.info = res
def parseComment(self, token):
if self.top_comment == "":
- self.top_comment = token[1]
- if self.comment == None or token[1][0] == '*':
- self.comment = token[1];
- else:
- self.comment = self.comment + token[1]
- token = self.lexer.token()
+ self.top_comment = token[1]
+ if self.comment == None or token[1][0] == '*':
+ self.comment = token[1];
+ else:
+ self.comment = self.comment + token[1]
+ token = self.lexer.token()
if string.find(self.comment, "DOC_DISABLE") != -1:
- self.stop_error()
+ self.stop_error()
if string.find(self.comment, "DOC_ENABLE") != -1:
- self.start_error()
+ self.start_error()
- return token
+ return token
#
# Parse a comment block associate to a typedef
#
def parseTypeComment(self, name, quiet = 0):
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
args = []
- desc = ""
+ desc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for type %s" % (name))
- return((args, desc))
+ if not quiet:
+ self.warning("Missing comment for type %s" % (name))
+ return((args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in type comment for %s" % (name))
- return((args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted type comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return((args, desc))
- del lines[0]
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = ""
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- desc = desc + " " + l
- del lines[0]
-
- desc = string.strip(desc)
-
- if quiet == 0:
- if desc == "":
- self.warning("Type comment for %s lack description of the macro" % (name))
-
- return(desc)
+ if not quiet:
+ self.warning("Missing * in type comment for %s" % (name))
+ return((args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted type comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return((args, desc))
+ del lines[0]
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = ""
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ desc = desc + " " + l
+ del lines[0]
+
+ desc = string.strip(desc)
+
+ if quiet == 0:
+ if desc == "":
+ self.warning("Type comment for %s lack description of the macro" % (name))
+
+ return(desc)
#
# Parse a comment block associate to a macro
#
def parseMacroComment(self, name, quiet = 0):
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
args = []
- desc = ""
+ desc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for macro %s" % (name))
- return((args, desc))
+ if not quiet:
+ self.warning("Missing comment for macro %s" % (name))
+ return((args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in macro comment for %s" % (name))
- return((args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted macro comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return((args, desc))
- del lines[0]
- while lines[0] == '*':
- del lines[0]
- while len(lines) > 0 and lines[0][0:3] == '* @':
- l = lines[0][3:]
- try:
- (arg, desc) = string.split(l, ':', 1)
- desc=string.strip(desc)
- arg=string.strip(arg)
+ if not quiet:
+ self.warning("Missing * in macro comment for %s" % (name))
+ return((args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted macro comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return((args, desc))
+ del lines[0]
+ while lines[0] == '*':
+ del lines[0]
+ while len(lines) > 0 and lines[0][0:3] == '* @':
+ l = lines[0][3:]
+ try:
+ (arg, desc) = string.split(l, ':', 1)
+ desc=string.strip(desc)
+ arg=string.strip(arg)
except:
- if not quiet:
- self.warning("Misformatted macro comment for %s" % (name))
- self.warning(" problem with '%s'" % (lines[0]))
- del lines[0]
- continue
- del lines[0]
- l = string.strip(lines[0])
- while len(l) > 2 and l[0:3] != '* @':
- while l[0] == '*':
- l = l[1:]
- desc = desc + ' ' + string.strip(l)
- del lines[0]
- if len(lines) == 0:
- break
- l = lines[0]
+ if not quiet:
+ self.warning("Misformatted macro comment for %s" % (name))
+ self.warning(" problem with '%s'" % (lines[0]))
+ del lines[0]
+ continue
+ del lines[0]
+ l = string.strip(lines[0])
+ while len(l) > 2 and l[0:3] != '* @':
+ while l[0] == '*':
+ l = l[1:]
+ desc = desc + ' ' + string.strip(l)
+ del lines[0]
+ if len(lines) == 0:
+ break
+ l = lines[0]
args.append((arg, desc))
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = ""
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- desc = desc + " " + l
- del lines[0]
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = ""
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ desc = desc + " " + l
+ del lines[0]
- desc = string.strip(desc)
+ desc = string.strip(desc)
- if quiet == 0:
- if desc == "":
- self.warning("Macro comment for %s lack description of the macro" % (name))
+ if quiet == 0:
+ if desc == "":
+ self.warning("Macro comment for %s lack description of the macro" % (name))
- return((args, desc))
+ return((args, desc))
#
# Parse a comment block and merge the information found in the
@@ -786,219 +786,219 @@ class CParser:
global ignored_functions
if name == 'main':
- quiet = 1
+ quiet = 1
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
if ignored_functions.has_key(name):
quiet = 1
- (ret, args) = description
- desc = ""
- retdesc = ""
+ (ret, args) = description
+ desc = ""
+ retdesc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for function %s" % (name))
- return(((ret[0], retdesc), args, desc))
+ if not quiet:
+ self.warning("Missing comment for function %s" % (name))
+ return(((ret[0], retdesc), args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in function comment for %s" % (name))
- return(((ret[0], retdesc), args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted function comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return(((ret[0], retdesc), args, desc))
- del lines[0]
- while lines[0] == '*':
- del lines[0]
- nbargs = len(args)
- while len(lines) > 0 and lines[0][0:3] == '* @':
- l = lines[0][3:]
- try:
- (arg, desc) = string.split(l, ':', 1)
- desc=string.strip(desc)
- arg=string.strip(arg)
+ if not quiet:
+ self.warning("Missing * in function comment for %s" % (name))
+ return(((ret[0], retdesc), args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted function comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return(((ret[0], retdesc), args, desc))
+ del lines[0]
+ while lines[0] == '*':
+ del lines[0]
+ nbargs = len(args)
+ while len(lines) > 0 and lines[0][0:3] == '* @':
+ l = lines[0][3:]
+ try:
+ (arg, desc) = string.split(l, ':', 1)
+ desc=string.strip(desc)
+ arg=string.strip(arg)
except:
- if not quiet:
- self.warning("Misformatted function comment for %s" % (name))
- self.warning(" problem with '%s'" % (lines[0]))
- del lines[0]
- continue
- del lines[0]
- l = string.strip(lines[0])
- while len(l) > 2 and l[0:3] != '* @':
- while l[0] == '*':
- l = l[1:]
- desc = desc + ' ' + string.strip(l)
- del lines[0]
- if len(lines) == 0:
- break
- l = lines[0]
- i = 0
- while i < nbargs:
- if args[i][1] == arg:
- args[i] = (args[i][0], arg, desc)
- break;
- i = i + 1
- if i >= nbargs:
- if not quiet:
- self.warning("Unable to find arg %s from function comment for %s" % (
- arg, name))
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = None
- while len(lines) > 0:
- l = lines[0]
- i = 0
- # Remove all leading '*', followed by at most one ' ' character
- # since we need to preserve correct identation of code examples
- while i < len(l) and l[i] == '*':
- i = i + 1
- if i > 0:
- if i < len(l) and l[i] == ' ':
- i = i + 1
- l = l[i:]
- if len(l) >= 6 and l[0:7] == "returns" or l[0:7] == "Returns":
- try:
- l = string.split(l, ' ', 1)[1]
- except:
- l = ""
- retdesc = string.strip(l)
- del lines[0]
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- retdesc = retdesc + " " + l
- del lines[0]
- else:
- if desc is not None:
- desc = desc + "\n" + l
- else:
- desc = l
- del lines[0]
-
- if desc is None:
- desc = ""
- retdesc = string.strip(retdesc)
- desc = string.strip(desc)
-
- if quiet == 0:
- #
- # report missing comments
- #
- i = 0
- while i < nbargs:
- if args[i][2] == None and args[i][0] != "void" and args[i][1] != None:
- self.warning("Function comment for %s lacks description of arg %s" % (name, args[i][1]))
- i = i + 1
- if retdesc == "" and ret[0] != "void":
- self.warning("Function comment for %s lacks description of return value" % (name))
- if desc == "":
- self.warning("Function comment for %s lacks description of the function" % (name))
-
-
- return(((ret[0], retdesc), args, desc))
+ if not quiet:
+ self.warning("Misformatted function comment for %s" % (name))
+ self.warning(" problem with '%s'" % (lines[0]))
+ del lines[0]
+ continue
+ del lines[0]
+ l = string.strip(lines[0])
+ while len(l) > 2 and l[0:3] != '* @':
+ while l[0] == '*':
+ l = l[1:]
+ desc = desc + ' ' + string.strip(l)
+ del lines[0]
+ if len(lines) == 0:
+ break
+ l = lines[0]
+ i = 0
+ while i < nbargs:
+ if args[i][1] == arg:
+ args[i] = (args[i][0], arg, desc)
+ break;
+ i = i + 1
+ if i >= nbargs:
+ if not quiet:
+ self.warning("Unable to find arg %s from function comment for %s" % (
+ arg, name))
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = None
+ while len(lines) > 0:
+ l = lines[0]
+ i = 0
+ # Remove all leading '*', followed by at most one ' ' character
+ # since we need to preserve correct identation of code examples
+ while i < len(l) and l[i] == '*':
+ i = i + 1
+ if i > 0:
+ if i < len(l) and l[i] == ' ':
+ i = i + 1
+ l = l[i:]
+                     if len(l) >= 7 and (l[0:7] == "returns" or l[0:7] == "Returns"):
+ try:
+ l = string.split(l, ' ', 1)[1]
+ except:
+ l = ""
+ retdesc = string.strip(l)
+ del lines[0]
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ retdesc = retdesc + " " + l
+ del lines[0]
+ else:
+ if desc is not None:
+ desc = desc + "\n" + l
+ else:
+ desc = l
+ del lines[0]
+
+ if desc is None:
+ desc = ""
+ retdesc = string.strip(retdesc)
+ desc = string.strip(desc)
+
+ if quiet == 0:
+ #
+ # report missing comments
+ #
+ i = 0
+ while i < nbargs:
+ if args[i][2] == None and args[i][0] != "void" and args[i][1] != None:
+ self.warning("Function comment for %s lacks description of arg %s" % (name, args[i][1]))
+ i = i + 1
+ if retdesc == "" and ret[0] != "void":
+ self.warning("Function comment for %s lacks description of return value" % (name))
+ if desc == "":
+ self.warning("Function comment for %s lacks description of the function" % (name))
+
+
+ return(((ret[0], retdesc), args, desc))
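Aside: the description loop in `parseFunctionComment()` above strips leading `*` characters but eats at most one following space, so that indented code examples inside doc comments keep their indentation. That stripping step can be sketched standalone (hypothetical Python 3 sketch):

```python
def strip_comment_lead(l):
    # Drop every leading '*', then at most one space: "*   code"
    # becomes "  code", preserving the example's indentation.
    i = 0
    while i < len(l) and l[i] == '*':
        i += 1
    if 0 < i < len(l) and l[i] == ' ':
        i += 1
    return l[i:]
```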
def parsePreproc(self, token):
- if debug:
- print "=> preproc ", token, self.lexer.tokens
+ if debug:
+ print "=> preproc ", token, self.lexer.tokens
name = token[1]
- if name == "#include":
- token = self.lexer.token()
- if token == None:
- return None
- if token[0] == 'preproc':
- self.index_add(token[1], self.filename, not self.is_header,
- "include")
- return self.lexer.token()
- return token
- if name == "#define":
- token = self.lexer.token()
- if token == None:
- return None
- if token[0] == 'preproc':
- # TODO macros with arguments
- name = token[1]
- lst = []
- token = self.lexer.token()
- while token != None and token[0] == 'preproc' and \
- token[1][0] != '#':
- lst.append(token[1])
- token = self.lexer.token()
+ if name == "#include":
+ token = self.lexer.token()
+ if token == None:
+ return None
+ if token[0] == 'preproc':
+ self.index_add(token[1], self.filename, not self.is_header,
+ "include")
+ return self.lexer.token()
+ return token
+ if name == "#define":
+ token = self.lexer.token()
+ if token == None:
+ return None
+ if token[0] == 'preproc':
+ # TODO macros with arguments
+ name = token[1]
+ lst = []
+ token = self.lexer.token()
+ while token != None and token[0] == 'preproc' and \
+ token[1][0] != '#':
+ lst.append(token[1])
+ token = self.lexer.token()
try:
- name = string.split(name, '(') [0]
+ name = string.split(name, '(') [0]
except:
pass
info = self.parseMacroComment(name, not self.is_header)
- self.index_add(name, self.filename, not self.is_header,
- "macro", info)
- return token
-
- #
- # Processing of conditionals modified by Bill 1/1/05
- #
- # We process conditionals (i.e. tokens from #ifdef, #ifndef,
- # #if, #else and #endif) for headers and mainline code,
- # store the ones from the header in libxml2-api.xml, and later
- # (in the routine merge_public) verify that the two (header and
- # mainline code) agree.
- #
- # There is a small problem with processing the headers. Some of
- # the variables are not concerned with enabling / disabling of
- # library functions (e.g. '__XML_PARSER_H__'), and we don't want
- # them to be included in libxml2-api.xml, or involved in
- # the check between the header and the mainline code. To
- # accomplish this, we ignore any conditional which doesn't include
- # the string 'ENABLED'
- #
- if name == "#ifdef":
- apstr = self.lexer.tokens[0][1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append("defined(%s)" % apstr)
- except:
- pass
- elif name == "#ifndef":
- apstr = self.lexer.tokens[0][1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append("!defined(%s)" % apstr)
- except:
- pass
- elif name == "#if":
- apstr = ""
- for tok in self.lexer.tokens:
- if apstr != "":
- apstr = apstr + " "
- apstr = apstr + tok[1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append(apstr)
- except:
- pass
- elif name == "#else":
- if self.conditionals != [] and \
- string.find(self.defines[-1], 'ENABLED') != -1:
- self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
- elif name == "#endif":
- if self.conditionals != [] and \
- string.find(self.defines[-1], 'ENABLED') != -1:
- self.conditionals = self.conditionals[:-1]
- self.defines = self.defines[:-1]
- token = self.lexer.token()
- while token != None and token[0] == 'preproc' and \
- token[1][0] != '#':
- token = self.lexer.token()
- return token
+ self.index_add(name, self.filename, not self.is_header,
+ "macro", info)
+ return token
+
+ #
+ # Processing of conditionals modified by Bill 1/1/05
+ #
+ # We process conditionals (i.e. tokens from #ifdef, #ifndef,
+ # #if, #else and #endif) for headers and mainline code,
+ # store the ones from the header in libxml2-api.xml, and later
+ # (in the routine merge_public) verify that the two (header and
+ # mainline code) agree.
+ #
+ # There is a small problem with processing the headers. Some of
+ # the variables are not concerned with enabling / disabling of
+ # library functions (e.g. '__XML_PARSER_H__'), and we don't want
+ # them to be included in libxml2-api.xml, or involved in
+ # the check between the header and the mainline code. To
+ # accomplish this, we ignore any conditional which doesn't include
+ # the string 'ENABLED'
+ #
+ if name == "#ifdef":
+ apstr = self.lexer.tokens[0][1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append("defined(%s)" % apstr)
+ except:
+ pass
+ elif name == "#ifndef":
+ apstr = self.lexer.tokens[0][1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append("!defined(%s)" % apstr)
+ except:
+ pass
+ elif name == "#if":
+ apstr = ""
+ for tok in self.lexer.tokens:
+ if apstr != "":
+ apstr = apstr + " "
+ apstr = apstr + tok[1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append(apstr)
+ except:
+ pass
+ elif name == "#else":
+ if self.conditionals != [] and \
+ string.find(self.defines[-1], 'ENABLED') != -1:
+ self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
+ elif name == "#endif":
+ if self.conditionals != [] and \
+ string.find(self.defines[-1], 'ENABLED') != -1:
+ self.conditionals = self.conditionals[:-1]
+ self.defines = self.defines[:-1]
+ token = self.lexer.token()
+ while token != None and token[0] == 'preproc' and \
+ token[1][0] != '#':
+ token = self.lexer.token()
+ return token
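The conditional bookkeeping described in the comment block can be summarized in isolation. The sketch below (a hypothetical helper, not part of the patch) mirrors the rule: every `#ifdef`/`#ifndef` symbol is pushed on the `defines` stack, but only symbols containing 'ENABLED' contribute to the `conditionals` expression stack used for the header/mainline consistency check:

```python
def track_conditional(defines, conditionals, directive, symbol=None):
    """Sketch of the #ifdef/#ifndef/#else/#endif stack handling above;
    guard macros without 'ENABLED' are tracked but never recorded."""
    if directive == "#ifdef":
        defines.append(symbol)
        if 'ENABLED' in symbol:
            conditionals.append("defined(%s)" % symbol)
    elif directive == "#ifndef":
        defines.append(symbol)
        if 'ENABLED' in symbol:
            conditionals.append("!defined(%s)" % symbol)
    elif directive == "#else":
        if conditionals and 'ENABLED' in defines[-1]:
            conditionals[-1] = "!(%s)" % conditionals[-1]
    elif directive == "#endif":
        if conditionals and 'ENABLED' in defines[-1]:
            conditionals.pop()
        defines.pop()
```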
#
# token acquisition on top of the lexer; it handles internally
@@ -1012,89 +1012,89 @@ class CParser:
global ignored_words
token = self.lexer.token()
- while token != None:
- if token[0] == 'comment':
- token = self.parseComment(token)
- continue
- elif token[0] == 'preproc':
- token = self.parsePreproc(token)
- continue
- elif token[0] == "name" and token[1] == "__const":
- token = ("name", "const")
- return token
- elif token[0] == "name" and token[1] == "__attribute":
- token = self.lexer.token()
- while token != None and token[1] != ";":
- token = self.lexer.token()
- return token
- elif token[0] == "name" and ignored_words.has_key(token[1]):
- (n, info) = ignored_words[token[1]]
- i = 0
- while i < n:
- token = self.lexer.token()
- i = i + 1
- token = self.lexer.token()
- continue
- else:
- if debug:
- print "=> ", token
- return token
- return None
+ while token != None:
+ if token[0] == 'comment':
+ token = self.parseComment(token)
+ continue
+ elif token[0] == 'preproc':
+ token = self.parsePreproc(token)
+ continue
+ elif token[0] == "name" and token[1] == "__const":
+ token = ("name", "const")
+ return token
+ elif token[0] == "name" and token[1] == "__attribute":
+ token = self.lexer.token()
+ while token != None and token[1] != ";":
+ token = self.lexer.token()
+ return token
+ elif token[0] == "name" and ignored_words.has_key(token[1]):
+ (n, info) = ignored_words[token[1]]
+ i = 0
+ while i < n:
+ token = self.lexer.token()
+ i = i + 1
+ token = self.lexer.token()
+ continue
+ else:
+ if debug:
+ print "=> ", token
+ return token
+ return None
#
# Parse a typedef; it records the type and its name.
#
def parseTypedef(self, token):
if token == None:
- return None
- token = self.parseType(token)
- if token == None:
- self.error("parsing typedef")
- return None
- base_type = self.type
- type = base_type
- #self.debug("end typedef type", token)
- while token != None:
- if token[0] == "name":
- name = token[1]
- signature = self.signature
- if signature != None:
- type = string.split(type, '(')[0]
- d = self.mergeFunctionComment(name,
- ((type, None), signature), 1)
- self.index_add(name, self.filename, not self.is_header,
- "functype", d)
- else:
- if base_type == "struct":
- self.index_add(name, self.filename, not self.is_header,
- "struct", type)
- base_type = "struct " + name
- else:
- # TODO report missing or misformatted comments
- info = self.parseTypeComment(name, 1)
- self.index_add(name, self.filename, not self.is_header,
- "typedef", type, info)
- token = self.token()
- else:
- self.error("parsing typedef: expecting a name")
- return token
- #self.debug("end typedef", token)
- if token != None and token[0] == 'sep' and token[1] == ',':
- type = base_type
- token = self.token()
- while token != None and token[0] == "op":
- type = type + token[1]
- token = self.token()
- elif token != None and token[0] == 'sep' and token[1] == ';':
- break;
- elif token != None and token[0] == 'name':
- type = base_type
- continue;
- else:
- self.error("parsing typedef: expecting ';'", token)
- return token
- token = self.token()
- return token
+ return None
+ token = self.parseType(token)
+ if token == None:
+ self.error("parsing typedef")
+ return None
+ base_type = self.type
+ type = base_type
+ #self.debug("end typedef type", token)
+ while token != None:
+ if token[0] == "name":
+ name = token[1]
+ signature = self.signature
+ if signature != None:
+ type = string.split(type, '(')[0]
+ d = self.mergeFunctionComment(name,
+ ((type, None), signature), 1)
+ self.index_add(name, self.filename, not self.is_header,
+ "functype", d)
+ else:
+ if base_type == "struct":
+ self.index_add(name, self.filename, not self.is_header,
+ "struct", type)
+ base_type = "struct " + name
+ else:
+ # TODO report missing or misformatted comments
+ info = self.parseTypeComment(name, 1)
+ self.index_add(name, self.filename, not self.is_header,
+ "typedef", type, info)
+ token = self.token()
+ else:
+ self.error("parsing typedef: expecting a name")
+ return token
+ #self.debug("end typedef", token)
+ if token != None and token[0] == 'sep' and token[1] == ',':
+ type = base_type
+ token = self.token()
+ while token != None and token[0] == "op":
+ type = type + token[1]
+ token = self.token()
+ elif token != None and token[0] == 'sep' and token[1] == ';':
+ break;
+ elif token != None and token[0] == 'name':
+ type = base_type
+ continue;
+ else:
+ self.error("parsing typedef: expecting ';'", token)
+ return token
+ token = self.token()
+ return token
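parseTypedef above dispatches each typedef into one of three index categories. The decision can be condensed into a tiny sketch (the helper name is hypothetical, not part of the parser): a typedef carrying a function signature is indexed as a "functype", a bare struct typedef as a "struct", and anything else as a plain "typedef":

```python
def classify_typedef(base_type, signature):
    """Sketch of the index_add dispatch in parseTypedef."""
    if signature is not None:   # e.g. typedef int (*cb)(void *)
        return "functype"
    if base_type == "struct":   # e.g. typedef struct _xmlNode xmlNode
        return "struct"
    return "typedef"            # e.g. typedef unsigned char xmlChar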
#
# Parse a C code block, used for functions; it parses till
@@ -1102,138 +1102,138 @@ class CParser:
#
def parseBlock(self, token):
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- self.comment = None
- token = self.token()
- return token
- else:
- if self.collect_ref == 1:
- oldtok = token
- token = self.token()
- if oldtok[0] == "name" and oldtok[1][0:3] == "vir":
- if token[0] == "sep" and token[1] == "(":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "function")
- token = self.token()
- elif token[0] == "name":
- token = self.token()
- if token[0] == "sep" and (token[1] == ";" or
- token[1] == "," or token[1] == "="):
- self.index_add_ref(oldtok[1], self.filename,
- 0, "type")
- elif oldtok[0] == "name" and oldtok[1][0:4] == "XEN_":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "typedef")
- elif oldtok[0] == "name" and oldtok[1][0:7] == "LIBXEN_":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "typedef")
-
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ self.comment = None
+ token = self.token()
+ return token
+ else:
+ if self.collect_ref == 1:
+ oldtok = token
+ token = self.token()
+ if oldtok[0] == "name" and oldtok[1][0:3] == "vir":
+ if token[0] == "sep" and token[1] == "(":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "function")
+ token = self.token()
+ elif token[0] == "name":
+ token = self.token()
+ if token[0] == "sep" and (token[1] == ";" or
+ token[1] == "," or token[1] == "="):
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "type")
+ elif oldtok[0] == "name" and oldtok[1][0:4] == "XEN_":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "typedef")
+ elif oldtok[0] == "name" and oldtok[1][0:7] == "LIBXEN_":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "typedef")
+
+ else:
+ token = self.token()
+ return token
#
# Parse a C struct definition till the balancing }
#
def parseStruct(self, token):
fields = []
- #self.debug("start parseStruct", token)
+ #self.debug("start parseStruct", token)
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- self.struct_fields = fields
- #self.debug("end parseStruct", token)
- #print fields
- token = self.token()
- return token
- else:
- base_type = self.type
- #self.debug("before parseType", token)
- token = self.parseType(token)
- #self.debug("after parseType", token)
- if token != None and token[0] == "name":
- fname = token[1]
- token = self.token()
- if token[0] == "sep" and token[1] == ";":
- self.comment = None
- token = self.token()
- fields.append((self.type, fname, self.comment))
- self.comment = None
- else:
- self.error("parseStruct: expecting ;", token)
- elif token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- if token != None and token[0] == "name":
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == ";":
- token = self.token()
- else:
- self.error("parseStruct: expecting ;", token)
- else:
- self.error("parseStruct: name", token)
- token = self.token()
- self.type = base_type;
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ self.struct_fields = fields
+ #self.debug("end parseStruct", token)
+ #print fields
+ token = self.token()
+ return token
+ else:
+ base_type = self.type
+ #self.debug("before parseType", token)
+ token = self.parseType(token)
+ #self.debug("after parseType", token)
+ if token != None and token[0] == "name":
+ fname = token[1]
+ token = self.token()
+ if token[0] == "sep" and token[1] == ";":
+ self.comment = None
+ token = self.token()
+ fields.append((self.type, fname, self.comment))
+ self.comment = None
+ else:
+ self.error("parseStruct: expecting ;", token)
+ elif token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ if token != None and token[0] == "name":
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == ";":
+ token = self.token()
+ else:
+ self.error("parseStruct: expecting ;", token)
+ else:
+ self.error("parseStruct: name", token)
+ token = self.token()
+ self.type = base_type;
self.struct_fields = fields
- #self.debug("end parseStruct", token)
- #print fields
- return token
+ #self.debug("end parseStruct", token)
+ #print fields
+ return token
#
# Parse a C enum block, parsing till the balancing }
#
def parseEnumBlock(self, token):
self.enums = []
- name = None
- self.comment = None
- comment = ""
- value = "0"
+ name = None
+ self.comment = None
+ comment = ""
+ value = "0"
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- if name != None:
- if self.comment != None:
- comment = self.comment
- self.comment = None
- self.enums.append((name, value, comment))
- token = self.token()
- return token
- elif token[0] == "name":
- if name != None:
- if self.comment != None:
- comment = string.strip(self.comment)
- self.comment = None
- self.enums.append((name, value, comment))
- name = token[1]
- comment = ""
- token = self.token()
- if token[0] == "op" and token[1][0] == "=":
- value = ""
- if len(token[1]) > 1:
- value = token[1][1:]
- token = self.token()
- while token[0] != "sep" or (token[1] != ',' and
- token[1] != '}'):
- value = value + token[1]
- token = self.token()
- else:
- try:
- value = "%d" % (int(value) + 1)
- except:
- self.warning("Failed to compute value of enum %s" % (name))
- value=""
- if token[0] == "sep" and token[1] == ",":
- token = self.token()
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ if name != None:
+ if self.comment != None:
+ comment = self.comment
+ self.comment = None
+ self.enums.append((name, value, comment))
+ token = self.token()
+ return token
+ elif token[0] == "name":
+ if name != None:
+ if self.comment != None:
+ comment = string.strip(self.comment)
+ self.comment = None
+ self.enums.append((name, value, comment))
+ name = token[1]
+ comment = ""
+ token = self.token()
+ if token[0] == "op" and token[1][0] == "=":
+ value = ""
+ if len(token[1]) > 1:
+ value = token[1][1:]
+ token = self.token()
+ while token[0] != "sep" or (token[1] != ',' and
+ token[1] != '}'):
+ value = value + token[1]
+ token = self.token()
+ else:
+ try:
+ value = "%d" % (int(value) + 1)
+ except:
+ self.warning("Failed to compute value of enum %s" % (name))
+ value=""
+ if token[0] == "sep" and token[1] == ",":
+ token = self.token()
+ else:
+ token = self.token()
+ return token
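parseEnumBlock keeps enumerator values as strings (so expressions like `1 << 2` survive verbatim) and warns when it cannot compute a successor value. The plain numeric C rule it approximates can be sketched as follows, with a hypothetical helper name:

```python
def enum_values(members):
    """Sketch of the C enumerator rule parseEnumBlock approximates:
    an explicit value wins, otherwise previous + 1 (first defaults to 0).
    `members` is a list of (name, explicit_value_or_None) pairs."""
    result = []
    prev = -1
    for name, explicit in members:
        if explicit is not None:
            prev = int(explicit)
        else:
            prev += 1
        result.append((name, prev))
    return result
```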
#
# Parse a C definition block, used for structs; it parses till
@@ -1241,15 +1241,15 @@ class CParser:
#
def parseTypeBlock(self, token):
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- token = self.token()
- return token
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ token = self.token()
+ return token
+ else:
+ token = self.token()
+ return token
#
# Parse a type: the fact that the type name can either occur after
@@ -1258,221 +1258,221 @@ class CParser:
#
def parseType(self, token):
self.type = ""
- self.struct_fields = []
+ self.struct_fields = []
self.signature = None
- if token == None:
- return token
-
- while token[0] == "name" and (
- token[1] == "const" or \
- token[1] == "unsigned" or \
- token[1] == "signed"):
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- token = self.token()
+ if token == None:
+ return token
+
+ while token[0] == "name" and (
+ token[1] == "const" or \
+ token[1] == "unsigned" or \
+ token[1] == "signed"):
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ token = self.token()
if token[0] == "name" and token[1] == "long":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
-
- # some read ahead for long long
- oldtmp = token
- token = self.token()
- if token[0] == "name" and token[1] == "long":
- self.type = self.type + " " + token[1]
- else:
- self.push(token)
- token = oldtmp
-
- if token[0] == "name" and token[1] == "int":
- if self.type == "":
- self.type = tmp[1]
- else:
- self.type = self.type + " " + tmp[1]
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+
+ # some read ahead for long long
+ oldtmp = token
+ token = self.token()
+ if token[0] == "name" and token[1] == "long":
+ self.type = self.type + " " + token[1]
+ else:
+ self.push(token)
+ token = oldtmp
+
+ if token[0] == "name" and token[1] == "int":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
elif token[0] == "name" and token[1] == "short":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- if token[0] == "name" and token[1] == "int":
- if self.type == "":
- self.type = tmp[1]
- else:
- self.type = self.type + " " + tmp[1]
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ if token[0] == "name" and token[1] == "int":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
elif token[0] == "name" and token[1] == "struct":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- token = self.token()
- nametok = None
- if token[0] == "name":
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseStruct(token)
- elif token != None and token[0] == "op" and token[1] == "*":
- self.type = self.type + " " + nametok[1] + " *"
- token = self.token()
- while token != None and token[0] == "op" and token[1] == "*":
- self.type = self.type + " *"
- token = self.token()
- if token[0] == "name":
- nametok = token
- token = self.token()
- else:
- self.error("struct : expecting name", token)
- return token
- elif token != None and token[0] == "name" and nametok != None:
- self.type = self.type + " " + nametok[1]
- return token
-
- if nametok != None:
- self.lexer.push(token)
- token = nametok
- return token
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ token = self.token()
+ nametok = None
+ if token[0] == "name":
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseStruct(token)
+ elif token != None and token[0] == "op" and token[1] == "*":
+ self.type = self.type + " " + nametok[1] + " *"
+ token = self.token()
+ while token != None and token[0] == "op" and token[1] == "*":
+ self.type = self.type + " *"
+ token = self.token()
+ if token[0] == "name":
+ nametok = token
+ token = self.token()
+ else:
+ self.error("struct : expecting name", token)
+ return token
+ elif token != None and token[0] == "name" and nametok != None:
+ self.type = self.type + " " + nametok[1]
+ return token
+
+ if nametok != None:
+ self.lexer.push(token)
+ token = nametok
+ return token
elif token[0] == "name" and token[1] == "enum":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- self.enums = []
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseEnumBlock(token)
- else:
- self.error("parsing enum: expecting '{'", token)
- enum_type = None
- if token != None and token[0] != "name":
- self.lexer.push(token)
- token = ("name", "enum")
- else:
- enum_type = token[1]
- for enum in self.enums:
- self.index_add(enum[0], self.filename,
- not self.is_header, "enum",
- (enum[1], enum[2], enum_type))
- return token
-
- elif token[0] == "name":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- else:
- self.error("parsing type %s: expecting a name" % (self.type),
- token)
- return token
- token = self.token()
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ self.enums = []
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseEnumBlock(token)
+ else:
+ self.error("parsing enum: expecting '{'", token)
+ enum_type = None
+ if token != None and token[0] != "name":
+ self.lexer.push(token)
+ token = ("name", "enum")
+ else:
+ enum_type = token[1]
+ for enum in self.enums:
+ self.index_add(enum[0], self.filename,
+ not self.is_header, "enum",
+ (enum[1], enum[2], enum_type))
+ return token
+
+ elif token[0] == "name":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ else:
+ self.error("parsing type %s: expecting a name" % (self.type),
+ token)
+ return token
+ token = self.token()
while token != None and (token[0] == "op" or
- token[0] == "name" and token[1] == "const"):
- self.type = self.type + " " + token[1]
- token = self.token()
-
- #
- # if there is a parenthesis here, this means a function type
- #
- if token != None and token[0] == "sep" and token[1] == '(':
- self.type = self.type + token[1]
- token = self.token()
- while token != None and token[0] == "op" and token[1] == '*':
- self.type = self.type + token[1]
- token = self.token()
- if token == None or token[0] != "name" :
- self.error("parsing function type, name expected", token);
- return token
- self.type = self.type + token[1]
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == ')':
- self.type = self.type + token[1]
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == '(':
- token = self.token()
- type = self.type;
- token = self.parseSignature(token);
- self.type = type;
- else:
- self.error("parsing function type, '(' expected", token);
- return token
- else:
- self.error("parsing function type, ')' expected", token);
- return token
- self.lexer.push(token)
- token = nametok
- return token
+ token[0] == "name" and token[1] == "const"):
+ self.type = self.type + " " + token[1]
+ token = self.token()
#
- # do some lookahead for arrays
- #
- if token != None and token[0] == "name":
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == '[':
- self.type = self.type + nametok[1]
- while token != None and token[0] == "sep" and token[1] == '[':
- self.type = self.type + token[1]
- token = self.token()
- while token != None and token[0] != 'sep' and \
- token[1] != ']' and token[1] != ';':
- self.type = self.type + token[1]
- token = self.token()
- if token != None and token[0] == 'sep' and token[1] == ']':
- self.type = self.type + token[1]
- token = self.token()
- else:
- self.error("parsing array type, ']' expected", token);
- return token
- elif token != None and token[0] == "sep" and token[1] == ':':
- # remove :12 in case it's a limited int size
- token = self.token()
- token = self.token()
- self.lexer.push(token)
- token = nametok
-
- return token
+ # if there is a parenthesis here, this means a function type
+ #
+ if token != None and token[0] == "sep" and token[1] == '(':
+ self.type = self.type + token[1]
+ token = self.token()
+ while token != None and token[0] == "op" and token[1] == '*':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token == None or token[0] != "name" :
+ self.error("parsing function type, name expected", token);
+ return token
+ self.type = self.type + token[1]
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == ')':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == '(':
+ token = self.token()
+ type = self.type;
+ token = self.parseSignature(token);
+ self.type = type;
+ else:
+ self.error("parsing function type, '(' expected", token);
+ return token
+ else:
+ self.error("parsing function type, ')' expected", token);
+ return token
+ self.lexer.push(token)
+ token = nametok
+ return token
+
+ #
+ # do some lookahead for arrays
+ #
+ if token != None and token[0] == "name":
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == '[':
+ self.type = self.type + nametok[1]
+ while token != None and token[0] == "sep" and token[1] == '[':
+ self.type = self.type + token[1]
+ token = self.token()
+ while token != None and token[0] != 'sep' and \
+ token[1] != ']' and token[1] != ';':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token != None and token[0] == 'sep' and token[1] == ']':
+ self.type = self.type + token[1]
+ token = self.token()
+ else:
+ self.error("parsing array type, ']' expected", token);
+ return token
+ elif token != None and token[0] == "sep" and token[1] == ':':
+ # remove :12 in case it's a limited int size
+ token = self.token()
+ token = self.token()
+ self.lexer.push(token)
+ token = nametok
+
+ return token
#
# Parse a signature: '(' has been parsed and we scan the type definition
# up to the ')' included
def parseSignature(self, token):
signature = []
- if token != None and token[0] == "sep" and token[1] == ')':
- self.signature = []
- token = self.token()
- return token
- while token != None:
- token = self.parseType(token)
- if token != None and token[0] == "name":
- signature.append((self.type, token[1], None))
- token = self.token()
- elif token != None and token[0] == "sep" and token[1] == ',':
- token = self.token()
- continue
- elif token != None and token[0] == "sep" and token[1] == ')':
- # only the type was provided
- if self.type == "...":
- signature.append((self.type, "...", None))
- else:
- signature.append((self.type, None, None))
- if token != None and token[0] == "sep":
- if token[1] == ',':
- token = self.token()
- continue
- elif token[1] == ')':
- token = self.token()
- break
- self.signature = signature
- return token
+ if token != None and token[0] == "sep" and token[1] == ')':
+ self.signature = []
+ token = self.token()
+ return token
+ while token != None:
+ token = self.parseType(token)
+ if token != None and token[0] == "name":
+ signature.append((self.type, token[1], None))
+ token = self.token()
+ elif token != None and token[0] == "sep" and token[1] == ',':
+ token = self.token()
+ continue
+ elif token != None and token[0] == "sep" and token[1] == ')':
+ # only the type was provided
+ if self.type == "...":
+ signature.append((self.type, "...", None))
+ else:
+ signature.append((self.type, None, None))
+ if token != None and token[0] == "sep":
+ if token[1] == ',':
+ token = self.token()
+ continue
+ elif token[1] == ')':
+ token = self.token()
+ break
+ self.signature = signature
+ return token
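parseSignature builds the argument list as (type, name, description) triples, with None standing in for a still-missing name or comment; mergeFunctionComment pairs that with a (type, description) tuple for the return value. A small sketch of consuming that shape (the helper and the placeholder function name `f` are hypothetical):

```python
def format_prototype(ret, args):
    """Sketch: render the (return, signature) data produced by
    parseSignature/mergeFunctionComment as a C-like prototype."""
    parts = []
    for argtype, argname, _desc in args:
        parts.append("%s %s" % (argtype, argname) if argname else argtype)
    return "%s f(%s)" % (ret[0], ", ".join(parts))
```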
#
# Parse a global definition, be it a type, variable or function
@@ -1481,134 +1481,134 @@ class CParser:
def parseGlobal(self, token):
static = 0
if token[1] == 'extern':
- token = self.token()
- if token == None:
- return token
- if token[0] == 'string':
- if token[1] == 'C':
- token = self.token()
- if token == None:
- return token
- if token[0] == 'sep' and token[1] == "{":
- token = self.token()
-# print 'Entering extern "C line ', self.lineno()
- while token != None and (token[0] != 'sep' or
- token[1] != "}"):
- if token[0] == 'name':
- token = self.parseGlobal(token)
- else:
- self.error(
- "token %s %s unexpected at the top level" % (
- token[0], token[1]))
- token = self.parseGlobal(token)
-# print 'Exiting extern "C" line', self.lineno()
- token = self.token()
- return token
- else:
- return token
- elif token[1] == 'static':
- static = 1
- token = self.token()
- if token == None or token[0] != 'name':
- return token
-
- if token[1] == 'typedef':
- token = self.token()
- return self.parseTypedef(token)
- else:
- token = self.parseType(token)
- type_orig = self.type
- if token == None or token[0] != "name":
- return token
- type = type_orig
- self.name = token[1]
- token = self.token()
- while token != None and (token[0] == "sep" or token[0] == "op"):
- if token[0] == "sep":
- if token[1] == "[":
- type = type + token[1]
- token = self.token()
- while token != None and (token[0] != "sep" or \
- token[1] != ";"):
- type = type + token[1]
- token = self.token()
-
- if token != None and token[0] == "op" and token[1] == "=":
- #
- # Skip the initialization of the variable
- #
- token = self.token()
- if token[0] == 'sep' and token[1] == '{':
- token = self.token()
- token = self.parseBlock(token)
- else:
- self.comment = None
- while token != None and (token[0] != "sep" or \
- (token[1] != ';' and token[1] != ',')):
- token = self.token()
- self.comment = None
- if token == None or token[0] != "sep" or (token[1] != ';' and
- token[1] != ','):
- self.error("missing ';' or ',' after value")
-
- if token != None and token[0] == "sep":
- if token[1] == ";":
- self.comment = None
- token = self.token()
- if type == "struct":
- self.index_add(self.name, self.filename,
- not self.is_header, "struct", self.struct_fields)
- else:
- self.index_add(self.name, self.filename,
- not self.is_header, "variable", type)
- break
- elif token[1] == "(":
- token = self.token()
- token = self.parseSignature(token)
- if token == None:
- return None
- if token[0] == "sep" and token[1] == ";":
- d = self.mergeFunctionComment(self.name,
- ((type, None), self.signature), 1)
- self.index_add(self.name, self.filename, static,
- "function", d)
- token = self.token()
- elif token[0] == "sep" and token[1] == "{":
- d = self.mergeFunctionComment(self.name,
- ((type, None), self.signature), static)
- self.index_add(self.name, self.filename, static,
- "function", d)
- token = self.token()
- token = self.parseBlock(token);
- elif token[1] == ',':
- self.comment = None
- self.index_add(self.name, self.filename, static,
- "variable", type)
- type = type_orig
- token = self.token()
- while token != None and token[0] == "sep":
- type = type + token[1]
- token = self.token()
- if token != None and token[0] == "name":
- self.name = token[1]
- token = self.token()
- else:
- break
-
- return token
+ token = self.token()
+ if token == None:
+ return token
+ if token[0] == 'string':
+ if token[1] == 'C':
+ token = self.token()
+ if token == None:
+ return token
+ if token[0] == 'sep' and token[1] == "{":
+ token = self.token()
+# print 'Entering extern "C line ', self.lineno()
+ while token != None and (token[0] != 'sep' or
+ token[1] != "}"):
+ if token[0] == 'name':
+ token = self.parseGlobal(token)
+ else:
+ self.error(
+ "token %s %s unexpected at the top level" % (
+ token[0], token[1]))
+ token = self.parseGlobal(token)
+# print 'Exiting extern "C" line', self.lineno()
+ token = self.token()
+ return token
+ else:
+ return token
+ elif token[1] == 'static':
+ static = 1
+ token = self.token()
+ if token == None or token[0] != 'name':
+ return token
+
+ if token[1] == 'typedef':
+ token = self.token()
+ return self.parseTypedef(token)
+ else:
+ token = self.parseType(token)
+ type_orig = self.type
+ if token == None or token[0] != "name":
+ return token
+ type = type_orig
+ self.name = token[1]
+ token = self.token()
+ while token != None and (token[0] == "sep" or token[0] == "op"):
+ if token[0] == "sep":
+ if token[1] == "[":
+ type = type + token[1]
+ token = self.token()
+ while token != None and (token[0] != "sep" or \
+ token[1] != ";"):
+ type = type + token[1]
+ token = self.token()
+
+ if token != None and token[0] == "op" and token[1] == "=":
+ #
+ # Skip the initialization of the variable
+ #
+ token = self.token()
+ if token[0] == 'sep' and token[1] == '{':
+ token = self.token()
+ token = self.parseBlock(token)
+ else:
+ self.comment = None
+ while token != None and (token[0] != "sep" or \
+ (token[1] != ';' and token[1] != ',')):
+ token = self.token()
+ self.comment = None
+ if token == None or token[0] != "sep" or (token[1] != ';' and
+ token[1] != ','):
+ self.error("missing ';' or ',' after value")
+
+ if token != None and token[0] == "sep":
+ if token[1] == ";":
+ self.comment = None
+ token = self.token()
+ if type == "struct":
+ self.index_add(self.name, self.filename,
+ not self.is_header, "struct", self.struct_fields)
+ else:
+ self.index_add(self.name, self.filename,
+ not self.is_header, "variable", type)
+ break
+ elif token[1] == "(":
+ token = self.token()
+ token = self.parseSignature(token)
+ if token == None:
+ return None
+ if token[0] == "sep" and token[1] == ";":
+ d = self.mergeFunctionComment(self.name,
+ ((type, None), self.signature), 1)
+ self.index_add(self.name, self.filename, static,
+ "function", d)
+ token = self.token()
+ elif token[0] == "sep" and token[1] == "{":
+ d = self.mergeFunctionComment(self.name,
+ ((type, None), self.signature), static)
+ self.index_add(self.name, self.filename, static,
+ "function", d)
+ token = self.token()
+ token = self.parseBlock(token);
+ elif token[1] == ',':
+ self.comment = None
+ self.index_add(self.name, self.filename, static,
+ "variable", type)
+ type = type_orig
+ token = self.token()
+ while token != None and token[0] == "sep":
+ type = type + token[1]
+ token = self.token()
+ if token != None and token[0] == "name":
+ self.name = token[1]
+ token = self.token()
+ else:
+ break
+
+ return token
def parse(self):
self.warning("Parsing %s" % (self.filename))
token = self.token()
- while token != None:
+ while token != None:
if token[0] == 'name':
- token = self.parseGlobal(token)
+ token = self.parseGlobal(token)
else:
- self.error("token %s %s unexpected at the top level" % (
- token[0], token[1]))
- token = self.parseGlobal(token)
- return
- self.parseTopComment(self.top_comment)
+ self.error("token %s %s unexpected at the top level" % (
+ token[0], token[1]))
+ token = self.parseGlobal(token)
+ return
+ self.parseTopComment(self.top_comment)
return self.index
@@ -1618,449 +1618,449 @@ class docBuilder:
self.name = name
self.path = path
self.directories = directories
- self.includes = includes + included_files.keys()
- self.modules = {}
- self.headers = {}
- self.idx = index()
+ self.includes = includes + included_files.keys()
+ self.modules = {}
+ self.headers = {}
+ self.idx = index()
self.xref = {}
- self.index = {}
- self.basename = name
+ self.index = {}
+ self.basename = name
def indexString(self, id, str):
- if str == None:
- return
- str = string.replace(str, "'", ' ')
- str = string.replace(str, '"', ' ')
- str = string.replace(str, "/", ' ')
- str = string.replace(str, '*', ' ')
- str = string.replace(str, "[", ' ')
- str = string.replace(str, "]", ' ')
- str = string.replace(str, "(", ' ')
- str = string.replace(str, ")", ' ')
- str = string.replace(str, "<", ' ')
- str = string.replace(str, '>', ' ')
- str = string.replace(str, "&", ' ')
- str = string.replace(str, '#', ' ')
- str = string.replace(str, ",", ' ')
- str = string.replace(str, '.', ' ')
- str = string.replace(str, ';', ' ')
- tokens = string.split(str)
- for token in tokens:
- try:
- c = token[0]
- if string.find(string.letters, c) < 0:
- pass
- elif len(token) < 3:
- pass
- else:
- lower = string.lower(token)
- # TODO: generalize this a bit
- if lower == 'and' or lower == 'the':
- pass
- elif self.xref.has_key(token):
- self.xref[token].append(id)
- else:
- self.xref[token] = [id]
- except:
- pass
+ if str == None:
+ return
+ str = string.replace(str, "'", ' ')
+ str = string.replace(str, '"', ' ')
+ str = string.replace(str, "/", ' ')
+ str = string.replace(str, '*', ' ')
+ str = string.replace(str, "[", ' ')
+ str = string.replace(str, "]", ' ')
+ str = string.replace(str, "(", ' ')
+ str = string.replace(str, ")", ' ')
+ str = string.replace(str, "<", ' ')
+ str = string.replace(str, '>', ' ')
+ str = string.replace(str, "&", ' ')
+ str = string.replace(str, '#', ' ')
+ str = string.replace(str, ",", ' ')
+ str = string.replace(str, '.', ' ')
+ str = string.replace(str, ';', ' ')
+ tokens = string.split(str)
+ for token in tokens:
+ try:
+ c = token[0]
+ if string.find(string.letters, c) < 0:
+ pass
+ elif len(token) < 3:
+ pass
+ else:
+ lower = string.lower(token)
+ # TODO: generalize this a bit
+ if lower == 'and' or lower == 'the':
+ pass
+ elif self.xref.has_key(token):
+ self.xref[token].append(id)
+ else:
+ self.xref[token] = [id]
+ except:
+ pass
def analyze(self):
print "Project %s : %d headers, %d modules" % (self.name, len(self.headers.keys()), len(self.modules.keys()))
- self.idx.analyze()
+ self.idx.analyze()
def scanHeaders(self):
- for header in self.headers.keys():
- parser = CParser(header)
- idx = parser.parse()
- self.headers[header] = idx;
- self.idx.merge(idx)
+ for header in self.headers.keys():
+ parser = CParser(header)
+ idx = parser.parse()
+ self.headers[header] = idx;
+ self.idx.merge(idx)
def scanModules(self):
- for module in self.modules.keys():
- parser = CParser(module)
- idx = parser.parse()
- # idx.analyze()
- self.modules[module] = idx
- self.idx.merge_public(idx)
+ for module in self.modules.keys():
+ parser = CParser(module)
+ idx = parser.parse()
+ # idx.analyze()
+ self.modules[module] = idx
+ self.idx.merge_public(idx)
def scan(self):
for directory in self.directories:
- files = glob.glob(directory + "/*.c")
- for file in files:
- skip = 1
- for incl in self.includes:
- if string.find(file, incl) != -1:
- skip = 0;
- break
- if skip == 0:
- self.modules[file] = None;
- files = glob.glob(directory + "/*.h")
- for file in files:
- skip = 1
- for incl in self.includes:
- if string.find(file, incl) != -1:
- skip = 0;
- break
- if skip == 0:
- self.headers[file] = None;
- self.scanHeaders()
- self.scanModules()
+ files = glob.glob(directory + "/*.c")
+ for file in files:
+ skip = 1
+ for incl in self.includes:
+ if string.find(file, incl) != -1:
+ skip = 0;
+ break
+ if skip == 0:
+ self.modules[file] = None;
+ files = glob.glob(directory + "/*.h")
+ for file in files:
+ skip = 1
+ for incl in self.includes:
+ if string.find(file, incl) != -1:
+ skip = 0;
+ break
+ if skip == 0:
+ self.headers[file] = None;
+ self.scanHeaders()
+ self.scanModules()
def modulename_file(self, file):
module = os.path.basename(file)
- if module[-2:] == '.h':
- module = module[:-2]
- elif module[-2:] == '.c':
- module = module[:-2]
- return module
+ if module[-2:] == '.h':
+ module = module[:-2]
+ elif module[-2:] == '.c':
+ module = module[:-2]
+ return module
def serialize_enum(self, output, name):
id = self.idx.enums[name]
output.write(" <enum name='%s' file='%s'" % (name,
- self.modulename_file(id.header)))
- if id.info != None:
- info = id.info
- if info[0] != None and info[0] != '':
- try:
- val = eval(info[0])
- except:
- val = info[0]
- output.write(" value='%s'" % (val));
- if info[2] != None and info[2] != '':
- output.write(" type='%s'" % info[2]);
- if info[1] != None and info[1] != '':
- output.write(" info='%s'" % escape(info[1]));
+ self.modulename_file(id.header)))
+ if id.info != None:
+ info = id.info
+ if info[0] != None and info[0] != '':
+ try:
+ val = eval(info[0])
+ except:
+ val = info[0]
+ output.write(" value='%s'" % (val));
+ if info[2] != None and info[2] != '':
+ output.write(" type='%s'" % info[2]);
+ if info[1] != None and info[1] != '':
+ output.write(" info='%s'" % escape(info[1]));
output.write("/>\n")
def serialize_macro(self, output, name):
id = self.idx.macros[name]
output.write(" <macro name='%s' file='%s'>\n" % (name,
- self.modulename_file(id.header)))
- if id.info != None:
+ self.modulename_file(id.header)))
+ if id.info != None:
try:
- (args, desc) = id.info
- if desc != None and desc != "":
- output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
- self.indexString(name, desc)
- for arg in args:
- (name, desc) = arg
- if desc != None and desc != "":
- output.write(" <arg name='%s' info='%s'/>\n" % (
- name, escape(desc)))
- self.indexString(name, desc)
- else:
- output.write(" <arg name='%s'/>\n" % (name))
+ (args, desc) = id.info
+ if desc != None and desc != "":
+ output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
+ self.indexString(name, desc)
+ for arg in args:
+ (name, desc) = arg
+ if desc != None and desc != "":
+ output.write(" <arg name='%s' info='%s'/>\n" % (
+ name, escape(desc)))
+ self.indexString(name, desc)
+ else:
+ output.write(" <arg name='%s'/>\n" % (name))
except:
pass
output.write(" </macro>\n")
def serialize_typedef(self, output, name):
id = self.idx.typedefs[name]
- if id.info[0:7] == 'struct ':
- output.write(" <struct name='%s' file='%s' type='%s'" % (
- name, self.modulename_file(id.header), id.info))
- name = id.info[7:]
- if self.idx.structs.has_key(name) and ( \
- type(self.idx.structs[name].info) == type(()) or
- type(self.idx.structs[name].info) == type([])):
- output.write(">\n");
- try:
- for field in self.idx.structs[name].info:
- desc = field[2]
- self.indexString(name, desc)
- if desc == None:
- desc = ''
- else:
- desc = escape(desc)
- output.write(" <field name='%s' type='%s' info='%s'/>\n" % (field[1] , field[0], desc))
- except:
- print "Failed to serialize struct %s" % (name)
- output.write(" </struct>\n")
- else:
- output.write("/>\n");
- else :
- output.write(" <typedef name='%s' file='%s' type='%s'" % (
- name, self.modulename_file(id.header), id.info))
+ if id.info[0:7] == 'struct ':
+ output.write(" <struct name='%s' file='%s' type='%s'" % (
+ name, self.modulename_file(id.header), id.info))
+ name = id.info[7:]
+ if self.idx.structs.has_key(name) and ( \
+ type(self.idx.structs[name].info) == type(()) or
+ type(self.idx.structs[name].info) == type([])):
+ output.write(">\n");
+ try:
+ for field in self.idx.structs[name].info:
+ desc = field[2]
+ self.indexString(name, desc)
+ if desc == None:
+ desc = ''
+ else:
+ desc = escape(desc)
+ output.write(" <field name='%s' type='%s' info='%s'/>\n" % (field[1] , field[0], desc))
+ except:
+ print "Failed to serialize struct %s" % (name)
+ output.write(" </struct>\n")
+ else:
+ output.write("/>\n");
+ else :
+ output.write(" <typedef name='%s' file='%s' type='%s'" % (
+ name, self.modulename_file(id.header), id.info))
try:
- desc = id.extra
- if desc != None and desc != "":
- output.write(">\n <info><![CDATA[%s]]></info>\n" % (desc))
- output.write(" </typedef>\n")
- else:
- output.write("/>\n")
- except:
- output.write("/>\n")
+ desc = id.extra
+ if desc != None and desc != "":
+ output.write(">\n <info><![CDATA[%s]]></info>\n" % (desc))
+ output.write(" </typedef>\n")
+ else:
+ output.write("/>\n")
+ except:
+ output.write("/>\n")
def serialize_variable(self, output, name):
id = self.idx.variables[name]
- if id.info != None:
- output.write(" <variable name='%s' file='%s' type='%s'/>\n" % (
- name, self.modulename_file(id.header), id.info))
- else:
- output.write(" <variable name='%s' file='%s'/>\n" % (
- name, self.modulename_file(id.header)))
+ if id.info != None:
+ output.write(" <variable name='%s' file='%s' type='%s'/>\n" % (
+ name, self.modulename_file(id.header), id.info))
+ else:
+ output.write(" <variable name='%s' file='%s'/>\n" % (
+ name, self.modulename_file(id.header)))
def serialize_function(self, output, name):
id = self.idx.functions[name]
- if name == debugsym:
- print "=>", id
+ if name == debugsym:
+ print "=>", id
output.write(" <%s name='%s' file='%s' module='%s'>\n" % (id.type,
- name, self.modulename_file(id.header),
- self.modulename_file(id.module)))
- #
- # Processing of conditionals modified by Bill 1/1/05
- #
- if id.conditionals != None:
- apstr = ""
- for cond in id.conditionals:
- if apstr != "":
- apstr = apstr + " && "
- apstr = apstr + cond
- output.write(" <cond>%s</cond>\n"% (apstr));
- try:
- (ret, params, desc) = id.info
- output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
- self.indexString(name, desc)
- if ret[0] != None:
- if ret[0] == "void":
- output.write(" <return type='void'/>\n")
- else:
- output.write(" <return type='%s' info='%s'/>\n" % (
- ret[0], escape(ret[1])))
- self.indexString(name, ret[1])
- for param in params:
- if param[0] == 'void':
- continue
- if param[2] == None:
- output.write(" <arg name='%s' type='%s' info=''/>\n" % (param[1], param[0]))
- else:
- output.write(" <arg name='%s' type='%s' info='%s'/>\n" % (param[1], param[0], escape(param[2])))
- self.indexString(name, param[2])
- except:
- print "Failed to save function %s info: " % name, `id.info`
+ name, self.modulename_file(id.header),
+ self.modulename_file(id.module)))
+ #
+ # Processing of conditionals modified by Bill 1/1/05
+ #
+ if id.conditionals != None:
+ apstr = ""
+ for cond in id.conditionals:
+ if apstr != "":
+ apstr = apstr + " && "
+ apstr = apstr + cond
+ output.write(" <cond>%s</cond>\n"% (apstr));
+ try:
+ (ret, params, desc) = id.info
+ output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
+ self.indexString(name, desc)
+ if ret[0] != None:
+ if ret[0] == "void":
+ output.write(" <return type='void'/>\n")
+ else:
+ output.write(" <return type='%s' info='%s'/>\n" % (
+ ret[0], escape(ret[1])))
+ self.indexString(name, ret[1])
+ for param in params:
+ if param[0] == 'void':
+ continue
+ if param[2] == None:
+ output.write(" <arg name='%s' type='%s' info=''/>\n" % (param[1], param[0]))
+ else:
+ output.write(" <arg name='%s' type='%s' info='%s'/>\n" % (param[1], param[0], escape(param[2])))
+ self.indexString(name, param[2])
+ except:
+ print "Failed to save function %s info: " % name, `id.info`
output.write(" </%s>\n" % (id.type))
def serialize_exports(self, output, file):
module = self.modulename_file(file)
- output.write(" <file name='%s'>\n" % (module))
- dict = self.headers[file]
- if dict.info != None:
- for data in ('Summary', 'Description', 'Author'):
- try:
- output.write(" <%s>%s</%s>\n" % (
- string.lower(data),
- escape(dict.info[data]),
- string.lower(data)))
- except:
- print "Header %s lacks a %s description" % (module, data)
- if dict.info.has_key('Description'):
- desc = dict.info['Description']
- if string.find(desc, "DEPRECATED") != -1:
- output.write(" <deprecated/>\n")
+ output.write(" <file name='%s'>\n" % (module))
+ dict = self.headers[file]
+ if dict.info != None:
+ for data in ('Summary', 'Description', 'Author'):
+ try:
+ output.write(" <%s>%s</%s>\n" % (
+ string.lower(data),
+ escape(dict.info[data]),
+ string.lower(data)))
+ except:
+ print "Header %s lacks a %s description" % (module, data)
+ if dict.info.has_key('Description'):
+ desc = dict.info['Description']
+ if string.find(desc, "DEPRECATED") != -1:
+ output.write(" <deprecated/>\n")
ids = dict.macros.keys()
- ids.sort()
- for id in uniq(ids):
- # Macros are sometime used to masquerade other types.
- if dict.functions.has_key(id):
- continue
- if dict.variables.has_key(id):
- continue
- if dict.typedefs.has_key(id):
- continue
- if dict.structs.has_key(id):
- continue
- if dict.enums.has_key(id):
- continue
- output.write(" <exports symbol='%s' type='macro'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ # Macros are sometime used to masquerade other types.
+ if dict.functions.has_key(id):
+ continue
+ if dict.variables.has_key(id):
+ continue
+ if dict.typedefs.has_key(id):
+ continue
+ if dict.structs.has_key(id):
+ continue
+ if dict.enums.has_key(id):
+ continue
+ output.write(" <exports symbol='%s' type='macro'/>\n" % (id))
ids = dict.enums.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='enum'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='enum'/>\n" % (id))
ids = dict.typedefs.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='typedef'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='typedef'/>\n" % (id))
ids = dict.structs.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='struct'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='struct'/>\n" % (id))
ids = dict.variables.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='variable'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='variable'/>\n" % (id))
ids = dict.functions.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='function'/>\n" % (id))
- output.write(" </file>\n")
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='function'/>\n" % (id))
+ output.write(" </file>\n")
def serialize_xrefs_files(self, output):
headers = self.headers.keys()
headers.sort()
for file in headers:
- module = self.modulename_file(file)
- output.write(" <file name='%s'>\n" % (module))
- dict = self.headers[file]
- ids = uniq(dict.functions.keys() + dict.variables.keys() + \
- dict.macros.keys() + dict.typedefs.keys() + \
- dict.structs.keys() + dict.enums.keys())
- ids.sort()
- for id in ids:
- output.write(" <ref name='%s'/>\n" % (id))
- output.write(" </file>\n")
+ module = self.modulename_file(file)
+ output.write(" <file name='%s'>\n" % (module))
+ dict = self.headers[file]
+ ids = uniq(dict.functions.keys() + dict.variables.keys() + \
+ dict.macros.keys() + dict.typedefs.keys() + \
+ dict.structs.keys() + dict.enums.keys())
+ ids.sort()
+ for id in ids:
+ output.write(" <ref name='%s'/>\n" % (id))
+ output.write(" </file>\n")
pass
def serialize_xrefs_functions(self, output):
funcs = {}
- for name in self.idx.functions.keys():
- id = self.idx.functions[name]
- try:
- (ret, params, desc) = id.info
- for param in params:
- if param[0] == 'void':
- continue
- if funcs.has_key(param[0]):
- funcs[param[0]].append(name)
- else:
- funcs[param[0]] = [name]
- except:
- pass
- typ = funcs.keys()
- typ.sort()
- for type in typ:
- if type == '' or type == 'void' or type == "int" or \
- type == "char *" or type == "const char *" :
- continue
- output.write(" <type name='%s'>\n" % (type))
- ids = funcs[type]
- ids.sort()
- pid = '' # not sure why we have dups, but get rid of them!
- for id in ids:
- if id != pid:
- output.write(" <ref name='%s'/>\n" % (id))
- pid = id
- output.write(" </type>\n")
+ for name in self.idx.functions.keys():
+ id = self.idx.functions[name]
+ try:
+ (ret, params, desc) = id.info
+ for param in params:
+ if param[0] == 'void':
+ continue
+ if funcs.has_key(param[0]):
+ funcs[param[0]].append(name)
+ else:
+ funcs[param[0]] = [name]
+ except:
+ pass
+ typ = funcs.keys()
+ typ.sort()
+ for type in typ:
+ if type == '' or type == 'void' or type == "int" or \
+ type == "char *" or type == "const char *" :
+ continue
+ output.write(" <type name='%s'>\n" % (type))
+ ids = funcs[type]
+ ids.sort()
+ pid = '' # not sure why we have dups, but get rid of them!
+ for id in ids:
+ if id != pid:
+ output.write(" <ref name='%s'/>\n" % (id))
+ pid = id
+ output.write(" </type>\n")
def serialize_xrefs_constructors(self, output):
funcs = {}
- for name in self.idx.functions.keys():
- id = self.idx.functions[name]
- try:
- (ret, params, desc) = id.info
- if ret[0] == "void":
- continue
- if funcs.has_key(ret[0]):
- funcs[ret[0]].append(name)
- else:
- funcs[ret[0]] = [name]
- except:
- pass
- typ = funcs.keys()
- typ.sort()
- for type in typ:
- if type == '' or type == 'void' or type == "int" or \
- type == "char *" or type == "const char *" :
- continue
- output.write(" <type name='%s'>\n" % (type))
- ids = funcs[type]
- ids.sort()
- for id in ids:
- output.write(" <ref name='%s'/>\n" % (id))
- output.write(" </type>\n")
+ for name in self.idx.functions.keys():
+ id = self.idx.functions[name]
+ try:
+ (ret, params, desc) = id.info
+ if ret[0] == "void":
+ continue
+ if funcs.has_key(ret[0]):
+ funcs[ret[0]].append(name)
+ else:
+ funcs[ret[0]] = [name]
+ except:
+ pass
+ typ = funcs.keys()
+ typ.sort()
+ for type in typ:
+ if type == '' or type == 'void' or type == "int" or \
+ type == "char *" or type == "const char *" :
+ continue
+ output.write(" <type name='%s'>\n" % (type))
+ ids = funcs[type]
+ ids.sort()
+ for id in ids:
+ output.write(" <ref name='%s'/>\n" % (id))
+ output.write(" </type>\n")
def serialize_xrefs_alpha(self, output):
- letter = None
- ids = self.idx.identifiers.keys()
- ids.sort()
- for id in ids:
- if id[0] != letter:
- if letter != None:
- output.write(" </letter>\n")
- letter = id[0]
- output.write(" <letter name='%s'>\n" % (letter))
- output.write(" <ref name='%s'/>\n" % (id))
- if letter != None:
- output.write(" </letter>\n")
+ letter = None
+ ids = self.idx.identifiers.keys()
+ ids.sort()
+ for id in ids:
+ if id[0] != letter:
+ if letter != None:
+ output.write(" </letter>\n")
+ letter = id[0]
+ output.write(" <letter name='%s'>\n" % (letter))
+ output.write(" <ref name='%s'/>\n" % (id))
+ if letter != None:
+ output.write(" </letter>\n")
def serialize_xrefs_references(self, output):
typ = self.idx.identifiers.keys()
- typ.sort()
- for id in typ:
- idf = self.idx.identifiers[id]
- module = idf.header
- output.write(" <reference name='%s' href='%s'/>\n" % (id,
- 'html/' + self.basename + '-' +
- self.modulename_file(module) + '.html#' +
- id))
+ typ.sort()
+ for id in typ:
+ idf = self.idx.identifiers[id]
+ module = idf.header
+ output.write(" <reference name='%s' href='%s'/>\n" % (id,
+ 'html/' + self.basename + '-' +
+ self.modulename_file(module) + '.html#' +
+ id))
def serialize_xrefs_index(self, output):
index = self.xref
- typ = index.keys()
- typ.sort()
- letter = None
- count = 0
- chunk = 0
- chunks = []
- for id in typ:
- if len(index[id]) > 30:
- continue
- if id[0] != letter:
- if letter == None or count > 200:
- if letter != None:
- output.write(" </letter>\n")
- output.write(" </chunk>\n")
- count = 0
- chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
- output.write(" <chunk name='chunk%s'>\n" % (chunk))
- first_letter = id[0]
- chunk = chunk + 1
- elif letter != None:
- output.write(" </letter>\n")
- letter = id[0]
- output.write(" <letter name='%s'>\n" % (letter))
- output.write(" <word name='%s'>\n" % (id))
- tokens = index[id];
- tokens.sort()
- tok = None
- for token in tokens:
- if tok == token:
- continue
- tok = token
- output.write(" <ref name='%s'/>\n" % (token))
- count = count + 1
- output.write(" </word>\n")
- if letter != None:
- output.write(" </letter>\n")
- output.write(" </chunk>\n")
- if count != 0:
- chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
- output.write(" <chunks>\n")
- for ch in chunks:
- output.write(" <chunk name='%s' start='%s' end='%s'/>\n" % (
- ch[0], ch[1], ch[2]))
- output.write(" </chunks>\n")
+ typ = index.keys()
+ typ.sort()
+ letter = None
+ count = 0
+ chunk = 0
+ chunks = []
+ for id in typ:
+ if len(index[id]) > 30:
+ continue
+ if id[0] != letter:
+ if letter == None or count > 200:
+ if letter != None:
+ output.write(" </letter>\n")
+ output.write(" </chunk>\n")
+ count = 0
+ chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
+ output.write(" <chunk name='chunk%s'>\n" % (chunk))
+ first_letter = id[0]
+ chunk = chunk + 1
+ elif letter != None:
+ output.write(" </letter>\n")
+ letter = id[0]
+ output.write(" <letter name='%s'>\n" % (letter))
+ output.write(" <word name='%s'>\n" % (id))
+ tokens = index[id];
+ tokens.sort()
+ tok = None
+ for token in tokens:
+ if tok == token:
+ continue
+ tok = token
+ output.write(" <ref name='%s'/>\n" % (token))
+ count = count + 1
+ output.write(" </word>\n")
+ if letter != None:
+ output.write(" </letter>\n")
+ output.write(" </chunk>\n")
+ if count != 0:
+ chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
+ output.write(" <chunks>\n")
+ for ch in chunks:
+ output.write(" <chunk name='%s' start='%s' end='%s'/>\n" % (
+ ch[0], ch[1], ch[2]))
+ output.write(" </chunks>\n")
def serialize_xrefs(self, output):
- output.write(" <references>\n")
- self.serialize_xrefs_references(output)
- output.write(" </references>\n")
- output.write(" <alpha>\n")
- self.serialize_xrefs_alpha(output)
- output.write(" </alpha>\n")
- output.write(" <constructors>\n")
- self.serialize_xrefs_constructors(output)
- output.write(" </constructors>\n")
- output.write(" <functions>\n")
- self.serialize_xrefs_functions(output)
- output.write(" </functions>\n")
- output.write(" <files>\n")
- self.serialize_xrefs_files(output)
- output.write(" </files>\n")
- output.write(" <index>\n")
- self.serialize_xrefs_index(output)
- output.write(" </index>\n")
+ output.write(" <references>\n")
+ self.serialize_xrefs_references(output)
+ output.write(" </references>\n")
+ output.write(" <alpha>\n")
+ self.serialize_xrefs_alpha(output)
+ output.write(" </alpha>\n")
+ output.write(" <constructors>\n")
+ self.serialize_xrefs_constructors(output)
+ output.write(" </constructors>\n")
+ output.write(" <functions>\n")
+ self.serialize_xrefs_functions(output)
+ output.write(" </functions>\n")
+ output.write(" <files>\n")
+ self.serialize_xrefs_files(output)
+ output.write(" </files>\n")
+ output.write(" <index>\n")
+ self.serialize_xrefs_index(output)
+ output.write(" </index>\n")
def serialize(self):
filename = "%s/%s-api.xml" % (self.path, self.name)
@@ -2124,10 +2124,10 @@ def rebuild():
print "Rebuilding API description for libvirt"
builder = docBuilder("libvirt", srcdir,
["src", "src/util", "include/libvirt"],
- [])
+ [])
else:
print "rebuild() failed, unable to guess the module"
- return None
+ return None
builder.scan()
builder.analyze()
builder.serialize()
@@ -2146,4 +2146,4 @@ if __name__ == "__main__":
debug = 1
parse(sys.argv[1])
else:
- rebuild()
+ rebuild()
--
1.7.4.1
A new release of the Perl bindings, Sys::Virt, is now available from:
http://search.cpan.org/~danberr/Sys-Virt-0.2.6/
Direct download link:
http://search.cpan.org/CPAN/authors/id/D/DA/DANBERR/Sys-Virt-0.2.6.tar.gz
This supports all the APIs up to and including libvirt 0.8.7, with the
exception of the stream APIs.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
[libvirt] [BUG] storage_backend_fs: failing virStorageBackendProbeTarget() disables whole pool
by Philipp Hahn 17 Feb '11
Hello,
I have a problem with the following code fragment, which refreshes a directory
based storage pool:
storage_backend_fs.c#virStorageBackendFileSystemRefresh(...)
...
while ((ent = readdir(dir)) != NULL) {
...
if ((ret = virStorageBackendProbeTarget(...) < 0) {
if (ret == -1)
goto cleanup;
...
}
...
}
closedir(dir);
...
return 0;
cleanup:
...
return -1;
}
This disables the whole pool if it contains a Qcow2 volume whose backing
file is (currently) unavailable (because, for example, the central NFS
store is unreachable or the permissions don't allow reading). This is
very annoying, since it's hard to find the Qcow2 volume that breaks the
pool. I use the following command to find broken volumes:
find "$dir" -maxdepth 1 \( -type f -o -type l \) \( -exec kvm-img info {}
\; -o -print \) 2>/dev/null
Even worse, you can't delete the broken volume or re-create the missing one
with virsh alone.
To reproduce:
dir=$(mktemp -d)
virsh pool-create-as tmp dir '' '' '' '' "$dir"
virsh vol-create-as --format qcow2 tmp back 1G
virsh vol-create-as --format qcow2 --backing-vol-format qcow2 --backing-vol
back tmp cow 1G
virsh vol-delete --pool tmp back
virsh pool-refresh tmp
After the last step, the pool will be gone (because it was not persistent). As
long as the now-broken image stays in the directory, you will not be able to
re-create or re-start the pool.
The easiest 'fix' would be to ignore all errors regarding the detection of the
backing file's format. This would at least allow users to still create
new volumes and list existing volumes.
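Such a tolerant refresh loop (the real fix would live in the C code of
virStorageBackendFileSystemRefresh(); this is only a language-neutral
sketch in Python, with made-up volume records) could look like:

```python
def probe(vol):
    # Stand-in for virStorageBackendProbeTarget(): fail only for a
    # volume whose backing file is missing, as in the Qcow2 scenario.
    if vol["backing"] and not vol["backing_exists"]:
        raise IOError("backing file unavailable: " + vol["backing"])
    return vol["name"]

def refresh_pool(volumes):
    # Keep the pool usable: collect broken volumes instead of aborting
    # the whole refresh on the first probe failure.
    usable, broken = [], []
    for vol in volumes:
        try:
            usable.append(probe(vol))
        except IOError:
            broken.append(vol["name"])
    return usable, broken

pool = [
    {"name": "plain.img", "backing": None, "backing_exists": True},
    {"name": "cow.qcow2", "backing": "back.qcow2", "backing_exists": False},
]
print(refresh_pool(pool))  # (['plain.img'], ['cow.qcow2'])
```

With this shape the broken Qcow2 volume is reported but no longer takes
the whole pool down with it.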
I appreciate any comments.
BYtE
Philipp
--
Philipp Hahn Open Source Software Engineer hahn(a)univention.de
Univention GmbH Linux for Your Business fon: +49 421 22 232- 0
Mary-Somerville-Str.1 28359 Bremen fax: +49 421 22 232-99
http://www.univention.de/
** Visit us at CeBIT in Hanover **
** At the Univention booth D36 in hall 2 **
** From 01 to 05 March 2011 **
If we don't check the return value of qemuMonitorDriveDel, virsh users will
get "Disk detached successfully" even when the call fails.
---
src/qemu/qemu_hotplug.c | 10 ++++++++--
src/qemu/qemu_monitor_json.c | 2 +-
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index fb9db5a..70e9d8e 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -1187,7 +1187,10 @@ int qemuDomainDetachPciDiskDevice(struct qemud_driver *driver,
}
/* disconnect guest from host device */
- qemuMonitorDriveDel(priv->mon, drivestr);
+ if (qemuMonitorDriveDel(priv->mon, drivestr) != 0) {
+ qemuDomainObjExitMonitor(vm);
+ goto cleanup;
+ }
qemuDomainObjExitMonitorWithDriver(driver, vm);
@@ -1269,7 +1272,10 @@ int qemuDomainDetachSCSIDiskDevice(struct qemud_driver *driver,
}
/* disconnect guest from host device */
- qemuMonitorDriveDel(priv->mon, drivestr);
+ if (qemuMonitorDriveDel(priv->mon, drivestr) != 0) {
+ qemuDomainObjExitMonitor(vm);
+ goto cleanup;
+ }
qemuDomainObjExitMonitorWithDriver(driver, vm);
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index d5e8d37..b088405 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2338,7 +2338,7 @@ int qemuMonitorJSONDriveDel(qemuMonitorPtr mon,
if (ret == 0) {
/* See if drive_del isn't supported */
if (qemuMonitorJSONHasError(reply, "CommandNotFound")) {
- VIR_ERROR0(_("deleting disk is not supported. "
+ qemuReportError(VIR_ERR_NO_SUPPORT, _("deleting disk is not supported. "
"This may leak data if disk is reassigned"));
ret = 1;
goto cleanup;
--
1.7.3.1
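The behavioral change in the hunks above boils down to gating the success
report on the monitor call's return value; a minimal Python sketch (the
function names are hypothetical simplifications of the C code, not the
actual qemu driver API):

```python
def detach_disk(drive_del, check_return=True):
    # check_return=False mimics the old code path: the result of the
    # monitor call was ignored and success was always reported.
    rc = drive_del()
    if check_return and rc != 0:
        return "error: failed to detach disk"
    return "Disk detached successfully"

print(detach_disk(lambda: -1, check_return=False))  # old: bogus success
print(detach_disk(lambda: -1))                      # new: error reported
```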
Hi,
I'd like to let you know about the current status of this project. There is
a wiki page, http://wiki.libvirt.org/page/Libvirt-snmp, which currently
serves as the documentation.
I am currently waiting for an ACK for the patch I've sent (SNMP trap support) -
so if anyone's interested and has a lot of free time ... :)
I'd also like to ask for a hint. Those of you who have tried libvirt-snmp
might already have noticed: snmpwalk displays an ugly domain UUID:
http://wiki.libvirt.org/page/Libvirt-snmp#Examples_of_use
Does anybody know how to get rid of it?
17 Feb '11
The naming convention for device-mapper disks is different, and 'parted'
cannot be used to delete a device-mapper disk partition, e.g.
Name Path
-----------------------------------------
3600a0b80005ad1d7000093604cae912fp1 /dev/mapper/3600a0b80005ad1d7000093604cae912fp1
Error: Expecting a partition number.
This patch introduces 'dmsetup' to fix it.
Changes:
- New function "virIsDevMapperDevice" in "src/util/util.c"
- Remove "is_dm_device" from "src/storage/parthelper.c"; use
"virIsDevMapperDevice" instead.
- Require "device-mapper" for "with-storage-disk" in "libvirt.spec.in"
- Check for "dmsetup" in "configure.ac" for "with-storage-disk"
- Changes in "src/Makefile.am" to link against libdevmapper
- New entry for "virIsDevMapperDevice" in "src/libvirt_private.syms"
Changes from v1 to v3:
- s/virIsDeviceMapperDevice/virIsDevMapperDevice/g
- replace "virRun" with "virCommand"
- sort the list of util functions in "libvirt_private.syms"
- ATTRIBUTE_NONNULL(1) for virIsDevMapperDevice declaration.
e.g.
Name Path
-----------------------------------------
3600a0b80005ad1d7000093604cae912fp1 /dev/mapper/3600a0b80005ad1d7000093604cae912fp1
Vol /dev/mapper/3600a0b80005ad1d7000093604cae912fp1 deleted
Name Path
-----------------------------------------
---
configure.ac | 28 +++++++++++++++------
libvirt.spec.in | 1 +
src/Makefile.am | 8 +++---
src/libvirt_private.syms | 2 +-
src/storage/parthelper.c | 14 +---------
src/storage/storage_backend_disk.c | 48 +++++++++++++++++++++--------------
src/util/util.c | 15 +++++++++++
src/util/util.h | 2 +
8 files changed, 73 insertions(+), 45 deletions(-)
diff --git a/configure.ac b/configure.ac
index 3cd824a..44a3b19 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1725,12 +1725,19 @@ LIBPARTED_LIBS=
if test "$with_storage_disk" = "yes" ||
test "$with_storage_disk" = "check"; then
AC_PATH_PROG([PARTED], [parted], [], [$PATH:/sbin:/usr/sbin])
+ AC_PATH_PROG([DMSETUP], [dmsetup], [], [$PATH:/sbin:/usr/sbin])
if test -z "$PARTED" ; then
PARTED_FOUND=no
else
PARTED_FOUND=yes
fi
+ if test -z "$DMSETUP" ; then
+ DMSETUP_FOUND=no
+ else
+ DMSETUP_FOUND=yes
+ fi
+
if test "$PARTED_FOUND" = "yes" && test "x$PKG_CONFIG" != "x" ; then
PKG_CHECK_MODULES([LIBPARTED], [libparted >= $PARTED_REQUIRED], [],
[PARTED_FOUND=no])
@@ -1748,14 +1755,17 @@ if test "$with_storage_disk" = "yes" ||
CFLAGS="$save_CFLAGS"
fi
- if test "$PARTED_FOUND" = "no" ; then
- if test "$with_storage_disk" = "yes" ; then
- AC_MSG_ERROR([We need parted for disk storage driver])
- else
- with_storage_disk=no
- fi
- else
- with_storage_disk=yes
+ if test "$with_storage_disk" = "yes" &&
+ test "$PARTED_FOUND:$DMSETUP_FOUND" != "yes:yes"; then
+ AC_MSG_ERROR([Need both parted and dmsetup for disk storage driver])
+ fi
+
+ if test "$with_storage_disk" = "check"; then
+ if test "$PARTED_FOUND:$DMSETUP_FOUND" != "yes:yes"; then
+ with_storage_disk=no
+ else
+ with_storage_disk=yes
+ fi
fi
if test "$with_storage_disk" = "yes"; then
@@ -1763,6 +1773,8 @@ if test "$with_storage_disk" = "yes" ||
[whether Disk backend for storage driver is enabled])
AC_DEFINE_UNQUOTED([PARTED],["$PARTED"],
[Location or name of the parted program])
+ AC_DEFINE_UNQUOTED([DMSETUP],["$DMSETUP"],
+ [Location or name of the dmsetup program])
fi
fi
AM_CONDITIONAL([WITH_STORAGE_DISK], [test "$with_storage_disk" = "yes"])
diff --git a/libvirt.spec.in b/libvirt.spec.in
index 5021f45..8cb8fad 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -279,6 +279,7 @@ Requires: iscsi-initiator-utils
%if %{with_storage_disk}
# For disk driver
Requires: parted
+Requires: device-mapper
%endif
%if %{with_storage_mpath}
# For multipath support
diff --git a/src/Makefile.am b/src/Makefile.am
index 2f94efd..6e14043 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -437,9 +437,9 @@ libvirt_la_BUILT_LIBADD = libvirt_util.la
libvirt_util_la_SOURCES = \
$(UTIL_SOURCES)
libvirt_util_la_CFLAGS = $(CAPNG_CFLAGS) $(YAJL_CFLAGS) $(LIBNL_CFLAGS) \
- $(AM_CFLAGS) $(AUDIT_CFLAGS)
+ $(AM_CFLAGS) $(AUDIT_CFLAGS) $(DEVMAPPER_CFLAGS)
libvirt_util_la_LIBADD = $(CAPNG_LIBS) $(YAJL_LIBS) $(LIBNL_LIBS) \
- $(LIB_PTHREAD) $(AUDIT_LIBS)
+ $(LIB_PTHREAD) $(AUDIT_LIBS) $(DEVMAPPER_LIBS)
noinst_LTLIBRARIES += libvirt_conf.la
@@ -1154,7 +1154,6 @@ libvirt_parthelper_SOURCES = $(STORAGE_HELPER_DISK_SOURCES)
libvirt_parthelper_LDFLAGS = $(WARN_LDFLAGS) $(AM_LDFLAGS)
libvirt_parthelper_LDADD = \
$(LIBPARTED_LIBS) \
- $(DEVMAPPER_LIBS) \
libvirt_util.la \
../gnulib/lib/libgnu.la
@@ -1179,7 +1178,8 @@ libvirt_lxc_SOURCES = \
libvirt_lxc_LDFLAGS = $(WARN_CFLAGS) $(AM_LDFLAGS)
libvirt_lxc_LDADD = $(CAPNG_LIBS) $(YAJL_LIBS) \
$(LIBXML_LIBS) $(NUMACTL_LIBS) $(LIB_PTHREAD) \
- $(LIBNL_LIBS) $(AUDIT_LIBS) ../gnulib/lib/libgnu.la
+ $(LIBNL_LIBS) $(AUDIT_LIBS) $(DEVMAPPER_LIBS) \
+ ../gnulib/lib/libgnu.la
libvirt_lxc_CFLAGS = \
$(LIBPARTED_CFLAGS) \
$(NUMACTL_CFLAGS) \
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index b9e3efe..1ab3da3 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -884,6 +884,7 @@ virGetUserID;
virGetUserName;
virHexToBin;
virIndexToDiskName;
+virIsDevMapperDevice;
virKillProcess;
virMacAddrCompare;
virParseMacAddr;
@@ -910,7 +911,6 @@ virStrncpy;
virTimestamp;
virVasprintf;
-
# uuid.h
virGetHostUUID;
virSetHostUUIDStr;
diff --git a/src/storage/parthelper.c b/src/storage/parthelper.c
index 6ef413d..acc9171 100644
--- a/src/storage/parthelper.c
+++ b/src/storage/parthelper.c
@@ -59,18 +59,6 @@ enum diskCommand {
DISK_GEOMETRY
};
-static int
-is_dm_device(const char *devname)
-{
- struct stat buf;
-
- if (devname && !stat(devname, &buf) && dm_is_dm_major(major(buf.st_rdev))) {
- return 1;
- }
-
- return 0;
-}
-
int main(int argc, char **argv)
{
PedDevice *dev;
@@ -96,7 +84,7 @@ int main(int argc, char **argv)
}
path = argv[1];
- if (is_dm_device(path)) {
+ if (virIsDevMapperDevice(path)) {
partsep = "p";
canonical_path = strdup(path);
if (canonical_path == NULL) {
diff --git a/src/storage/storage_backend_disk.c b/src/storage/storage_backend_disk.c
index c7ade6b..70e3ba6 100644
--- a/src/storage/storage_backend_disk.c
+++ b/src/storage/storage_backend_disk.c
@@ -31,6 +31,7 @@
#include "storage_backend_disk.h"
#include "util.h"
#include "memory.h"
+#include "command.h"
#include "configmake.h"
#define VIR_FROM_THIS VIR_FROM_STORAGE
@@ -647,6 +648,8 @@ virStorageBackendDiskDeleteVol(virConnectPtr conn ATTRIBUTE_UNUSED,
char *part_num = NULL;
char *devpath = NULL;
char *devname, *srcname;
+ virCommandPtr cmd = NULL;
+ bool isDevMapperDevice;
int rc = -1;
if (virFileResolveLink(vol->target.path, &devpath) < 0) {
@@ -660,36 +663,43 @@ virStorageBackendDiskDeleteVol(virConnectPtr conn ATTRIBUTE_UNUSED,
srcname = basename(pool->def->source.devices[0].path);
DEBUG("devname=%s, srcname=%s", devname, srcname);
- if (!STRPREFIX(devname, srcname)) {
+ isDevMapperDevice = virIsDevMapperDevice(devpath);
+
+ if (!isDevMapperDevice && !STRPREFIX(devname, srcname)) {
virStorageReportError(VIR_ERR_INTERNAL_ERROR,
_("Volume path '%s' did not start with parent "
"pool source device name."), devname);
goto cleanup;
}
- part_num = devname + strlen(srcname);
+ if (!isDevMapperDevice) {
+ part_num = devname + strlen(srcname);
- if (*part_num == 0) {
- virStorageReportError(VIR_ERR_INTERNAL_ERROR,
- _("cannot parse partition number from target "
- "'%s'"), devname);
- goto cleanup;
- }
+ if (*part_num == 0) {
+ virStorageReportError(VIR_ERR_INTERNAL_ERROR,
+ _("cannot parse partition number from target "
+ "'%s'"), devname);
+ goto cleanup;
+ }
- /* eg parted /dev/sda rm 2 */
- const char *prog[] = {
- PARTED,
- pool->def->source.devices[0].path,
- "rm",
- "--script",
- part_num,
- NULL,
- };
+ cmd = virCommandNewArgList(PARTED,
+ pool->def->source.devices[0].path,
+ "rm",
+ "--script",
+ part_num,
+ NULL);
- if (virRun(prog, NULL) < 0)
- goto cleanup;
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+ } else {
+ cmd = virCommandNewArgList(DMSETUP, "remove", "--force", devpath, NULL);
+
+ if (virCommandRun(cmd, NULL) < 0)
+ goto cleanup;
+ }
rc = 0;
cleanup:
VIR_FREE(devpath);
+ virCommandFree(cmd);
return rc;
diff --git a/src/util/util.c b/src/util/util.c
index 6161be6..1ec61b8 100644
--- a/src/util/util.c
+++ b/src/util/util.c
@@ -45,6 +45,7 @@
#include <string.h>
#include <signal.h>
#include <termios.h>
+#include <libdevmapper.h>
#include "c-ctype.h"
#ifdef HAVE_PATHS_H
@@ -3123,3 +3124,17 @@ virTimestamp(void)
return timestamp;
}
+
+bool
+virIsDevMapperDevice(const char *devname)
+{
+ struct stat buf;
+
+ if (!stat(devname, &buf) &&
+ S_ISBLK(buf.st_mode) &&
+ dm_is_dm_major(major(buf.st_rdev)))
+ return true;
+
+ return false;
+}
+
diff --git a/src/util/util.h b/src/util/util.h
index 8373038..10d3253 100644
--- a/src/util/util.h
+++ b/src/util/util.h
@@ -296,4 +296,6 @@ int virBuildPathInternal(char **path, ...) ATTRIBUTE_SENTINEL;
char *virTimestamp(void);
+bool virIsDevMapperDevice(const char *devname) ATTRIBUTE_NONNULL(1);
+
#endif /* __VIR_UTIL_H__ */
--
1.7.4
Attempting to set autostart on a transient domain results in
ERR_OPERATION_INVALID, not ERR_INTERNAL_ERROR.
---
scripts/domain/051-transient-autostart.t | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/scripts/domain/051-transient-autostart.t b/scripts/domain/051-transient-autostart.t
index 3e5ea61..26ca82b 100644
--- a/scripts/domain/051-transient-autostart.t
+++ b/scripts/domain/051-transient-autostart.t
@@ -48,7 +48,7 @@ my $auto = $dom->get_autostart();
ok(!$auto, "autostart is disabled for transient VMs");
-ok_error(sub { $dom->set_autostart(1) }, "Set autostart not supported on transient VMs", Sys::Virt::Error::ERR_INTERNAL_ERROR);
+ok_error(sub { $dom->set_autostart(1) }, "Set autostart not supported on transient VMs", Sys::Virt::Error::ERR_OPERATION_INVALID);
diag "Destroying the transient domain";
$dom->destroy;
--
1.7.3.1