[libvirt] [PATCH 0/2] Add support for multiple serial ports into the Xen driver
by Michal Novotny
Hi,
this is the patch to add support for multiple serial ports to the
libvirt Xen driver. It supports both the old-style (serial = "pty") and
new-style (serial = [ "/dev/ttyS0", "/dev/ttyS1" ]) definitions, and
tests for xml2sexpr, sexpr2xml and xmconfig have been added as well.
Written and tested on a RHEL-5 Xen dom0 and working as designed,
although that Xen version has to carry the patch for RHBZ #614004;
this patch itself targets the upstream version of libvirt.
This patch also addresses the issue described in RHBZ #670789.
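As a quick illustration (the list form is taken from the test data added
in this series), the two accepted forms look like this in the xm config:

  serial = "pty"                              (old style, single string)
  serial = [ "/dev/ttyS0", "/dev/ttyS1" ]     (new style, list of ports)

and the list entries are mapped to consecutive target ports in the
domain XML:

  <serial type='dev'>
    <source path='/dev/ttyS0'/>
    <target port='0'/>
  </serial>
  <serial type='dev'>
    <source path='/dev/ttyS1'/>
    <target port='1'/>
  </serial>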
Differences between v2 and v3:
* Fixed handling of the port number in virDomainChrDefParseTargetXML()
* Fixed serial port handling when a device is defined on a non-zero port number
* Added a second test with the first port undefined (i.e. a port value > 0 for
  just one serial port)
Michal
Signed-off-by: Michal Novotny <minovotn@redhat.com>
Michal Novotny (2):
Fix virDomainChrDefParseTargetXML() to parse the target port if
available
Add support for multiple serial ports into the Xen driver
src/conf/domain_conf.c | 7 +-
src/xen/xend_internal.c | 86 ++++++++++--
src/xen/xm_internal.c | 137 ++++++++++++++++---
.../sexpr2xml-fv-serial-dev-2-ports.sexpr | 1 +
.../sexpr2xml-fv-serial-dev-2-ports.xml | 53 ++++++++
.../sexpr2xml-fv-serial-dev-2nd-port.sexpr | 1 +
.../sexpr2xml-fv-serial-dev-2nd-port.xml | 49 +++++++
tests/sexpr2xmltest.c | 2 +
.../test-fullvirt-serial-dev-2-ports.cfg | 25 ++++
.../test-fullvirt-serial-dev-2-ports.xml | 55 ++++++++
.../test-fullvirt-serial-dev-2nd-port.cfg | 25 ++++
.../test-fullvirt-serial-dev-2nd-port.xml | 53 ++++++++
tests/xmconfigtest.c | 2 +
.../xml2sexpr-fv-serial-dev-2-ports.sexpr | 1 +
.../xml2sexpr-fv-serial-dev-2-ports.xml | 44 +++++++
.../xml2sexpr-fv-serial-dev-2nd-port.sexpr | 1 +
.../xml2sexpr-fv-serial-dev-2nd-port.xml | 40 ++++++
tests/xml2sexprtest.c | 2 +
18 files changed, 546 insertions(+), 38 deletions(-)
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2nd-port.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2nd-port.xml
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2nd-port.cfg
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2nd-port.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2nd-port.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2nd-port.xml
--
1.7.3.2
[libvirt] [PATCH v2] Requires gettext for client package
by Osier Yang
libvirt-guests invokes functions from gettext.sh, so we need to
require the gettext package in the spec file.
Demo with the fix:
% rpm -q gettext
package gettext is not installed
% rpm -ivh libvirt-client-0.8.8-1.fc14.x86_64.rpm
error: Failed dependencies:
gettext is needed by libvirt-client-0.8.8-1.fc14.x86_64
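For context, here is a minimal sketch of the kind of gettext.sh usage
that libvirt-guests relies on (illustrative only; the path and the
message below are assumptions, not copied from the script):

  #!/bin/sh
  # gettext.sh is shipped by the gettext package and provides eval_gettext
  . /usr/bin/gettext.sh
  TEXTDOMAIN=libvirt
  export TEXTDOMAIN
  uri=qemu:///system
  # eval_gettext expands shell variables inside the translated message
  eval_gettext "Suspending guests on \$uri URI..."; echo

Without the gettext package installed, the '. /usr/bin/gettext.sh' line
fails, which is why the explicit Requires is needed.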
* libvirt.spec.in
---
libvirt.spec.in | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index d4208e8..c4b4e7f 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -415,6 +415,7 @@ Requires: ncurses
# So remote clients can access libvirt over SSH tunnel
# (client invokes 'nc' against the UNIX socket on the server)
Requires: nc
+Requires: gettext
%if %{with_sasl}
Requires: cyrus-sasl
# Not technically required, but makes 'out-of-box' config
--
1.7.4
[libvirt] cleanup patches
by Christophe Fergeau
Hi,
Here are 2 cleanup patches for 2 smallish issues I found when
looking at how virHash was used in libvirt.
Christophe
[libvirt] [PATCH] check device-mapper when building with mpath or disk storage driver
by Wen Congyang
When I build libvirt without libvirtd, I receive the following errors:
GEN virsh.1
CCLD virsh
../src/.libs/libvirt.so: undefined reference to `dm_is_dm_major'
collect2: ld returned 1 exit status
make[3]: *** [virsh] Error 1
My build steps:
# ./autogen.sh --without-libvirtd
# make dist
# rpmbuild --nodeps --define "_sourcedir `pwd`" --define "_without_libvirtd 1" -ba libvirt.spec
This bug was caused by commit df1011ca.
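For reference, the failure can be confirmed with something like the
following (illustrative commands against a tree configured with
--without-libvirtd; the output shown is what one would expect, not a
captured log): virIsDevMapperDevice() in util.c calls dm_is_dm_major()
from libdevmapper, but with neither the mpath nor the disk storage
driver enabled nothing links against libdevmapper any more, so the
symbol is left unresolved:

  $ nm -D src/.libs/libvirt.so | grep dm_is_dm_major
           U dm_is_dm_major
  $ readelf -d src/.libs/libvirt.so | grep devmapper
  (no output)

Guarding the function with WITH_STORAGE_MPATH/WITH_STORAGE_DISK, as done
below, removes the dangling reference.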
Signed-off-by: Wen Congyang <wency@cn.fujitsu.com>
---
configure.ac | 47 ++++++++++++++++++++++++-----------------------
src/util/util.c | 3 +++
2 files changed, 27 insertions(+), 23 deletions(-)
diff --git a/configure.ac b/configure.ac
index 2bb6918..2f22c18 100644
--- a/configure.ac
+++ b/configure.ac
@@ -1697,29 +1697,6 @@ if test "$with_storage_mpath" = "check" && test "$with_linux" = "yes"; then
fi
AM_CONDITIONAL([WITH_STORAGE_MPATH], [test "$with_storage_mpath" = "yes"])
-if test "$with_storage_mpath" = "yes"; then
- DEVMAPPER_CFLAGS=
- DEVMAPPER_LIBS=
- PKG_CHECK_MODULES([DEVMAPPER], [devmapper >= $DEVMAPPER_REQUIRED], [], [DEVMAPPER_FOUND=no])
- if test "$DEVMAPPER_FOUND" = "no"; then
- # devmapper is missing pkg-config files in ubuntu, suse, etc
- save_LIBS="$LIBS"
- save_CFLAGS="$CFLAGS"
- DEVMAPPER_FOUND=yes
- AC_CHECK_HEADER([libdevmapper.h],,[DEVMAPPER_FOUND=no])
- AC_CHECK_LIB([devmapper], [dm_task_run],,[DEVMAPPER_FOUND=no])
- DEVMAPPER_LIBS="-ldevmapper"
- LIBS="$save_LIBS"
- CFLAGS="$save_CFLAGS"
- fi
- if test "$DEVMAPPER_FOUND" = "no" ; then
- AC_MSG_ERROR([You must install device-mapper-devel/libdevmapper >= $DEVMAPPER_REQUIRED to compile libvirt])
- fi
-
-fi
-AC_SUBST([DEVMAPPER_CFLAGS])
-AC_SUBST([DEVMAPPER_LIBS])
-
LIBPARTED_CFLAGS=
LIBPARTED_LIBS=
if test "$with_storage_disk" = "yes" ||
@@ -1781,6 +1758,30 @@ AM_CONDITIONAL([WITH_STORAGE_DISK], [test "$with_storage_disk" = "yes"])
AC_SUBST([LIBPARTED_CFLAGS])
AC_SUBST([LIBPARTED_LIBS])
+if test "$with_storage_mpath" = "yes" ||
+ test "$with_storage_disk" = "yes"; then
+ DEVMAPPER_CFLAGS=
+ DEVMAPPER_LIBS=
+ PKG_CHECK_MODULES([DEVMAPPER], [devmapper >= $DEVMAPPER_REQUIRED], [], [DEVMAPPER_FOUND=no])
+ if test "$DEVMAPPER_FOUND" = "no"; then
+ # devmapper is missing pkg-config files in ubuntu, suse, etc
+ save_LIBS="$LIBS"
+ save_CFLAGS="$CFLAGS"
+ DEVMAPPER_FOUND=yes
+ AC_CHECK_HEADER([libdevmapper.h],,[DEVMAPPER_FOUND=no])
+ AC_CHECK_LIB([devmapper], [dm_task_run],,[DEVMAPPER_FOUND=no])
+ DEVMAPPER_LIBS="-ldevmapper"
+ LIBS="$save_LIBS"
+ CFLAGS="$save_CFLAGS"
+ fi
+ if test "$DEVMAPPER_FOUND" = "no" ; then
+ AC_MSG_ERROR([You must install device-mapper-devel/libdevmapper >= $DEVMAPPER_REQUIRED to compile libvirt])
+ fi
+
+fi
+AC_SUBST([DEVMAPPER_CFLAGS])
+AC_SUBST([DEVMAPPER_LIBS])
+
dnl
dnl check for libcurl (ESX/XenAPI)
dnl
diff --git a/src/util/util.c b/src/util/util.c
index a7ce4b3..cd4ddf3 100644
--- a/src/util/util.c
+++ b/src/util/util.c
@@ -3125,6 +3125,8 @@ virTimestamp(void)
return timestamp;
}
+#if defined WITH_STORAGE_MPATH || defined WITH_STORAGE_DISK
+/* Currently only disk and mpath storage drivers use this function. */
bool
virIsDevMapperDevice(const char *devname)
{
@@ -3137,3 +3139,4 @@ virIsDevMapperDevice(const char *devname)
return false;
}
+#endif
--
1.7.1
[libvirt] Driver ESX
by Cédric Penas
Hi,
I am currently working on a project where I must develop a function to
clone a domain for the ESX driver.
I immediately saw that the cloneVM_Task() function from the VMware API
is not supported on standalone ESX.
Does anyone know the best way to implement this function anyway, or a
workaround?
Thank you in advance,
Cédric
[libvirt] [PATCH] Add support for multiple serial ports into the Xen driver
by Michal Novotny
Hi,
this is the patch to add support for multiple serial ports to the
libvirt Xen driver. It supports both the old-style (serial = "pty") and
new-style (serial = [ "/dev/ttyS0", "/dev/ttyS1" ]) definitions, and
tests for xml2sexpr, sexpr2xml and xmconfig have been added as well.
Written and tested on a RHEL-5 Xen dom0 and working as designed,
although that Xen version has to carry the patch for RHBZ #614004.
This patch also addresses the issue described in RHBZ #670789.
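To illustrate the generated s-expression (based on the formatting code
and the test data in this patch), a single port keeps the old layout
while multiple ports are emitted as a nested list:

  one port  (old layout preserved):    (serial /dev/ttyS0)
  two ports (new nested-list layout):  (serial (/dev/ttyS0 /dev/ttyS1))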
Michal
Signed-off-by: Michal Novotny <minovotn@redhat.com>
---
src/xen/xend_internal.c | 73 ++++++++++-
src/xen/xm_internal.c | 141 +++++++++++++++++---
.../sexpr2xml-fv-serial-dev-2-ports.sexpr | 1 +
.../sexpr2xml-fv-serial-dev-2-ports.xml | 53 ++++++++
tests/sexpr2xmltest.c | 1 +
.../test-fullvirt-serial-dev-2-ports.cfg | 25 ++++
.../test-fullvirt-serial-dev-2-ports.xml | 55 ++++++++
tests/xmconfigtest.c | 1 +
.../xml2sexpr-fv-serial-dev-2-ports.sexpr | 1 +
.../xml2sexpr-fv-serial-dev-2-ports.xml | 44 ++++++
tests/xml2sexprtest.c | 1 +
11 files changed, 370 insertions(+), 26 deletions(-)
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
create mode 100644 tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
create mode 100644 tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 44d5a22..493736e 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -1218,6 +1218,9 @@ xenDaemonParseSxprChar(const char *value,
if (value[0] == '/') {
def->source.type = VIR_DOMAIN_CHR_TYPE_DEV;
+ def->data.file.path = strdup(value);
+ if (!def->data.file.path)
+ goto error;
} else {
if ((tmp = strchr(value, ':')) != NULL) {
*tmp = '\0';
@@ -2334,6 +2337,8 @@ xenDaemonParseSxpr(virConnectPtr conn,
tty = xenStoreDomainGetConsolePath(conn, def->id);
xenUnifiedUnlock(priv);
if (hvm) {
+
+
tmp = sexpr_node(root, "domain/image/hvm/serial");
if (tmp && STRNEQ(tmp, "none")) {
virDomainChrDefPtr chr;
@@ -2346,6 +2351,54 @@ xenDaemonParseSxpr(virConnectPtr conn,
chr->deviceType = VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL;
def->serials[def->nserials++] = chr;
}
+
+
+ const struct sexpr *serial_root;
+ bool have_multiple_serials = false;
+
+ serial_root = sexpr_lookup(root, "domain/image/hvm/serial");
+ if (serial_root) {
+ const struct sexpr *cur, *node, *cur2;
+
+ for (cur = serial_root; cur->kind == SEXPR_CONS; cur = cur->u.s.cdr) {
+ node = cur->u.s.car;
+
+ for (cur2 = node; cur2->kind == SEXPR_CONS; cur2 = cur2->u.s.cdr) {
+ tmp = cur2->u.s.car->u.value;
+
+ if (tmp && STRNEQ(tmp, "none")) {
+ virDomainChrDefPtr chr;
+ if ((chr = xenDaemonParseSxprChar(tmp, tty)) == NULL)
+ goto error;
+ if (VIR_REALLOC_N(def->serials, def->nserials+1) < 0) {
+ virDomainChrDefFree(chr);
+ goto no_memory;
+ }
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ chr->target.port = def->nserials;
+ def->serials[def->nserials++] = chr;
+ }
+ have_multiple_serials = true;
+ }
+ }
+ }
+
+ /* If no serial port has been defined (using the new-style definition) use the old way */
+ if (!have_multiple_serials) {
+ tmp = sexpr_node(root, "domain/image/hvm/serial");
+ if (tmp && STRNEQ(tmp, "none")) {
+ virDomainChrDefPtr chr;
+ if ((chr = xenDaemonParseSxprChar(tmp, tty)) == NULL)
+ goto error;
+ if (VIR_REALLOC_N(def->serials, def->nserials+1) < 0) {
+ virDomainChrDefFree(chr);
+ goto no_memory;
+ }
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ def->serials[def->nserials++] = chr;
+ }
+ }
+
tmp = sexpr_node(root, "domain/image/hvm/parallel");
if (tmp && STRNEQ(tmp, "none")) {
virDomainChrDefPtr chr;
@@ -5958,10 +6011,22 @@ xenDaemonFormatSxpr(virConnectPtr conn,
virBufferAddLit(&buf, "(parallel none)");
}
if (def->serials) {
- virBufferAddLit(&buf, "(serial ");
- if (xenDaemonFormatSxprChr(def->serials[0], &buf) < 0)
- goto error;
- virBufferAddLit(&buf, ")");
+ if (def->nserials > 1) {
+ virBufferAddLit(&buf, "(serial (");
+ for (i = 0; i < def->nserials; i++) {
+ if (xenDaemonFormatSxprChr(def->serials[i], &buf) < 0)
+ goto error;
+ if (i < def->nserials - 1)
+ virBufferAddLit(&buf, " ");
+ }
+ virBufferAddLit(&buf, "))");
+ }
+ else {
+ virBufferAddLit(&buf, "(serial ");
+ if (xenDaemonFormatSxprChr(def->serials[0], &buf) < 0)
+ goto error;
+ virBufferAddLit(&buf, ")");
+ }
} else {
virBufferAddLit(&buf, "(serial none)");
}
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index bfb6698..f457d80 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -1432,20 +1432,54 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
chr = NULL;
}
- if (xenXMConfigGetString(conf, "serial", &str, NULL) < 0)
- goto cleanup;
- if (str && STRNEQ(str, "none") &&
- !(chr = xenDaemonParseSxprChar(str, NULL)))
- goto cleanup;
+ /* Try to get the list of values to support multiple serial ports */
+ list = virConfGetValue(conf, "serial");
+ if (list && list->type == VIR_CONF_LIST) {
+ list = list->list;
+ while (list) {
+ char *port;
+
+ if ((list->type != VIR_CONF_STRING) || (list->str == NULL)) {
+ xenXMError(VIR_ERR_INTERNAL_ERROR,
+ _("config value %s was malformed"), port);
+ goto cleanup;
+ }
- if (chr) {
- if (VIR_ALLOC_N(def->serials, 1) < 0) {
- virDomainChrDefFree(chr);
- goto no_memory;
+ port = list->str;
+ if (VIR_ALLOC(chr) < 0)
+ goto no_memory;
+ if (!(chr = xenDaemonParseSxprChar(port, NULL)))
+ goto cleanup;
+
+ if (VIR_REALLOC_N(def->serials, def->nserials+1) < 0)
+ goto no_memory;
+
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ chr->target.port = def->nserials;
+
+ def->serials[def->nserials++] = chr;
+ chr = NULL;
+
+ list = list->next;
+ }
+ }
+ /* If domain is not using multiple serial ports we parse data old way */
+ else {
+ if (xenXMConfigGetString(conf, "serial", &str, NULL) < 0)
+ goto cleanup;
+ if (str && STRNEQ(str, "none") &&
+ !(chr = xenDaemonParseSxprChar(str, NULL)))
+ goto cleanup;
+
+ if (chr) {
+ if (VIR_ALLOC_N(def->serials, 1) < 0) {
+ virDomainChrDefFree(chr);
+ goto no_memory;
+ }
+ chr->targetType = VIR_DOMAIN_CHR_TARGET_TYPE_SERIAL;
+ def->serials[0] = chr;
+ def->nserials++;
}
- chr->deviceType = VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL;
- def->serials[0] = chr;
- def->nserials++;
}
} else {
if (!(def->console = xenDaemonParseSxprChar("pty", NULL)))
@@ -2123,6 +2157,45 @@ cleanup:
return -1;
}
+static int xenXMDomainConfigFormatSerial(virConfValuePtr list,
+ virDomainChrDefPtr serial)
+{
+ virBuffer buf = VIR_BUFFER_INITIALIZER;
+ virConfValuePtr val, tmp;
+ int ret;
+
+ ret = xenDaemonFormatSxprChr(serial, &buf);
+ if (ret < 0) {
+ virReportOOMError();
+ goto cleanup;
+ }
+ if (virBufferError(&buf)) {
+ virReportOOMError();
+ goto cleanup;
+ }
+
+ if (VIR_ALLOC(val) < 0) {
+ virReportOOMError();
+ goto cleanup;
+ }
+
+ val->type = VIR_CONF_STRING;
+ val->str = virBufferContentAndReset(&buf);
+ tmp = list->list;
+ while (tmp && tmp->next)
+ tmp = tmp->next;
+ if (tmp)
+ tmp->next = val;
+ else
+ list->list = val;
+
+ return 0;
+
+cleanup:
+ virBufferFreeAndReset(&buf);
+ return -1;
+}
+
static int xenXMDomainConfigFormatNet(virConnectPtr conn,
virConfValuePtr list,
virDomainNetDefPtr net,
@@ -2685,17 +2758,41 @@ virConfPtr xenXMDomainConfigFormat(virConnectPtr conn,
}
if (def->nserials) {
- virBuffer buf = VIR_BUFFER_INITIALIZER;
- char *str;
- int ret;
+ /* If there's a single serial port definition use the old approach not to break old configs */
+ if (def->nserials == 1) {
+ virBuffer buf = VIR_BUFFER_INITIALIZER;
+ char *str;
+ int ret;
+
+ ret = xenDaemonFormatSxprChr(def->serials[0], &buf);
+ str = virBufferContentAndReset(&buf);
+ if (ret == 0)
+ ret = xenXMConfigSetString(conf, "serial", str);
+ VIR_FREE(str);
+ if (ret < 0)
+ goto no_memory;
+ }
+ else {
+ virConfValuePtr serialVal = NULL;
- ret = xenDaemonFormatSxprChr(def->serials[0], &buf);
- str = virBufferContentAndReset(&buf);
- if (ret == 0)
- ret = xenXMConfigSetString(conf, "serial", str);
- VIR_FREE(str);
- if (ret < 0)
- goto no_memory;
+ if (VIR_ALLOC(serialVal) < 0)
+ goto no_memory;
+ serialVal->type = VIR_CONF_LIST;
+ serialVal->list = NULL;
+
+ for (i = 0; i < def->nserials; i++) {
+ if (xenXMDomainConfigFormatSerial(serialVal, def->serials[i]) < 0)
+ goto cleanup;
+ }
+
+ if (serialVal->list != NULL) {
+ int ret = virConfSetValue(conf, "serial", serialVal);
+ serialVal = NULL;
+ if (ret < 0)
+ goto no_memory;
+ }
+ VIR_FREE(serialVal);
+ }
} else {
if (xenXMConfigSetString(conf, "serial", "none") < 0)
goto no_memory;
diff --git a/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
new file mode 100644
index 0000000..e709eb0
--- /dev/null
+++ b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.sexpr
@@ -0,0 +1 @@
+(domain (domid 1)(name 'fvtest')(memory 400)(maxmem 400)(vcpus 1)(uuid 'b5d70dd275cdaca517769660b059d8ff')(on_poweroff 'destroy')(on_reboot 'restart')(on_crash 'restart')(image (hvm (kernel '/usr/lib/xen/boot/hvmloader')(vcpus 1)(boot c)(cdrom '/root/boot.iso')(acpi 1)(usb 1)(parallel none)(serial (/dev/ttyS0 /dev/ttyS1))(device_model '/usr/lib64/xen/bin/qemu-dm')(vnc 1)))(device (vbd (dev 'ioemu:hda')(uname 'file:/root/foo.img')(mode 'w')))(device (vif (mac '00:16:3e:1b:b1:47')(bridge 'xenbr0')(script 'vif-bridge')(type ioemu))))
diff --git a/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
new file mode 100644
index 0000000..783cd67
--- /dev/null
+++ b/tests/sexpr2xmldata/sexpr2xml-fv-serial-dev-2-ports.xml
@@ -0,0 +1,53 @@
+<domain type='xen' id='1'>
+ <name>fvtest</name>
+ <uuid>b5d70dd2-75cd-aca5-1776-9660b059d8ff</uuid>
+ <memory>409600</memory>
+ <currentMemory>409600</currentMemory>
+ <vcpu>1</vcpu>
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ <features>
+ <acpi/>
+ </features>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
+ <disk type='file' device='disk'>
+ <driver name='file'/>
+ <source file='/root/foo.img'/>
+ <target dev='hda' bus='ide'/>
+ </disk>
+ <disk type='file' device='cdrom'>
+ <driver name='file'/>
+ <source file='/root/boot.iso'/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='00:16:3e:1b:b1:47'/>
+ <source bridge='xenbr0'/>
+ <script path='vif-bridge'/>
+ <target dev='vif1.0'/>
+ </interface>
+ <serial type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </serial>
+ <serial type='dev'>
+ <source path='/dev/ttyS1'/>
+ <target port='1'/>
+ </serial>
+ <console type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </console>
+ <input type='mouse' bus='ps2'/>
+ <graphics type='vnc' port='5901' autoport='no'/>
+ </devices>
+</domain>
diff --git a/tests/sexpr2xmltest.c b/tests/sexpr2xmltest.c
index f100dd8..4b5766d 100644
--- a/tests/sexpr2xmltest.c
+++ b/tests/sexpr2xmltest.c
@@ -158,6 +158,7 @@ mymain(int argc, char **argv)
DO_TEST("fv-serial-null", "fv-serial-null", 1);
DO_TEST("fv-serial-file", "fv-serial-file", 1);
+ DO_TEST("fv-serial-dev-2-ports", "fv-serial-dev-2-ports", 1);
DO_TEST("fv-serial-stdio", "fv-serial-stdio", 1);
DO_TEST("fv-serial-pty", "fv-serial-pty", 1);
DO_TEST("fv-serial-pipe", "fv-serial-pipe", 1);
diff --git a/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
new file mode 100644
index 0000000..86e7998
--- /dev/null
+++ b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.cfg
@@ -0,0 +1,25 @@
+name = "XenGuest2"
+uuid = "c7a5fdb2-cdaf-9455-926a-d65c16db1809"
+maxmem = 579
+memory = 394
+vcpus = 1
+builder = "hvm"
+kernel = "/usr/lib/xen/boot/hvmloader"
+boot = "d"
+pae = 1
+acpi = 1
+apic = 1
+localtime = 0
+on_poweroff = "destroy"
+on_reboot = "restart"
+on_crash = "restart"
+device_model = "/usr/lib/xen/bin/qemu-dm"
+sdl = 0
+vnc = 1
+vncunused = 1
+vnclisten = "127.0.0.1"
+vncpasswd = "123poi"
+disk = [ "phy:/dev/HostVG/XenGuest2,hda,w", "file:/root/boot.iso,hdc:cdrom,r" ]
+vif = [ "mac=00:16:3e:66:92:9c,bridge=xenbr1,script=vif-bridge,model=e1000,type=ioemu" ]
+parallel = "none"
+serial = [ "/dev/ttyS0", "/dev/ttyS1" ]
diff --git a/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
new file mode 100644
index 0000000..e4d3f16
--- /dev/null
+++ b/tests/xmconfigdata/test-fullvirt-serial-dev-2-ports.xml
@@ -0,0 +1,55 @@
+<domain type='xen'>
+ <name>XenGuest2</name>
+ <uuid>c7a5fdb2-cdaf-9455-926a-d65c16db1809</uuid>
+ <memory>592896</memory>
+ <currentMemory>403456</currentMemory>
+ <vcpu>1</vcpu>
+ <os>
+ <type arch='i686' machine='xenfv'>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='cdrom'/>
+ </os>
+ <features>
+ <acpi/>
+ <apic/>
+ <pae/>
+ </features>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
+ <disk type='block' device='disk'>
+ <driver name='phy'/>
+ <source dev='/dev/HostVG/XenGuest2'/>
+ <target dev='hda' bus='ide'/>
+ </disk>
+ <disk type='file' device='cdrom'>
+ <driver name='file'/>
+ <source file='/root/boot.iso'/>
+ <target dev='hdc' bus='ide'/>
+ <readonly/>
+ </disk>
+ <interface type='bridge'>
+ <mac address='00:16:3e:66:92:9c'/>
+ <source bridge='xenbr1'/>
+ <script path='vif-bridge'/>
+ <model type='e1000'/>
+ </interface>
+ <serial type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </serial>
+ <serial type='dev'>
+ <source path='/dev/ttyS1'/>
+ <target port='1'/>
+ </serial>
+ <console type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </console>
+ <input type='mouse' bus='ps2'/>
+ <graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1' passwd='123poi'/>
+ </devices>
+</domain>
diff --git a/tests/xmconfigtest.c b/tests/xmconfigtest.c
index ea00747..0de890c 100644
--- a/tests/xmconfigtest.c
+++ b/tests/xmconfigtest.c
@@ -218,6 +218,7 @@ mymain(int argc, char **argv)
DO_TEST("fullvirt-usbtablet", 2);
DO_TEST("fullvirt-usbmouse", 2);
DO_TEST("fullvirt-serial-file", 2);
+ DO_TEST("fullvirt-serial-dev-2-ports", 2);
DO_TEST("fullvirt-serial-null", 2);
DO_TEST("fullvirt-serial-pipe", 2);
DO_TEST("fullvirt-serial-pty", 2);
diff --git a/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
new file mode 100644
index 0000000..2048159
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.sexpr
@@ -0,0 +1 @@
+(vm (name 'fvtest')(memory 400)(maxmem 400)(vcpus 1)(uuid 'b5d70dd2-75cd-aca5-1776-9660b059d8bc')(on_poweroff 'destroy')(on_reboot 'restart')(on_crash 'restart')(image (hvm (kernel '/usr/lib/xen/boot/hvmloader')(vcpus 1)(boot c)(cdrom '/root/boot.iso')(acpi 1)(usb 1)(parallel none)(serial (/dev/ttyS0 /dev/ttyS1))(device_model '/usr/lib64/xen/bin/qemu-dm')(vnc 1)))(device (vbd (dev 'ioemu:hda')(uname 'file:/root/foo.img')(mode 'w')))(device (vif (mac '00:16:3e:1b:b1:47')(bridge 'xenbr0')(script 'vif-bridge')(model 'e1000')(type ioemu))))
\ No newline at end of file
diff --git a/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
new file mode 100644
index 0000000..e5d1817
--- /dev/null
+++ b/tests/xml2sexprdata/xml2sexpr-fv-serial-dev-2-ports.xml
@@ -0,0 +1,44 @@
+<domain type='xen'>
+ <name>fvtest</name>
+ <uuid>b5d70dd275cdaca517769660b059d8bc</uuid>
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ <memory>409600</memory>
+ <vcpu>1</vcpu>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <features>
+ <acpi/>
+ </features>
+ <devices>
+ <emulator>/usr/lib64/xen/bin/qemu-dm</emulator>
+ <interface type='bridge'>
+ <source bridge='xenbr0'/>
+ <mac address='00:16:3e:1b:b1:47'/>
+ <script path='vif-bridge'/>
+ <model type='e1000'/>
+ </interface>
+ <disk type='file' device='cdrom'>
+ <source file='/root/boot.iso'/>
+ <target dev='hdc'/>
+ <readonly/>
+ </disk>
+ <disk type='file'>
+ <source file='/root/foo.img'/>
+ <target dev='ioemu:hda'/>
+ </disk>
+ <serial type='dev'>
+ <source path='/dev/ttyS0'/>
+ <target port='0'/>
+ </serial>
+ <serial type='dev'>
+ <source path='/dev/ttyS1'/>
+ <target port='1'/>
+ </serial>
+ <graphics type='vnc' port='5917' keymap='ja'/>
+ </devices>
+</domain>
diff --git a/tests/xml2sexprtest.c b/tests/xml2sexprtest.c
index 8a5d115..ed10dec 100644
--- a/tests/xml2sexprtest.c
+++ b/tests/xml2sexprtest.c
@@ -148,6 +148,7 @@ mymain(int argc, char **argv)
DO_TEST("fv-serial-null", "fv-serial-null", "fvtest", 1);
DO_TEST("fv-serial-file", "fv-serial-file", "fvtest", 1);
+ DO_TEST("fv-serial-dev-2-ports", "fv-serial-dev-2-ports", "fvtest", 1);
DO_TEST("fv-serial-stdio", "fv-serial-stdio", "fvtest", 1);
DO_TEST("fv-serial-pty", "fv-serial-pty", "fvtest", 1);
DO_TEST("fv-serial-pipe", "fv-serial-pipe", "fvtest", 1);
--
1.7.3.2
[libvirt] [PATCH] maint: Expand tabs in python code
by Jiri Denemark
Also, cfg.mk is tweaked to enforce this for all future changes to *.py
files.
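For reference, the mechanical conversion itself can be reproduced with
something like the following (a sketch; the exact command used is not
part of this patch, and 8-column tab stops are assumed):

  for f in docs/apibuild.py docs/index.py python/generator.py \
           python/tests/create.py; do
      expand --tabs=8 "$f" > "$f.new" && mv "$f.new" "$f"
  done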
---
cfg.mk | 4 +-
docs/apibuild.py | 3194 ++++++++++++++++++++++++------------------------
docs/index.py | 834 +++++++-------
python/generator.py | 578 +++++-----
python/tests/create.py | 8 +-
5 files changed, 2309 insertions(+), 2309 deletions(-)
diff --git a/cfg.mk b/cfg.mk
index 4bbeb96..f870723 100644
--- a/cfg.mk
+++ b/cfg.mk
@@ -327,8 +327,8 @@ sc_prohibit_ctype_h:
# files in gnulib, since they're imported.
sc_TAB_in_indentation:
@prohibit='^ * ' \
- in_vc_files='(\.(rng|[ch](\.in)?|html.in)|(daemon|tools)/.*\.in)$$' \
- halt='use leading spaces, not TAB, in C, sh, html, and RNG schemas' \
+ in_vc_files='(\.(rng|[ch](\.in)?|html.in|py)|(daemon|tools)/.*\.in)$$' \
+ halt='use leading spaces, not TAB, in C, sh, html, py, and RNG schemas' \
$(_sc_search_regexp)
ctype_re = isalnum|isalpha|isascii|isblank|iscntrl|isdigit|isgraph|islower\
diff --git a/docs/apibuild.py b/docs/apibuild.py
index 895a313..506932d 100755
--- a/docs/apibuild.py
+++ b/docs/apibuild.py
@@ -72,34 +72,34 @@ class identifier:
def __init__(self, name, header=None, module=None, type=None, lineno = 0,
info=None, extra=None, conditionals = None):
self.name = name
- self.header = header
- self.module = module
- self.type = type
- self.info = info
- self.extra = extra
- self.lineno = lineno
- self.static = 0
- if conditionals == None or len(conditionals) == 0:
- self.conditionals = None
- else:
- self.conditionals = conditionals[:]
- if self.name == debugsym:
- print "=> define %s : %s" % (debugsym, (module, type, info,
- extra, conditionals))
+ self.header = header
+ self.module = module
+ self.type = type
+ self.info = info
+ self.extra = extra
+ self.lineno = lineno
+ self.static = 0
+ if conditionals == None or len(conditionals) == 0:
+ self.conditionals = None
+ else:
+ self.conditionals = conditionals[:]
+ if self.name == debugsym:
+ print "=> define %s : %s" % (debugsym, (module, type, info,
+ extra, conditionals))
def __repr__(self):
r = "%s %s:" % (self.type, self.name)
- if self.static:
- r = r + " static"
- if self.module != None:
- r = r + " from %s" % (self.module)
- if self.info != None:
- r = r + " " + `self.info`
- if self.extra != None:
- r = r + " " + `self.extra`
- if self.conditionals != None:
- r = r + " " + `self.conditionals`
- return r
+ if self.static:
+ r = r + " static"
+ if self.module != None:
+ r = r + " from %s" % (self.module)
+ if self.info != None:
+ r = r + " " + `self.info`
+ if self.extra != None:
+ r = r + " " + `self.extra`
+ if self.conditionals != None:
+ r = r + " " + `self.conditionals`
+ return r
def set_header(self, header):
@@ -117,10 +117,10 @@ class identifier:
def set_static(self, static):
self.static = static
def set_conditionals(self, conditionals):
- if conditionals == None or len(conditionals) == 0:
- self.conditionals = None
- else:
- self.conditionals = conditionals[:]
+ if conditionals == None or len(conditionals) == 0:
+ self.conditionals = None
+ else:
+ self.conditionals = conditionals[:]
def get_name(self):
return self.name
@@ -143,96 +143,96 @@ class identifier:
def update(self, header, module, type = None, info = None, extra=None,
conditionals=None):
- if self.name == debugsym:
- print "=> update %s : %s" % (debugsym, (module, type, info,
- extra, conditionals))
+ if self.name == debugsym:
+ print "=> update %s : %s" % (debugsym, (module, type, info,
+ extra, conditionals))
if header != None and self.header == None:
- self.set_header(module)
+ self.set_header(module)
if module != None and (self.module == None or self.header == self.module):
- self.set_module(module)
+ self.set_module(module)
if type != None and self.type == None:
- self.set_type(type)
+ self.set_type(type)
if info != None:
- self.set_info(info)
+ self.set_info(info)
if extra != None:
- self.set_extra(extra)
+ self.set_extra(extra)
if conditionals != None:
- self.set_conditionals(conditionals)
+ self.set_conditionals(conditionals)
class index:
def __init__(self, name = "noname"):
self.name = name
self.identifiers = {}
self.functions = {}
- self.variables = {}
- self.includes = {}
- self.structs = {}
- self.enums = {}
- self.typedefs = {}
- self.macros = {}
- self.references = {}
- self.info = {}
+ self.variables = {}
+ self.includes = {}
+ self.structs = {}
+ self.enums = {}
+ self.typedefs = {}
+ self.macros = {}
+ self.references = {}
+ self.info = {}
def add_ref(self, name, header, module, static, type, lineno, info=None, extra=None, conditionals = None):
if name[0:2] == '__':
- return None
+ return None
d = None
try:
- d = self.identifiers[name]
- d.update(header, module, type, lineno, info, extra, conditionals)
- except:
- d = identifier(name, header, module, type, lineno, info, extra, conditionals)
- self.identifiers[name] = d
+ d = self.identifiers[name]
+ d.update(header, module, type, lineno, info, extra, conditionals)
+ except:
+ d = identifier(name, header, module, type, lineno, info, extra, conditionals)
+ self.identifiers[name] = d
- if d != None and static == 1:
- d.set_static(1)
+ if d != None and static == 1:
+ d.set_static(1)
- if d != None and name != None and type != None:
- self.references[name] = d
+ if d != None and name != None and type != None:
+ self.references[name] = d
- if name == debugsym:
- print "New ref: %s" % (d)
+ if name == debugsym:
+ print "New ref: %s" % (d)
- return d
+ return d
def add(self, name, header, module, static, type, lineno, info=None, extra=None, conditionals = None):
if name[0:2] == '__':
- return None
+ return None
d = None
try:
- d = self.identifiers[name]
- d.update(header, module, type, lineno, info, extra, conditionals)
- except:
- d = identifier(name, header, module, type, lineno, info, extra, conditionals)
- self.identifiers[name] = d
-
- if d != None and static == 1:
- d.set_static(1)
-
- if d != None and name != None and type != None:
- if type == "function":
- self.functions[name] = d
- elif type == "functype":
- self.functions[name] = d
- elif type == "variable":
- self.variables[name] = d
- elif type == "include":
- self.includes[name] = d
- elif type == "struct":
- self.structs[name] = d
- elif type == "enum":
- self.enums[name] = d
- elif type == "typedef":
- self.typedefs[name] = d
- elif type == "macro":
- self.macros[name] = d
- else:
- print "Unable to register type ", type
-
- if name == debugsym:
- print "New symbol: %s" % (d)
-
- return d
+ d = self.identifiers[name]
+ d.update(header, module, type, lineno, info, extra, conditionals)
+ except:
+ d = identifier(name, header, module, type, lineno, info, extra, conditionals)
+ self.identifiers[name] = d
+
+ if d != None and static == 1:
+ d.set_static(1)
+
+ if d != None and name != None and type != None:
+ if type == "function":
+ self.functions[name] = d
+ elif type == "functype":
+ self.functions[name] = d
+ elif type == "variable":
+ self.variables[name] = d
+ elif type == "include":
+ self.includes[name] = d
+ elif type == "struct":
+ self.structs[name] = d
+ elif type == "enum":
+ self.enums[name] = d
+ elif type == "typedef":
+ self.typedefs[name] = d
+ elif type == "macro":
+ self.macros[name] = d
+ else:
+ print "Unable to register type ", type
+
+ if name == debugsym:
+ print "New symbol: %s" % (d)
+
+ return d
def merge(self, idx):
for id in idx.functions.keys():
@@ -240,41 +240,41 @@ class index:
# macro might be used to override functions or variables
# definitions
#
- if self.macros.has_key(id):
- del self.macros[id]
- if self.functions.has_key(id):
- print "function %s from %s redeclared in %s" % (
- id, self.functions[id].header, idx.functions[id].header)
- else:
- self.functions[id] = idx.functions[id]
- self.identifiers[id] = idx.functions[id]
+ if self.macros.has_key(id):
+ del self.macros[id]
+ if self.functions.has_key(id):
+ print "function %s from %s redeclared in %s" % (
+ id, self.functions[id].header, idx.functions[id].header)
+ else:
+ self.functions[id] = idx.functions[id]
+ self.identifiers[id] = idx.functions[id]
for id in idx.variables.keys():
#
# macro might be used to override functions or variables
# definitions
#
- if self.macros.has_key(id):
- del self.macros[id]
- if self.variables.has_key(id):
- print "variable %s from %s redeclared in %s" % (
- id, self.variables[id].header, idx.variables[id].header)
- else:
- self.variables[id] = idx.variables[id]
- self.identifiers[id] = idx.variables[id]
+ if self.macros.has_key(id):
+ del self.macros[id]
+ if self.variables.has_key(id):
+ print "variable %s from %s redeclared in %s" % (
+ id, self.variables[id].header, idx.variables[id].header)
+ else:
+ self.variables[id] = idx.variables[id]
+ self.identifiers[id] = idx.variables[id]
for id in idx.structs.keys():
- if self.structs.has_key(id):
- print "struct %s from %s redeclared in %s" % (
- id, self.structs[id].header, idx.structs[id].header)
- else:
- self.structs[id] = idx.structs[id]
- self.identifiers[id] = idx.structs[id]
+ if self.structs.has_key(id):
+ print "struct %s from %s redeclared in %s" % (
+ id, self.structs[id].header, idx.structs[id].header)
+ else:
+ self.structs[id] = idx.structs[id]
+ self.identifiers[id] = idx.structs[id]
for id in idx.typedefs.keys():
- if self.typedefs.has_key(id):
- print "typedef %s from %s redeclared in %s" % (
- id, self.typedefs[id].header, idx.typedefs[id].header)
- else:
- self.typedefs[id] = idx.typedefs[id]
- self.identifiers[id] = idx.typedefs[id]
+ if self.typedefs.has_key(id):
+ print "typedef %s from %s redeclared in %s" % (
+ id, self.typedefs[id].header, idx.typedefs[id].header)
+ else:
+ self.typedefs[id] = idx.typedefs[id]
+ self.identifiers[id] = idx.typedefs[id]
for id in idx.macros.keys():
#
# macro might be used to override functions or variables
@@ -286,88 +286,88 @@ class index:
continue
if self.enums.has_key(id):
continue
- if self.macros.has_key(id):
- print "macro %s from %s redeclared in %s" % (
- id, self.macros[id].header, idx.macros[id].header)
- else:
- self.macros[id] = idx.macros[id]
- self.identifiers[id] = idx.macros[id]
+ if self.macros.has_key(id):
+ print "macro %s from %s redeclared in %s" % (
+ id, self.macros[id].header, idx.macros[id].header)
+ else:
+ self.macros[id] = idx.macros[id]
+ self.identifiers[id] = idx.macros[id]
for id in idx.enums.keys():
- if self.enums.has_key(id):
- print "enum %s from %s redeclared in %s" % (
- id, self.enums[id].header, idx.enums[id].header)
- else:
- self.enums[id] = idx.enums[id]
- self.identifiers[id] = idx.enums[id]
+ if self.enums.has_key(id):
+ print "enum %s from %s redeclared in %s" % (
+ id, self.enums[id].header, idx.enums[id].header)
+ else:
+ self.enums[id] = idx.enums[id]
+ self.identifiers[id] = idx.enums[id]
def merge_public(self, idx):
for id in idx.functions.keys():
- if self.functions.has_key(id):
- # check that function condition agrees with header
- if idx.functions[id].conditionals != \
- self.functions[id].conditionals:
- print "Header condition differs from Function for %s:" \
- % id
- print " H: %s" % self.functions[id].conditionals
- print " C: %s" % idx.functions[id].conditionals
- up = idx.functions[id]
- self.functions[id].update(None, up.module, up.type, up.info, up.extra)
- # else:
- # print "Function %s from %s is not declared in headers" % (
- # id, idx.functions[id].module)
- # TODO: do the same for variables.
+ if self.functions.has_key(id):
+ # check that function condition agrees with header
+ if idx.functions[id].conditionals != \
+ self.functions[id].conditionals:
+ print "Header condition differs from Function for %s:" \
+ % id
+ print " H: %s" % self.functions[id].conditionals
+ print " C: %s" % idx.functions[id].conditionals
+ up = idx.functions[id]
+ self.functions[id].update(None, up.module, up.type, up.info, up.extra)
+ # else:
+ # print "Function %s from %s is not declared in headers" % (
+ # id, idx.functions[id].module)
+ # TODO: do the same for variables.
def analyze_dict(self, type, dict):
count = 0
- public = 0
+ public = 0
for name in dict.keys():
- id = dict[name]
- count = count + 1
- if id.static == 0:
- public = public + 1
+ id = dict[name]
+ count = count + 1
+ if id.static == 0:
+ public = public + 1
if count != public:
- print " %d %s , %d public" % (count, type, public)
- elif count != 0:
- print " %d public %s" % (count, type)
+ print " %d %s , %d public" % (count, type, public)
+ elif count != 0:
+ print " %d public %s" % (count, type)
def analyze(self):
- self.analyze_dict("functions", self.functions)
- self.analyze_dict("variables", self.variables)
- self.analyze_dict("structs", self.structs)
- self.analyze_dict("typedefs", self.typedefs)
- self.analyze_dict("macros", self.macros)
+ self.analyze_dict("functions", self.functions)
+ self.analyze_dict("variables", self.variables)
+ self.analyze_dict("structs", self.structs)
+ self.analyze_dict("typedefs", self.typedefs)
+ self.analyze_dict("macros", self.macros)
class CLexer:
"""A lexer for the C language, tokenize the input by reading and
analyzing it line by line"""
def __init__(self, input):
self.input = input
- self.tokens = []
- self.line = ""
- self.lineno = 0
+ self.tokens = []
+ self.line = ""
+ self.lineno = 0
def getline(self):
line = ''
- while line == '':
- line = self.input.readline()
- if not line:
- return None
- self.lineno = self.lineno + 1
- line = string.lstrip(line)
- line = string.rstrip(line)
- if line == '':
- continue
- while line[-1] == '\\':
- line = line[:-1]
- n = self.input.readline()
- self.lineno = self.lineno + 1
- n = string.lstrip(n)
- n = string.rstrip(n)
- if not n:
- break
- else:
- line = line + n
+ while line == '':
+ line = self.input.readline()
+ if not line:
+ return None
+ self.lineno = self.lineno + 1
+ line = string.lstrip(line)
+ line = string.rstrip(line)
+ if line == '':
+ continue
+ while line[-1] == '\\':
+ line = line[:-1]
+ n = self.input.readline()
+ self.lineno = self.lineno + 1
+ n = string.lstrip(n)
+ n = string.rstrip(n)
+ if not n:
+ break
+ else:
+ line = line + n
return line
def getlineno(self):
@@ -378,193 +378,193 @@ class CLexer:
def debug(self):
print "Last token: ", self.last
- print "Token queue: ", self.tokens
- print "Line %d end: " % (self.lineno), self.line
+ print "Token queue: ", self.tokens
+ print "Line %d end: " % (self.lineno), self.line
def token(self):
while self.tokens == []:
- if self.line == "":
- line = self.getline()
- else:
- line = self.line
- self.line = ""
- if line == None:
- return None
-
- if line[0] == '#':
- self.tokens = map((lambda x: ('preproc', x)),
- string.split(line))
- break;
- l = len(line)
- if line[0] == '"' or line[0] == "'":
- end = line[0]
- line = line[1:]
- found = 0
- tok = ""
- while found == 0:
- i = 0
- l = len(line)
- while i < l:
- if line[i] == end:
- self.line = line[i+1:]
- line = line[:i]
- l = i
- found = 1
- break
- if line[i] == '\\':
- i = i + 1
- i = i + 1
- tok = tok + line
- if found == 0:
- line = self.getline()
- if line == None:
- return None
- self.last = ('string', tok)
- return self.last
-
- if l >= 2 and line[0] == '/' and line[1] == '*':
- line = line[2:]
- found = 0
- tok = ""
- while found == 0:
- i = 0
- l = len(line)
- while i < l:
- if line[i] == '*' and i+1 < l and line[i+1] == '/':
- self.line = line[i+2:]
- line = line[:i-1]
- l = i
- found = 1
- break
- i = i + 1
- if tok != "":
- tok = tok + "\n"
- tok = tok + line
- if found == 0:
- line = self.getline()
- if line == None:
- return None
- self.last = ('comment', tok)
- return self.last
- if l >= 2 and line[0] == '/' and line[1] == '/':
- line = line[2:]
- self.last = ('comment', line)
- return self.last
- i = 0
- while i < l:
- if line[i] == '/' and i+1 < l and line[i+1] == '/':
- self.line = line[i:]
- line = line[:i]
- break
- if line[i] == '/' and i+1 < l and line[i+1] == '*':
- self.line = line[i:]
- line = line[:i]
- break
- if line[i] == '"' or line[i] == "'":
- self.line = line[i:]
- line = line[:i]
- break
- i = i + 1
- l = len(line)
- i = 0
- while i < l:
- if line[i] == ' ' or line[i] == '\t':
- i = i + 1
- continue
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57):
- s = i
- while i < l:
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57) or string.find(
- " \t(){}:;,+-*/%&!|[]=><", line[i]) == -1:
- i = i + 1
- else:
- break
- self.tokens.append(('name', line[s:i]))
- continue
- if string.find("(){}:;,[]", line[i]) != -1:
+ if self.line == "":
+ line = self.getline()
+ else:
+ line = self.line
+ self.line = ""
+ if line == None:
+ return None
+
+ if line[0] == '#':
+ self.tokens = map((lambda x: ('preproc', x)),
+ string.split(line))
+ break;
+ l = len(line)
+ if line[0] == '"' or line[0] == "'":
+ end = line[0]
+ line = line[1:]
+ found = 0
+ tok = ""
+ while found == 0:
+ i = 0
+ l = len(line)
+ while i < l:
+ if line[i] == end:
+ self.line = line[i+1:]
+ line = line[:i]
+ l = i
+ found = 1
+ break
+ if line[i] == '\\':
+ i = i + 1
+ i = i + 1
+ tok = tok + line
+ if found == 0:
+ line = self.getline()
+ if line == None:
+ return None
+ self.last = ('string', tok)
+ return self.last
+
+ if l >= 2 and line[0] == '/' and line[1] == '*':
+ line = line[2:]
+ found = 0
+ tok = ""
+ while found == 0:
+ i = 0
+ l = len(line)
+ while i < l:
+ if line[i] == '*' and i+1 < l and line[i+1] == '/':
+ self.line = line[i+2:]
+ line = line[:i-1]
+ l = i
+ found = 1
+ break
+ i = i + 1
+ if tok != "":
+ tok = tok + "\n"
+ tok = tok + line
+ if found == 0:
+ line = self.getline()
+ if line == None:
+ return None
+ self.last = ('comment', tok)
+ return self.last
+ if l >= 2 and line[0] == '/' and line[1] == '/':
+ line = line[2:]
+ self.last = ('comment', line)
+ return self.last
+ i = 0
+ while i < l:
+ if line[i] == '/' and i+1 < l and line[i+1] == '/':
+ self.line = line[i:]
+ line = line[:i]
+ break
+ if line[i] == '/' and i+1 < l and line[i+1] == '*':
+ self.line = line[i:]
+ line = line[:i]
+ break
+ if line[i] == '"' or line[i] == "'":
+ self.line = line[i:]
+ line = line[:i]
+ break
+ i = i + 1
+ l = len(line)
+ i = 0
+ while i < l:
+ if line[i] == ' ' or line[i] == '\t':
+ i = i + 1
+ continue
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57):
+ s = i
+ while i < l:
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57) or string.find(
+ " \t(){}:;,+-*/%&!|[]=><", line[i]) == -1:
+ i = i + 1
+ else:
+ break
+ self.tokens.append(('name', line[s:i]))
+ continue
+ if string.find("(){}:;,[]", line[i]) != -1:
# if line[i] == '(' or line[i] == ')' or line[i] == '{' or \
-# line[i] == '}' or line[i] == ':' or line[i] == ';' or \
-# line[i] == ',' or line[i] == '[' or line[i] == ']':
- self.tokens.append(('sep', line[i]))
- i = i + 1
- continue
- if string.find("+-*><=/%&!|.", line[i]) != -1:
+# line[i] == '}' or line[i] == ':' or line[i] == ';' or \
+# line[i] == ',' or line[i] == '[' or line[i] == ']':
+ self.tokens.append(('sep', line[i]))
+ i = i + 1
+ continue
+ if string.find("+-*><=/%&!|.", line[i]) != -1:
# if line[i] == '+' or line[i] == '-' or line[i] == '*' or \
-# line[i] == '>' or line[i] == '<' or line[i] == '=' or \
-# line[i] == '/' or line[i] == '%' or line[i] == '&' or \
-# line[i] == '!' or line[i] == '|' or line[i] == '.':
- if line[i] == '.' and i + 2 < l and \
- line[i+1] == '.' and line[i+2] == '.':
- self.tokens.append(('name', '...'))
- i = i + 3
- continue
-
- j = i + 1
- if j < l and (
- string.find("+-*><=/%&!|", line[j]) != -1):
-# line[j] == '+' or line[j] == '-' or line[j] == '*' or \
-# line[j] == '>' or line[j] == '<' or line[j] == '=' or \
-# line[j] == '/' or line[j] == '%' or line[j] == '&' or \
-# line[j] == '!' or line[j] == '|'):
- self.tokens.append(('op', line[i:j+1]))
- i = j + 1
- else:
- self.tokens.append(('op', line[i]))
- i = i + 1
- continue
- s = i
- while i < l:
- o = ord(line[i])
- if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
- (o >= 48 and o <= 57) or (
- string.find(" \t(){}:;,+-*/%&!|[]=><", line[i]) == -1):
-# line[i] != ' ' and line[i] != '\t' and
-# line[i] != '(' and line[i] != ')' and
-# line[i] != '{' and line[i] != '}' and
-# line[i] != ':' and line[i] != ';' and
-# line[i] != ',' and line[i] != '+' and
-# line[i] != '-' and line[i] != '*' and
-# line[i] != '/' and line[i] != '%' and
-# line[i] != '&' and line[i] != '!' and
-# line[i] != '|' and line[i] != '[' and
-# line[i] != ']' and line[i] != '=' and
-# line[i] != '*' and line[i] != '>' and
-# line[i] != '<'):
- i = i + 1
- else:
- break
- self.tokens.append(('name', line[s:i]))
-
- tok = self.tokens[0]
- self.tokens = self.tokens[1:]
- self.last = tok
- return tok
+# line[i] == '>' or line[i] == '<' or line[i] == '=' or \
+# line[i] == '/' or line[i] == '%' or line[i] == '&' or \
+# line[i] == '!' or line[i] == '|' or line[i] == '.':
+ if line[i] == '.' and i + 2 < l and \
+ line[i+1] == '.' and line[i+2] == '.':
+ self.tokens.append(('name', '...'))
+ i = i + 3
+ continue
+
+ j = i + 1
+ if j < l and (
+ string.find("+-*><=/%&!|", line[j]) != -1):
+# line[j] == '+' or line[j] == '-' or line[j] == '*' or \
+# line[j] == '>' or line[j] == '<' or line[j] == '=' or \
+# line[j] == '/' or line[j] == '%' or line[j] == '&' or \
+# line[j] == '!' or line[j] == '|'):
+ self.tokens.append(('op', line[i:j+1]))
+ i = j + 1
+ else:
+ self.tokens.append(('op', line[i]))
+ i = i + 1
+ continue
+ s = i
+ while i < l:
+ o = ord(line[i])
+ if (o >= 97 and o <= 122) or (o >= 65 and o <= 90) or \
+ (o >= 48 and o <= 57) or (
+ string.find(" \t(){}:;,+-*/%&!|[]=><", line[i]) == -1):
+# line[i] != ' ' and line[i] != '\t' and
+# line[i] != '(' and line[i] != ')' and
+# line[i] != '{' and line[i] != '}' and
+# line[i] != ':' and line[i] != ';' and
+# line[i] != ',' and line[i] != '+' and
+# line[i] != '-' and line[i] != '*' and
+# line[i] != '/' and line[i] != '%' and
+# line[i] != '&' and line[i] != '!' and
+# line[i] != '|' and line[i] != '[' and
+# line[i] != ']' and line[i] != '=' and
+# line[i] != '*' and line[i] != '>' and
+# line[i] != '<'):
+ i = i + 1
+ else:
+ break
+ self.tokens.append(('name', line[s:i]))
+
+ tok = self.tokens[0]
+ self.tokens = self.tokens[1:]
+ self.last = tok
+ return tok
class CParser:
"""The C module parser"""
def __init__(self, filename, idx = None):
self.filename = filename
- if len(filename) > 2 and filename[-2:] == '.h':
- self.is_header = 1
- else:
- self.is_header = 0
+ if len(filename) > 2 and filename[-2:] == '.h':
+ self.is_header = 1
+ else:
+ self.is_header = 0
self.input = open(filename)
- self.lexer = CLexer(self.input)
- if idx == None:
- self.index = index()
- else:
- self.index = idx
- self.top_comment = ""
- self.last_comment = ""
- self.comment = None
- self.collect_ref = 0
- self.no_error = 0
- self.conditionals = []
- self.defines = []
+ self.lexer = CLexer(self.input)
+ if idx == None:
+ self.index = index()
+ else:
+ self.index = idx
+ self.top_comment = ""
+ self.last_comment = ""
+ self.comment = None
+ self.collect_ref = 0
+ self.no_error = 0
+ self.conditionals = []
+ self.defines = []
def collect_references(self):
self.collect_ref = 1
@@ -579,203 +579,203 @@ class CParser:
return self.lexer.getlineno()
def index_add(self, name, module, static, type, info=None, extra = None):
- if self.is_header == 1:
- self.index.add(name, module, module, static, type, self.lineno(),
- info, extra, self.conditionals)
- else:
- self.index.add(name, None, module, static, type, self.lineno(),
- info, extra, self.conditionals)
+ if self.is_header == 1:
+ self.index.add(name, module, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
+ else:
+ self.index.add(name, None, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
def index_add_ref(self, name, module, static, type, info=None,
extra = None):
- if self.is_header == 1:
- self.index.add_ref(name, module, module, static, type,
- self.lineno(), info, extra, self.conditionals)
- else:
- self.index.add_ref(name, None, module, static, type, self.lineno(),
- info, extra, self.conditionals)
+ if self.is_header == 1:
+ self.index.add_ref(name, module, module, static, type,
+ self.lineno(), info, extra, self.conditionals)
+ else:
+ self.index.add_ref(name, None, module, static, type, self.lineno(),
+ info, extra, self.conditionals)
def warning(self, msg):
if self.no_error:
- return
- print msg
+ return
+ print msg
def error(self, msg, token=-1):
if self.no_error:
- return
+ return
print "Parse Error: " + msg
- if token != -1:
- print "Got token ", token
- self.lexer.debug()
- sys.exit(1)
+ if token != -1:
+ print "Got token ", token
+ self.lexer.debug()
+ sys.exit(1)
def debug(self, msg, token=-1):
print "Debug: " + msg
- if token != -1:
- print "Got token ", token
- self.lexer.debug()
+ if token != -1:
+ print "Got token ", token
+ self.lexer.debug()
def parseTopComment(self, comment):
- res = {}
- lines = string.split(comment, "\n")
- item = None
- for line in lines:
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- while line != "" and line[0] == '*':
- line = line[1:]
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- try:
- (it, line) = string.split(line, ":", 1)
- item = it
- while line != "" and (line[0] == ' ' or line[0] == '\t'):
- line = line[1:]
- if res.has_key(item):
- res[item] = res[item] + " " + line
- else:
- res[item] = line
- except:
- if item != None:
- if res.has_key(item):
- res[item] = res[item] + " " + line
- else:
- res[item] = line
- self.index.info = res
+ res = {}
+ lines = string.split(comment, "\n")
+ item = None
+ for line in lines:
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ while line != "" and line[0] == '*':
+ line = line[1:]
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ try:
+ (it, line) = string.split(line, ":", 1)
+ item = it
+ while line != "" and (line[0] == ' ' or line[0] == '\t'):
+ line = line[1:]
+ if res.has_key(item):
+ res[item] = res[item] + " " + line
+ else:
+ res[item] = line
+ except:
+ if item != None:
+ if res.has_key(item):
+ res[item] = res[item] + " " + line
+ else:
+ res[item] = line
+ self.index.info = res
def parseComment(self, token):
if self.top_comment == "":
- self.top_comment = token[1]
- if self.comment == None or token[1][0] == '*':
- self.comment = token[1];
- else:
- self.comment = self.comment + token[1]
- token = self.lexer.token()
+ self.top_comment = token[1]
+ if self.comment == None or token[1][0] == '*':
+ self.comment = token[1];
+ else:
+ self.comment = self.comment + token[1]
+ token = self.lexer.token()
if string.find(self.comment, "DOC_DISABLE") != -1:
- self.stop_error()
+ self.stop_error()
if string.find(self.comment, "DOC_ENABLE") != -1:
- self.start_error()
+ self.start_error()
- return token
+ return token
#
# Parse a comment block associate to a typedef
#
def parseTypeComment(self, name, quiet = 0):
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
args = []
- desc = ""
+ desc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for type %s" % (name))
- return((args, desc))
+ if not quiet:
+ self.warning("Missing comment for type %s" % (name))
+ return((args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in type comment for %s" % (name))
- return((args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted type comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return((args, desc))
- del lines[0]
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = ""
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- desc = desc + " " + l
- del lines[0]
-
- desc = string.strip(desc)
-
- if quiet == 0:
- if desc == "":
- self.warning("Type comment for %s lack description of the macro" % (name))
-
- return(desc)
+ if not quiet:
+ self.warning("Missing * in type comment for %s" % (name))
+ return((args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted type comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return((args, desc))
+ del lines[0]
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = ""
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ desc = desc + " " + l
+ del lines[0]
+
+ desc = string.strip(desc)
+
+ if quiet == 0:
+ if desc == "":
+ self.warning("Type comment for %s lack description of the macro" % (name))
+
+ return(desc)
#
# Parse a comment block associate to a macro
#
def parseMacroComment(self, name, quiet = 0):
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
args = []
- desc = ""
+ desc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for macro %s" % (name))
- return((args, desc))
+ if not quiet:
+ self.warning("Missing comment for macro %s" % (name))
+ return((args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in macro comment for %s" % (name))
- return((args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted macro comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return((args, desc))
- del lines[0]
- while lines[0] == '*':
- del lines[0]
- while len(lines) > 0 and lines[0][0:3] == '* @':
- l = lines[0][3:]
- try:
- (arg, desc) = string.split(l, ':', 1)
- desc=string.strip(desc)
- arg=string.strip(arg)
+ if not quiet:
+ self.warning("Missing * in macro comment for %s" % (name))
+ return((args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted macro comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return((args, desc))
+ del lines[0]
+ while lines[0] == '*':
+ del lines[0]
+ while len(lines) > 0 and lines[0][0:3] == '* @':
+ l = lines[0][3:]
+ try:
+ (arg, desc) = string.split(l, ':', 1)
+ desc=string.strip(desc)
+ arg=string.strip(arg)
except:
- if not quiet:
- self.warning("Misformatted macro comment for %s" % (name))
- self.warning(" problem with '%s'" % (lines[0]))
- del lines[0]
- continue
- del lines[0]
- l = string.strip(lines[0])
- while len(l) > 2 and l[0:3] != '* @':
- while l[0] == '*':
- l = l[1:]
- desc = desc + ' ' + string.strip(l)
- del lines[0]
- if len(lines) == 0:
- break
- l = lines[0]
+ if not quiet:
+ self.warning("Misformatted macro comment for %s" % (name))
+ self.warning(" problem with '%s'" % (lines[0]))
+ del lines[0]
+ continue
+ del lines[0]
+ l = string.strip(lines[0])
+ while len(l) > 2 and l[0:3] != '* @':
+ while l[0] == '*':
+ l = l[1:]
+ desc = desc + ' ' + string.strip(l)
+ del lines[0]
+ if len(lines) == 0:
+ break
+ l = lines[0]
args.append((arg, desc))
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = ""
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- desc = desc + " " + l
- del lines[0]
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = ""
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ desc = desc + " " + l
+ del lines[0]
- desc = string.strip(desc)
+ desc = string.strip(desc)
- if quiet == 0:
- if desc == "":
- self.warning("Macro comment for %s lack description of the macro" % (name))
+ if quiet == 0:
+ if desc == "":
+                self.warning("Macro comment for %s lacks description of the macro" % (name))
- return((args, desc))
+ return((args, desc))
#
# Parse a comment block and merge the information found in the
@@ -786,219 +786,219 @@ class CParser:
global ignored_functions
if name == 'main':
- quiet = 1
+ quiet = 1
if name[0:2] == '__':
- quiet = 1
+ quiet = 1
if ignored_functions.has_key(name):
quiet = 1
- (ret, args) = description
- desc = ""
- retdesc = ""
+ (ret, args) = description
+ desc = ""
+ retdesc = ""
if self.comment == None:
- if not quiet:
- self.warning("Missing comment for function %s" % (name))
- return(((ret[0], retdesc), args, desc))
+ if not quiet:
+ self.warning("Missing comment for function %s" % (name))
+ return(((ret[0], retdesc), args, desc))
if self.comment[0] != '*':
- if not quiet:
- self.warning("Missing * in function comment for %s" % (name))
- return(((ret[0], retdesc), args, desc))
- lines = string.split(self.comment, '\n')
- if lines[0] == '*':
- del lines[0]
- if lines[0] != "* %s:" % (name):
- if not quiet:
- self.warning("Misformatted function comment for %s" % (name))
- self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
- return(((ret[0], retdesc), args, desc))
- del lines[0]
- while lines[0] == '*':
- del lines[0]
- nbargs = len(args)
- while len(lines) > 0 and lines[0][0:3] == '* @':
- l = lines[0][3:]
- try:
- (arg, desc) = string.split(l, ':', 1)
- desc=string.strip(desc)
- arg=string.strip(arg)
+ if not quiet:
+ self.warning("Missing * in function comment for %s" % (name))
+ return(((ret[0], retdesc), args, desc))
+ lines = string.split(self.comment, '\n')
+ if lines[0] == '*':
+ del lines[0]
+ if lines[0] != "* %s:" % (name):
+ if not quiet:
+ self.warning("Misformatted function comment for %s" % (name))
+ self.warning(" Expecting '* %s:' got '%s'" % (name, lines[0]))
+ return(((ret[0], retdesc), args, desc))
+ del lines[0]
+ while lines[0] == '*':
+ del lines[0]
+ nbargs = len(args)
+ while len(lines) > 0 and lines[0][0:3] == '* @':
+ l = lines[0][3:]
+ try:
+ (arg, desc) = string.split(l, ':', 1)
+ desc=string.strip(desc)
+ arg=string.strip(arg)
except:
- if not quiet:
- self.warning("Misformatted function comment for %s" % (name))
- self.warning(" problem with '%s'" % (lines[0]))
- del lines[0]
- continue
- del lines[0]
- l = string.strip(lines[0])
- while len(l) > 2 and l[0:3] != '* @':
- while l[0] == '*':
- l = l[1:]
- desc = desc + ' ' + string.strip(l)
- del lines[0]
- if len(lines) == 0:
- break
- l = lines[0]
- i = 0
- while i < nbargs:
- if args[i][1] == arg:
- args[i] = (args[i][0], arg, desc)
- break;
- i = i + 1
- if i >= nbargs:
- if not quiet:
- self.warning("Unable to find arg %s from function comment for %s" % (
- arg, name))
- while len(lines) > 0 and lines[0] == '*':
- del lines[0]
- desc = None
- while len(lines) > 0:
- l = lines[0]
- i = 0
- # Remove all leading '*', followed by at most one ' ' character
- # since we need to preserve correct identation of code examples
- while i < len(l) and l[i] == '*':
- i = i + 1
- if i > 0:
- if i < len(l) and l[i] == ' ':
- i = i + 1
- l = l[i:]
- if len(l) >= 6 and l[0:7] == "returns" or l[0:7] == "Returns":
- try:
- l = string.split(l, ' ', 1)[1]
- except:
- l = ""
- retdesc = string.strip(l)
- del lines[0]
- while len(lines) > 0:
- l = lines[0]
- while len(l) > 0 and l[0] == '*':
- l = l[1:]
- l = string.strip(l)
- retdesc = retdesc + " " + l
- del lines[0]
- else:
- if desc is not None:
- desc = desc + "\n" + l
- else:
- desc = l
- del lines[0]
-
- if desc is None:
- desc = ""
- retdesc = string.strip(retdesc)
- desc = string.strip(desc)
-
- if quiet == 0:
- #
- # report missing comments
- #
- i = 0
- while i < nbargs:
- if args[i][2] == None and args[i][0] != "void" and args[i][1] != None:
- self.warning("Function comment for %s lacks description of arg %s" % (name, args[i][1]))
- i = i + 1
- if retdesc == "" and ret[0] != "void":
- self.warning("Function comment for %s lacks description of return value" % (name))
- if desc == "":
- self.warning("Function comment for %s lacks description of the function" % (name))
-
-
- return(((ret[0], retdesc), args, desc))
+ if not quiet:
+ self.warning("Misformatted function comment for %s" % (name))
+ self.warning(" problem with '%s'" % (lines[0]))
+ del lines[0]
+ continue
+ del lines[0]
+ l = string.strip(lines[0])
+ while len(l) > 2 and l[0:3] != '* @':
+ while l[0] == '*':
+ l = l[1:]
+ desc = desc + ' ' + string.strip(l)
+ del lines[0]
+ if len(lines) == 0:
+ break
+ l = lines[0]
+ i = 0
+ while i < nbargs:
+ if args[i][1] == arg:
+ args[i] = (args[i][0], arg, desc)
+ break;
+ i = i + 1
+ if i >= nbargs:
+ if not quiet:
+ self.warning("Unable to find arg %s from function comment for %s" % (
+ arg, name))
+ while len(lines) > 0 and lines[0] == '*':
+ del lines[0]
+ desc = None
+ while len(lines) > 0:
+ l = lines[0]
+ i = 0
+ # Remove all leading '*', followed by at most one ' ' character
+                # since we need to preserve correct indentation of code examples
+ while i < len(l) and l[i] == '*':
+ i = i + 1
+ if i > 0:
+ if i < len(l) and l[i] == ' ':
+ i = i + 1
+ l = l[i:]
+ if len(l) >= 6 and l[0:7] == "returns" or l[0:7] == "Returns":
+ try:
+ l = string.split(l, ' ', 1)[1]
+ except:
+ l = ""
+ retdesc = string.strip(l)
+ del lines[0]
+ while len(lines) > 0:
+ l = lines[0]
+ while len(l) > 0 and l[0] == '*':
+ l = l[1:]
+ l = string.strip(l)
+ retdesc = retdesc + " " + l
+ del lines[0]
+ else:
+ if desc is not None:
+ desc = desc + "\n" + l
+ else:
+ desc = l
+ del lines[0]
+
+ if desc is None:
+ desc = ""
+ retdesc = string.strip(retdesc)
+ desc = string.strip(desc)
+
+ if quiet == 0:
+ #
+ # report missing comments
+ #
+ i = 0
+ while i < nbargs:
+ if args[i][2] == None and args[i][0] != "void" and args[i][1] != None:
+ self.warning("Function comment for %s lacks description of arg %s" % (name, args[i][1]))
+ i = i + 1
+ if retdesc == "" and ret[0] != "void":
+ self.warning("Function comment for %s lacks description of return value" % (name))
+ if desc == "":
+ self.warning("Function comment for %s lacks description of the function" % (name))
+
+
+ return(((ret[0], retdesc), args, desc))
def parsePreproc(self, token):
- if debug:
- print "=> preproc ", token, self.lexer.tokens
+ if debug:
+ print "=> preproc ", token, self.lexer.tokens
name = token[1]
- if name == "#include":
- token = self.lexer.token()
- if token == None:
- return None
- if token[0] == 'preproc':
- self.index_add(token[1], self.filename, not self.is_header,
- "include")
- return self.lexer.token()
- return token
- if name == "#define":
- token = self.lexer.token()
- if token == None:
- return None
- if token[0] == 'preproc':
- # TODO macros with arguments
- name = token[1]
- lst = []
- token = self.lexer.token()
- while token != None and token[0] == 'preproc' and \
- token[1][0] != '#':
- lst.append(token[1])
- token = self.lexer.token()
+ if name == "#include":
+ token = self.lexer.token()
+ if token == None:
+ return None
+ if token[0] == 'preproc':
+ self.index_add(token[1], self.filename, not self.is_header,
+ "include")
+ return self.lexer.token()
+ return token
+ if name == "#define":
+ token = self.lexer.token()
+ if token == None:
+ return None
+ if token[0] == 'preproc':
+ # TODO macros with arguments
+ name = token[1]
+ lst = []
+ token = self.lexer.token()
+ while token != None and token[0] == 'preproc' and \
+ token[1][0] != '#':
+ lst.append(token[1])
+ token = self.lexer.token()
try:
- name = string.split(name, '(') [0]
+ name = string.split(name, '(') [0]
except:
pass
info = self.parseMacroComment(name, not self.is_header)
- self.index_add(name, self.filename, not self.is_header,
- "macro", info)
- return token
-
- #
- # Processing of conditionals modified by Bill 1/1/05
- #
- # We process conditionals (i.e. tokens from #ifdef, #ifndef,
- # #if, #else and #endif) for headers and mainline code,
- # store the ones from the header in libxml2-api.xml, and later
- # (in the routine merge_public) verify that the two (header and
- # mainline code) agree.
- #
- # There is a small problem with processing the headers. Some of
- # the variables are not concerned with enabling / disabling of
- # library functions (e.g. '__XML_PARSER_H__'), and we don't want
- # them to be included in libxml2-api.xml, or involved in
- # the check between the header and the mainline code. To
- # accomplish this, we ignore any conditional which doesn't include
- # the string 'ENABLED'
- #
- if name == "#ifdef":
- apstr = self.lexer.tokens[0][1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append("defined(%s)" % apstr)
- except:
- pass
- elif name == "#ifndef":
- apstr = self.lexer.tokens[0][1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append("!defined(%s)" % apstr)
- except:
- pass
- elif name == "#if":
- apstr = ""
- for tok in self.lexer.tokens:
- if apstr != "":
- apstr = apstr + " "
- apstr = apstr + tok[1]
- try:
- self.defines.append(apstr)
- if string.find(apstr, 'ENABLED') != -1:
- self.conditionals.append(apstr)
- except:
- pass
- elif name == "#else":
- if self.conditionals != [] and \
- string.find(self.defines[-1], 'ENABLED') != -1:
- self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
- elif name == "#endif":
- if self.conditionals != [] and \
- string.find(self.defines[-1], 'ENABLED') != -1:
- self.conditionals = self.conditionals[:-1]
- self.defines = self.defines[:-1]
- token = self.lexer.token()
- while token != None and token[0] == 'preproc' and \
- token[1][0] != '#':
- token = self.lexer.token()
- return token
+ self.index_add(name, self.filename, not self.is_header,
+ "macro", info)
+ return token
+
+ #
+ # Processing of conditionals modified by Bill 1/1/05
+ #
+ # We process conditionals (i.e. tokens from #ifdef, #ifndef,
+ # #if, #else and #endif) for headers and mainline code,
+ # store the ones from the header in libxml2-api.xml, and later
+ # (in the routine merge_public) verify that the two (header and
+ # mainline code) agree.
+ #
+ # There is a small problem with processing the headers. Some of
+ # the variables are not concerned with enabling / disabling of
+ # library functions (e.g. '__XML_PARSER_H__'), and we don't want
+ # them to be included in libxml2-api.xml, or involved in
+ # the check between the header and the mainline code. To
+ # accomplish this, we ignore any conditional which doesn't include
+ # the string 'ENABLED'
+ #
+ if name == "#ifdef":
+ apstr = self.lexer.tokens[0][1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append("defined(%s)" % apstr)
+ except:
+ pass
+ elif name == "#ifndef":
+ apstr = self.lexer.tokens[0][1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append("!defined(%s)" % apstr)
+ except:
+ pass
+ elif name == "#if":
+ apstr = ""
+ for tok in self.lexer.tokens:
+ if apstr != "":
+ apstr = apstr + " "
+ apstr = apstr + tok[1]
+ try:
+ self.defines.append(apstr)
+ if string.find(apstr, 'ENABLED') != -1:
+ self.conditionals.append(apstr)
+ except:
+ pass
+ elif name == "#else":
+ if self.conditionals != [] and \
+ string.find(self.defines[-1], 'ENABLED') != -1:
+ self.conditionals[-1] = "!(%s)" % self.conditionals[-1]
+ elif name == "#endif":
+ if self.conditionals != [] and \
+ string.find(self.defines[-1], 'ENABLED') != -1:
+ self.conditionals = self.conditionals[:-1]
+ self.defines = self.defines[:-1]
+ token = self.lexer.token()
+ while token != None and token[0] == 'preproc' and \
+ token[1][0] != '#':
+ token = self.lexer.token()
+ return token
#
# token acquisition on top of the lexer, it handle internally
@@ -1012,89 +1012,89 @@ class CParser:
global ignored_words
token = self.lexer.token()
- while token != None:
- if token[0] == 'comment':
- token = self.parseComment(token)
- continue
- elif token[0] == 'preproc':
- token = self.parsePreproc(token)
- continue
- elif token[0] == "name" and token[1] == "__const":
- token = ("name", "const")
- return token
- elif token[0] == "name" and token[1] == "__attribute":
- token = self.lexer.token()
- while token != None and token[1] != ";":
- token = self.lexer.token()
- return token
- elif token[0] == "name" and ignored_words.has_key(token[1]):
- (n, info) = ignored_words[token[1]]
- i = 0
- while i < n:
- token = self.lexer.token()
- i = i + 1
- token = self.lexer.token()
- continue
- else:
- if debug:
- print "=> ", token
- return token
- return None
+ while token != None:
+ if token[0] == 'comment':
+ token = self.parseComment(token)
+ continue
+ elif token[0] == 'preproc':
+ token = self.parsePreproc(token)
+ continue
+ elif token[0] == "name" and token[1] == "__const":
+ token = ("name", "const")
+ return token
+ elif token[0] == "name" and token[1] == "__attribute":
+ token = self.lexer.token()
+ while token != None and token[1] != ";":
+ token = self.lexer.token()
+ return token
+ elif token[0] == "name" and ignored_words.has_key(token[1]):
+ (n, info) = ignored_words[token[1]]
+ i = 0
+ while i < n:
+ token = self.lexer.token()
+ i = i + 1
+ token = self.lexer.token()
+ continue
+ else:
+ if debug:
+ print "=> ", token
+ return token
+ return None
#
# Parse a typedef, it records the type and its name.
#
def parseTypedef(self, token):
if token == None:
- return None
- token = self.parseType(token)
- if token == None:
- self.error("parsing typedef")
- return None
- base_type = self.type
- type = base_type
- #self.debug("end typedef type", token)
- while token != None:
- if token[0] == "name":
- name = token[1]
- signature = self.signature
- if signature != None:
- type = string.split(type, '(')[0]
- d = self.mergeFunctionComment(name,
- ((type, None), signature), 1)
- self.index_add(name, self.filename, not self.is_header,
- "functype", d)
- else:
- if base_type == "struct":
- self.index_add(name, self.filename, not self.is_header,
- "struct", type)
- base_type = "struct " + name
- else:
- # TODO report missing or misformatted comments
- info = self.parseTypeComment(name, 1)
- self.index_add(name, self.filename, not self.is_header,
- "typedef", type, info)
- token = self.token()
- else:
- self.error("parsing typedef: expecting a name")
- return token
- #self.debug("end typedef", token)
- if token != None and token[0] == 'sep' and token[1] == ',':
- type = base_type
- token = self.token()
- while token != None and token[0] == "op":
- type = type + token[1]
- token = self.token()
- elif token != None and token[0] == 'sep' and token[1] == ';':
- break;
- elif token != None and token[0] == 'name':
- type = base_type
- continue;
- else:
- self.error("parsing typedef: expecting ';'", token)
- return token
- token = self.token()
- return token
+ return None
+ token = self.parseType(token)
+ if token == None:
+ self.error("parsing typedef")
+ return None
+ base_type = self.type
+ type = base_type
+ #self.debug("end typedef type", token)
+ while token != None:
+ if token[0] == "name":
+ name = token[1]
+ signature = self.signature
+ if signature != None:
+ type = string.split(type, '(')[0]
+ d = self.mergeFunctionComment(name,
+ ((type, None), signature), 1)
+ self.index_add(name, self.filename, not self.is_header,
+ "functype", d)
+ else:
+ if base_type == "struct":
+ self.index_add(name, self.filename, not self.is_header,
+ "struct", type)
+ base_type = "struct " + name
+ else:
+ # TODO report missing or misformatted comments
+ info = self.parseTypeComment(name, 1)
+ self.index_add(name, self.filename, not self.is_header,
+ "typedef", type, info)
+ token = self.token()
+ else:
+ self.error("parsing typedef: expecting a name")
+ return token
+ #self.debug("end typedef", token)
+ if token != None and token[0] == 'sep' and token[1] == ',':
+ type = base_type
+ token = self.token()
+ while token != None and token[0] == "op":
+ type = type + token[1]
+ token = self.token()
+ elif token != None and token[0] == 'sep' and token[1] == ';':
+ break;
+ elif token != None and token[0] == 'name':
+ type = base_type
+ continue;
+ else:
+ self.error("parsing typedef: expecting ';'", token)
+ return token
+ token = self.token()
+ return token
#
# Parse a C code block, used for functions it parse till
@@ -1102,138 +1102,138 @@ class CParser:
#
def parseBlock(self, token):
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- self.comment = None
- token = self.token()
- return token
- else:
- if self.collect_ref == 1:
- oldtok = token
- token = self.token()
- if oldtok[0] == "name" and oldtok[1][0:3] == "vir":
- if token[0] == "sep" and token[1] == "(":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "function")
- token = self.token()
- elif token[0] == "name":
- token = self.token()
- if token[0] == "sep" and (token[1] == ";" or
- token[1] == "," or token[1] == "="):
- self.index_add_ref(oldtok[1], self.filename,
- 0, "type")
- elif oldtok[0] == "name" and oldtok[1][0:4] == "XEN_":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "typedef")
- elif oldtok[0] == "name" and oldtok[1][0:7] == "LIBXEN_":
- self.index_add_ref(oldtok[1], self.filename,
- 0, "typedef")
-
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ self.comment = None
+ token = self.token()
+ return token
+ else:
+ if self.collect_ref == 1:
+ oldtok = token
+ token = self.token()
+ if oldtok[0] == "name" and oldtok[1][0:3] == "vir":
+ if token[0] == "sep" and token[1] == "(":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "function")
+ token = self.token()
+ elif token[0] == "name":
+ token = self.token()
+ if token[0] == "sep" and (token[1] == ";" or
+ token[1] == "," or token[1] == "="):
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "type")
+ elif oldtok[0] == "name" and oldtok[1][0:4] == "XEN_":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "typedef")
+ elif oldtok[0] == "name" and oldtok[1][0:7] == "LIBXEN_":
+ self.index_add_ref(oldtok[1], self.filename,
+ 0, "typedef")
+
+ else:
+ token = self.token()
+ return token
#
# Parse a C struct definition till the balancing }
#
def parseStruct(self, token):
fields = []
- #self.debug("start parseStruct", token)
+ #self.debug("start parseStruct", token)
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- self.struct_fields = fields
- #self.debug("end parseStruct", token)
- #print fields
- token = self.token()
- return token
- else:
- base_type = self.type
- #self.debug("before parseType", token)
- token = self.parseType(token)
- #self.debug("after parseType", token)
- if token != None and token[0] == "name":
- fname = token[1]
- token = self.token()
- if token[0] == "sep" and token[1] == ";":
- self.comment = None
- token = self.token()
- fields.append((self.type, fname, self.comment))
- self.comment = None
- else:
- self.error("parseStruct: expecting ;", token)
- elif token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- if token != None and token[0] == "name":
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == ";":
- token = self.token()
- else:
- self.error("parseStruct: expecting ;", token)
- else:
- self.error("parseStruct: name", token)
- token = self.token()
- self.type = base_type;
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ self.struct_fields = fields
+ #self.debug("end parseStruct", token)
+ #print fields
+ token = self.token()
+ return token
+ else:
+ base_type = self.type
+ #self.debug("before parseType", token)
+ token = self.parseType(token)
+ #self.debug("after parseType", token)
+ if token != None and token[0] == "name":
+ fname = token[1]
+ token = self.token()
+ if token[0] == "sep" and token[1] == ";":
+ self.comment = None
+ token = self.token()
+ fields.append((self.type, fname, self.comment))
+ self.comment = None
+ else:
+ self.error("parseStruct: expecting ;", token)
+ elif token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ if token != None and token[0] == "name":
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == ";":
+ token = self.token()
+ else:
+ self.error("parseStruct: expecting ;", token)
+ else:
+ self.error("parseStruct: name", token)
+ token = self.token()
+ self.type = base_type;
self.struct_fields = fields
- #self.debug("end parseStruct", token)
- #print fields
- return token
+ #self.debug("end parseStruct", token)
+ #print fields
+ return token
#
# Parse a C enum block, parse till the balancing }
#
def parseEnumBlock(self, token):
self.enums = []
- name = None
- self.comment = None
- comment = ""
- value = "0"
+ name = None
+ self.comment = None
+ comment = ""
+ value = "0"
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- if name != None:
- if self.comment != None:
- comment = self.comment
- self.comment = None
- self.enums.append((name, value, comment))
- token = self.token()
- return token
- elif token[0] == "name":
- if name != None:
- if self.comment != None:
- comment = string.strip(self.comment)
- self.comment = None
- self.enums.append((name, value, comment))
- name = token[1]
- comment = ""
- token = self.token()
- if token[0] == "op" and token[1][0] == "=":
- value = ""
- if len(token[1]) > 1:
- value = token[1][1:]
- token = self.token()
- while token[0] != "sep" or (token[1] != ',' and
- token[1] != '}'):
- value = value + token[1]
- token = self.token()
- else:
- try:
- value = "%d" % (int(value) + 1)
- except:
- self.warning("Failed to compute value of enum %s" % (name))
- value=""
- if token[0] == "sep" and token[1] == ",":
- token = self.token()
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ if name != None:
+ if self.comment != None:
+ comment = self.comment
+ self.comment = None
+ self.enums.append((name, value, comment))
+ token = self.token()
+ return token
+ elif token[0] == "name":
+ if name != None:
+ if self.comment != None:
+ comment = string.strip(self.comment)
+ self.comment = None
+ self.enums.append((name, value, comment))
+ name = token[1]
+ comment = ""
+ token = self.token()
+ if token[0] == "op" and token[1][0] == "=":
+ value = ""
+ if len(token[1]) > 1:
+ value = token[1][1:]
+ token = self.token()
+ while token[0] != "sep" or (token[1] != ',' and
+ token[1] != '}'):
+ value = value + token[1]
+ token = self.token()
+ else:
+ try:
+ value = "%d" % (int(value) + 1)
+ except:
+ self.warning("Failed to compute value of enum %s" % (name))
+ value=""
+ if token[0] == "sep" and token[1] == ",":
+ token = self.token()
+ else:
+ token = self.token()
+ return token
#
# Parse a C definition block, used for structs it parse till
@@ -1241,15 +1241,15 @@ class CParser:
#
def parseTypeBlock(self, token):
while token != None:
- if token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseTypeBlock(token)
- elif token[0] == "sep" and token[1] == "}":
- token = self.token()
- return token
- else:
- token = self.token()
- return token
+ if token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseTypeBlock(token)
+ elif token[0] == "sep" and token[1] == "}":
+ token = self.token()
+ return token
+ else:
+ token = self.token()
+ return token
#
# Parse a type: the fact that the type name can either occur after
@@ -1258,221 +1258,221 @@ class CParser:
#
def parseType(self, token):
self.type = ""
- self.struct_fields = []
+ self.struct_fields = []
self.signature = None
- if token == None:
- return token
-
- while token[0] == "name" and (
- token[1] == "const" or \
- token[1] == "unsigned" or \
- token[1] == "signed"):
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- token = self.token()
+ if token == None:
+ return token
+
+ while token[0] == "name" and (
+ token[1] == "const" or \
+ token[1] == "unsigned" or \
+ token[1] == "signed"):
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ token = self.token()
if token[0] == "name" and token[1] == "long":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
-
- # some read ahead for long long
- oldtmp = token
- token = self.token()
- if token[0] == "name" and token[1] == "long":
- self.type = self.type + " " + token[1]
- else:
- self.push(token)
- token = oldtmp
-
- if token[0] == "name" and token[1] == "int":
- if self.type == "":
- self.type = tmp[1]
- else:
- self.type = self.type + " " + tmp[1]
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+
+ # some read ahead for long long
+ oldtmp = token
+ token = self.token()
+ if token[0] == "name" and token[1] == "long":
+ self.type = self.type + " " + token[1]
+ else:
+ self.push(token)
+ token = oldtmp
+
+ if token[0] == "name" and token[1] == "int":
+ if self.type == "":
+ self.type = tmp[1]
+ else:
+ self.type = self.type + " " + tmp[1]
elif token[0] == "name" and token[1] == "short":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- if token[0] == "name" and token[1] == "int":
- if self.type == "":
- self.type = tmp[1]
- else:
- self.type = self.type + " " + tmp[1]
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ if token[0] == "name" and token[1] == "int":
+ if self.type == "":
+ self.type = tmp[1]
+ else:
+ self.type = self.type + " " + tmp[1]
elif token[0] == "name" and token[1] == "struct":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- token = self.token()
- nametok = None
- if token[0] == "name":
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseStruct(token)
- elif token != None and token[0] == "op" and token[1] == "*":
- self.type = self.type + " " + nametok[1] + " *"
- token = self.token()
- while token != None and token[0] == "op" and token[1] == "*":
- self.type = self.type + " *"
- token = self.token()
- if token[0] == "name":
- nametok = token
- token = self.token()
- else:
- self.error("struct : expecting name", token)
- return token
- elif token != None and token[0] == "name" and nametok != None:
- self.type = self.type + " " + nametok[1]
- return token
-
- if nametok != None:
- self.lexer.push(token)
- token = nametok
- return token
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ token = self.token()
+ nametok = None
+ if token[0] == "name":
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseStruct(token)
+ elif token != None and token[0] == "op" and token[1] == "*":
+ self.type = self.type + " " + nametok[1] + " *"
+ token = self.token()
+ while token != None and token[0] == "op" and token[1] == "*":
+ self.type = self.type + " *"
+ token = self.token()
+ if token[0] == "name":
+ nametok = token
+ token = self.token()
+ else:
+ self.error("struct : expecting name", token)
+ return token
+ elif token != None and token[0] == "name" and nametok != None:
+ self.type = self.type + " " + nametok[1]
+ return token
+
+ if nametok != None:
+ self.lexer.push(token)
+ token = nametok
+ return token
elif token[0] == "name" and token[1] == "enum":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- self.enums = []
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == "{":
- token = self.token()
- token = self.parseEnumBlock(token)
- else:
- self.error("parsing enum: expecting '{'", token)
- enum_type = None
- if token != None and token[0] != "name":
- self.lexer.push(token)
- token = ("name", "enum")
- else:
- enum_type = token[1]
- for enum in self.enums:
- self.index_add(enum[0], self.filename,
- not self.is_header, "enum",
- (enum[1], enum[2], enum_type))
- return token
-
- elif token[0] == "name":
- if self.type == "":
- self.type = token[1]
- else:
- self.type = self.type + " " + token[1]
- else:
- self.error("parsing type %s: expecting a name" % (self.type),
- token)
- return token
- token = self.token()
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ self.enums = []
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == "{":
+ token = self.token()
+ token = self.parseEnumBlock(token)
+ else:
+ self.error("parsing enum: expecting '{'", token)
+ enum_type = None
+ if token != None and token[0] != "name":
+ self.lexer.push(token)
+ token = ("name", "enum")
+ else:
+ enum_type = token[1]
+ for enum in self.enums:
+ self.index_add(enum[0], self.filename,
+ not self.is_header, "enum",
+ (enum[1], enum[2], enum_type))
+ return token
+
+ elif token[0] == "name":
+ if self.type == "":
+ self.type = token[1]
+ else:
+ self.type = self.type + " " + token[1]
+ else:
+ self.error("parsing type %s: expecting a name" % (self.type),
+ token)
+ return token
+ token = self.token()
while token != None and (token[0] == "op" or
- token[0] == "name" and token[1] == "const"):
- self.type = self.type + " " + token[1]
- token = self.token()
-
- #
- # if there is a parenthesis here, this means a function type
- #
- if token != None and token[0] == "sep" and token[1] == '(':
- self.type = self.type + token[1]
- token = self.token()
- while token != None and token[0] == "op" and token[1] == '*':
- self.type = self.type + token[1]
- token = self.token()
- if token == None or token[0] != "name" :
- self.error("parsing function type, name expected", token);
- return token
- self.type = self.type + token[1]
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == ')':
- self.type = self.type + token[1]
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == '(':
- token = self.token()
- type = self.type;
- token = self.parseSignature(token);
- self.type = type;
- else:
- self.error("parsing function type, '(' expected", token);
- return token
- else:
- self.error("parsing function type, ')' expected", token);
- return token
- self.lexer.push(token)
- token = nametok
- return token
+ token[0] == "name" and token[1] == "const"):
+ self.type = self.type + " " + token[1]
+ token = self.token()
#
- # do some lookahead for arrays
- #
- if token != None and token[0] == "name":
- nametok = token
- token = self.token()
- if token != None and token[0] == "sep" and token[1] == '[':
- self.type = self.type + nametok[1]
- while token != None and token[0] == "sep" and token[1] == '[':
- self.type = self.type + token[1]
- token = self.token()
- while token != None and token[0] != 'sep' and \
- token[1] != ']' and token[1] != ';':
- self.type = self.type + token[1]
- token = self.token()
- if token != None and token[0] == 'sep' and token[1] == ']':
- self.type = self.type + token[1]
- token = self.token()
- else:
- self.error("parsing array type, ']' expected", token);
- return token
- elif token != None and token[0] == "sep" and token[1] == ':':
- # remove :12 in case it's a limited int size
- token = self.token()
- token = self.token()
- self.lexer.push(token)
- token = nametok
-
- return token
+ # if there is a parenthesis here, this means a function type
+ #
+ if token != None and token[0] == "sep" and token[1] == '(':
+ self.type = self.type + token[1]
+ token = self.token()
+ while token != None and token[0] == "op" and token[1] == '*':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token == None or token[0] != "name" :
+ self.error("parsing function type, name expected", token);
+ return token
+ self.type = self.type + token[1]
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == ')':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == '(':
+ token = self.token()
+ type = self.type;
+ token = self.parseSignature(token);
+ self.type = type;
+ else:
+ self.error("parsing function type, '(' expected", token);
+ return token
+ else:
+ self.error("parsing function type, ')' expected", token);
+ return token
+ self.lexer.push(token)
+ token = nametok
+ return token
+
+ #
+ # do some lookahead for arrays
+ #
+ if token != None and token[0] == "name":
+ nametok = token
+ token = self.token()
+ if token != None and token[0] == "sep" and token[1] == '[':
+ self.type = self.type + nametok[1]
+ while token != None and token[0] == "sep" and token[1] == '[':
+ self.type = self.type + token[1]
+ token = self.token()
+ while token != None and token[0] != 'sep' and \
+ token[1] != ']' and token[1] != ';':
+ self.type = self.type + token[1]
+ token = self.token()
+ if token != None and token[0] == 'sep' and token[1] == ']':
+ self.type = self.type + token[1]
+ token = self.token()
+ else:
+ self.error("parsing array type, ']' expected", token);
+ return token
+ elif token != None and token[0] == "sep" and token[1] == ':':
+ # remove :12 in case it's a limited int size
+ token = self.token()
+ token = self.token()
+ self.lexer.push(token)
+ token = nametok
+
+ return token
#
# Parse a signature: '(' has been parsed and we scan the type definition
# up to the ')' included
def parseSignature(self, token):
signature = []
- if token != None and token[0] == "sep" and token[1] == ')':
- self.signature = []
- token = self.token()
- return token
- while token != None:
- token = self.parseType(token)
- if token != None and token[0] == "name":
- signature.append((self.type, token[1], None))
- token = self.token()
- elif token != None and token[0] == "sep" and token[1] == ',':
- token = self.token()
- continue
- elif token != None and token[0] == "sep" and token[1] == ')':
- # only the type was provided
- if self.type == "...":
- signature.append((self.type, "...", None))
- else:
- signature.append((self.type, None, None))
- if token != None and token[0] == "sep":
- if token[1] == ',':
- token = self.token()
- continue
- elif token[1] == ')':
- token = self.token()
- break
- self.signature = signature
- return token
+ if token != None and token[0] == "sep" and token[1] == ')':
+ self.signature = []
+ token = self.token()
+ return token
+ while token != None:
+ token = self.parseType(token)
+ if token != None and token[0] == "name":
+ signature.append((self.type, token[1], None))
+ token = self.token()
+ elif token != None and token[0] == "sep" and token[1] == ',':
+ token = self.token()
+ continue
+ elif token != None and token[0] == "sep" and token[1] == ')':
+ # only the type was provided
+ if self.type == "...":
+ signature.append((self.type, "...", None))
+ else:
+ signature.append((self.type, None, None))
+ if token != None and token[0] == "sep":
+ if token[1] == ',':
+ token = self.token()
+ continue
+ elif token[1] == ')':
+ token = self.token()
+ break
+ self.signature = signature
+ return token
#
# Parse a global definition, be it a type, variable or function
@@ -1481,134 +1481,134 @@ class CParser:
def parseGlobal(self, token):
static = 0
if token[1] == 'extern':
- token = self.token()
- if token == None:
- return token
- if token[0] == 'string':
- if token[1] == 'C':
- token = self.token()
- if token == None:
- return token
- if token[0] == 'sep' and token[1] == "{":
- token = self.token()
-# print 'Entering extern "C line ', self.lineno()
- while token != None and (token[0] != 'sep' or
- token[1] != "}"):
- if token[0] == 'name':
- token = self.parseGlobal(token)
- else:
- self.error(
- "token %s %s unexpected at the top level" % (
- token[0], token[1]))
- token = self.parseGlobal(token)
-# print 'Exiting extern "C" line', self.lineno()
- token = self.token()
- return token
- else:
- return token
- elif token[1] == 'static':
- static = 1
- token = self.token()
- if token == None or token[0] != 'name':
- return token
-
- if token[1] == 'typedef':
- token = self.token()
- return self.parseTypedef(token)
- else:
- token = self.parseType(token)
- type_orig = self.type
- if token == None or token[0] != "name":
- return token
- type = type_orig
- self.name = token[1]
- token = self.token()
- while token != None and (token[0] == "sep" or token[0] == "op"):
- if token[0] == "sep":
- if token[1] == "[":
- type = type + token[1]
- token = self.token()
- while token != None and (token[0] != "sep" or \
- token[1] != ";"):
- type = type + token[1]
- token = self.token()
-
- if token != None and token[0] == "op" and token[1] == "=":
- #
- # Skip the initialization of the variable
- #
- token = self.token()
- if token[0] == 'sep' and token[1] == '{':
- token = self.token()
- token = self.parseBlock(token)
- else:
- self.comment = None
- while token != None and (token[0] != "sep" or \
- (token[1] != ';' and token[1] != ',')):
- token = self.token()
- self.comment = None
- if token == None or token[0] != "sep" or (token[1] != ';' and
- token[1] != ','):
- self.error("missing ';' or ',' after value")
-
- if token != None and token[0] == "sep":
- if token[1] == ";":
- self.comment = None
- token = self.token()
- if type == "struct":
- self.index_add(self.name, self.filename,
- not self.is_header, "struct", self.struct_fields)
- else:
- self.index_add(self.name, self.filename,
- not self.is_header, "variable", type)
- break
- elif token[1] == "(":
- token = self.token()
- token = self.parseSignature(token)
- if token == None:
- return None
- if token[0] == "sep" and token[1] == ";":
- d = self.mergeFunctionComment(self.name,
- ((type, None), self.signature), 1)
- self.index_add(self.name, self.filename, static,
- "function", d)
- token = self.token()
- elif token[0] == "sep" and token[1] == "{":
- d = self.mergeFunctionComment(self.name,
- ((type, None), self.signature), static)
- self.index_add(self.name, self.filename, static,
- "function", d)
- token = self.token()
- token = self.parseBlock(token);
- elif token[1] == ',':
- self.comment = None
- self.index_add(self.name, self.filename, static,
- "variable", type)
- type = type_orig
- token = self.token()
- while token != None and token[0] == "sep":
- type = type + token[1]
- token = self.token()
- if token != None and token[0] == "name":
- self.name = token[1]
- token = self.token()
- else:
- break
-
- return token
+ token = self.token()
+ if token == None:
+ return token
+ if token[0] == 'string':
+ if token[1] == 'C':
+ token = self.token()
+ if token == None:
+ return token
+ if token[0] == 'sep' and token[1] == "{":
+ token = self.token()
+# print 'Entering extern "C line ', self.lineno()
+ while token != None and (token[0] != 'sep' or
+ token[1] != "}"):
+ if token[0] == 'name':
+ token = self.parseGlobal(token)
+ else:
+ self.error(
+ "token %s %s unexpected at the top level" % (
+ token[0], token[1]))
+ token = self.parseGlobal(token)
+# print 'Exiting extern "C" line', self.lineno()
+ token = self.token()
+ return token
+ else:
+ return token
+ elif token[1] == 'static':
+ static = 1
+ token = self.token()
+ if token == None or token[0] != 'name':
+ return token
+
+ if token[1] == 'typedef':
+ token = self.token()
+ return self.parseTypedef(token)
+ else:
+ token = self.parseType(token)
+ type_orig = self.type
+ if token == None or token[0] != "name":
+ return token
+ type = type_orig
+ self.name = token[1]
+ token = self.token()
+ while token != None and (token[0] == "sep" or token[0] == "op"):
+ if token[0] == "sep":
+ if token[1] == "[":
+ type = type + token[1]
+ token = self.token()
+ while token != None and (token[0] != "sep" or \
+ token[1] != ";"):
+ type = type + token[1]
+ token = self.token()
+
+ if token != None and token[0] == "op" and token[1] == "=":
+ #
+ # Skip the initialization of the variable
+ #
+ token = self.token()
+ if token[0] == 'sep' and token[1] == '{':
+ token = self.token()
+ token = self.parseBlock(token)
+ else:
+ self.comment = None
+ while token != None and (token[0] != "sep" or \
+ (token[1] != ';' and token[1] != ',')):
+ token = self.token()
+ self.comment = None
+ if token == None or token[0] != "sep" or (token[1] != ';' and
+ token[1] != ','):
+ self.error("missing ';' or ',' after value")
+
+ if token != None and token[0] == "sep":
+ if token[1] == ";":
+ self.comment = None
+ token = self.token()
+ if type == "struct":
+ self.index_add(self.name, self.filename,
+ not self.is_header, "struct", self.struct_fields)
+ else:
+ self.index_add(self.name, self.filename,
+ not self.is_header, "variable", type)
+ break
+ elif token[1] == "(":
+ token = self.token()
+ token = self.parseSignature(token)
+ if token == None:
+ return None
+ if token[0] == "sep" and token[1] == ";":
+ d = self.mergeFunctionComment(self.name,
+ ((type, None), self.signature), 1)
+ self.index_add(self.name, self.filename, static,
+ "function", d)
+ token = self.token()
+ elif token[0] == "sep" and token[1] == "{":
+ d = self.mergeFunctionComment(self.name,
+ ((type, None), self.signature), static)
+ self.index_add(self.name, self.filename, static,
+ "function", d)
+ token = self.token()
+ token = self.parseBlock(token);
+ elif token[1] == ',':
+ self.comment = None
+ self.index_add(self.name, self.filename, static,
+ "variable", type)
+ type = type_orig
+ token = self.token()
+ while token != None and token[0] == "sep":
+ type = type + token[1]
+ token = self.token()
+ if token != None and token[0] == "name":
+ self.name = token[1]
+ token = self.token()
+ else:
+ break
+
+ return token
def parse(self):
self.warning("Parsing %s" % (self.filename))
token = self.token()
- while token != None:
+ while token != None:
if token[0] == 'name':
- token = self.parseGlobal(token)
+ token = self.parseGlobal(token)
else:
- self.error("token %s %s unexpected at the top level" % (
- token[0], token[1]))
- token = self.parseGlobal(token)
- return
- self.parseTopComment(self.top_comment)
+ self.error("token %s %s unexpected at the top level" % (
+ token[0], token[1]))
+ token = self.parseGlobal(token)
+ return
+ self.parseTopComment(self.top_comment)
return self.index
@@ -1618,449 +1618,449 @@ class docBuilder:
self.name = name
self.path = path
self.directories = directories
- self.includes = includes + included_files.keys()
- self.modules = {}
- self.headers = {}
- self.idx = index()
+ self.includes = includes + included_files.keys()
+ self.modules = {}
+ self.headers = {}
+ self.idx = index()
self.xref = {}
- self.index = {}
- self.basename = name
+ self.index = {}
+ self.basename = name
def indexString(self, id, str):
- if str == None:
- return
- str = string.replace(str, "'", ' ')
- str = string.replace(str, '"', ' ')
- str = string.replace(str, "/", ' ')
- str = string.replace(str, '*', ' ')
- str = string.replace(str, "[", ' ')
- str = string.replace(str, "]", ' ')
- str = string.replace(str, "(", ' ')
- str = string.replace(str, ")", ' ')
- str = string.replace(str, "<", ' ')
- str = string.replace(str, '>', ' ')
- str = string.replace(str, "&", ' ')
- str = string.replace(str, '#', ' ')
- str = string.replace(str, ",", ' ')
- str = string.replace(str, '.', ' ')
- str = string.replace(str, ';', ' ')
- tokens = string.split(str)
- for token in tokens:
- try:
- c = token[0]
- if string.find(string.letters, c) < 0:
- pass
- elif len(token) < 3:
- pass
- else:
- lower = string.lower(token)
- # TODO: generalize this a bit
- if lower == 'and' or lower == 'the':
- pass
- elif self.xref.has_key(token):
- self.xref[token].append(id)
- else:
- self.xref[token] = [id]
- except:
- pass
+ if str == None:
+ return
+ str = string.replace(str, "'", ' ')
+ str = string.replace(str, '"', ' ')
+ str = string.replace(str, "/", ' ')
+ str = string.replace(str, '*', ' ')
+ str = string.replace(str, "[", ' ')
+ str = string.replace(str, "]", ' ')
+ str = string.replace(str, "(", ' ')
+ str = string.replace(str, ")", ' ')
+ str = string.replace(str, "<", ' ')
+ str = string.replace(str, '>', ' ')
+ str = string.replace(str, "&", ' ')
+ str = string.replace(str, '#', ' ')
+ str = string.replace(str, ",", ' ')
+ str = string.replace(str, '.', ' ')
+ str = string.replace(str, ';', ' ')
+ tokens = string.split(str)
+ for token in tokens:
+ try:
+ c = token[0]
+ if string.find(string.letters, c) < 0:
+ pass
+ elif len(token) < 3:
+ pass
+ else:
+ lower = string.lower(token)
+ # TODO: generalize this a bit
+ if lower == 'and' or lower == 'the':
+ pass
+ elif self.xref.has_key(token):
+ self.xref[token].append(id)
+ else:
+ self.xref[token] = [id]
+ except:
+ pass
def analyze(self):
print "Project %s : %d headers, %d modules" % (self.name, len(self.headers.keys()), len(self.modules.keys()))
- self.idx.analyze()
+ self.idx.analyze()
def scanHeaders(self):
- for header in self.headers.keys():
- parser = CParser(header)
- idx = parser.parse()
- self.headers[header] = idx;
- self.idx.merge(idx)
+ for header in self.headers.keys():
+ parser = CParser(header)
+ idx = parser.parse()
+ self.headers[header] = idx;
+ self.idx.merge(idx)
def scanModules(self):
- for module in self.modules.keys():
- parser = CParser(module)
- idx = parser.parse()
- # idx.analyze()
- self.modules[module] = idx
- self.idx.merge_public(idx)
+ for module in self.modules.keys():
+ parser = CParser(module)
+ idx = parser.parse()
+ # idx.analyze()
+ self.modules[module] = idx
+ self.idx.merge_public(idx)
def scan(self):
for directory in self.directories:
- files = glob.glob(directory + "/*.c")
- for file in files:
- skip = 1
- for incl in self.includes:
- if string.find(file, incl) != -1:
- skip = 0;
- break
- if skip == 0:
- self.modules[file] = None;
- files = glob.glob(directory + "/*.h")
- for file in files:
- skip = 1
- for incl in self.includes:
- if string.find(file, incl) != -1:
- skip = 0;
- break
- if skip == 0:
- self.headers[file] = None;
- self.scanHeaders()
- self.scanModules()
+ files = glob.glob(directory + "/*.c")
+ for file in files:
+ skip = 1
+ for incl in self.includes:
+ if string.find(file, incl) != -1:
+ skip = 0;
+ break
+ if skip == 0:
+ self.modules[file] = None;
+ files = glob.glob(directory + "/*.h")
+ for file in files:
+ skip = 1
+ for incl in self.includes:
+ if string.find(file, incl) != -1:
+ skip = 0;
+ break
+ if skip == 0:
+ self.headers[file] = None;
+ self.scanHeaders()
+ self.scanModules()
def modulename_file(self, file):
module = os.path.basename(file)
- if module[-2:] == '.h':
- module = module[:-2]
- elif module[-2:] == '.c':
- module = module[:-2]
- return module
+ if module[-2:] == '.h':
+ module = module[:-2]
+ elif module[-2:] == '.c':
+ module = module[:-2]
+ return module
def serialize_enum(self, output, name):
id = self.idx.enums[name]
output.write(" <enum name='%s' file='%s'" % (name,
- self.modulename_file(id.header)))
- if id.info != None:
- info = id.info
- if info[0] != None and info[0] != '':
- try:
- val = eval(info[0])
- except:
- val = info[0]
- output.write(" value='%s'" % (val));
- if info[2] != None and info[2] != '':
- output.write(" type='%s'" % info[2]);
- if info[1] != None and info[1] != '':
- output.write(" info='%s'" % escape(info[1]));
+ self.modulename_file(id.header)))
+ if id.info != None:
+ info = id.info
+ if info[0] != None and info[0] != '':
+ try:
+ val = eval(info[0])
+ except:
+ val = info[0]
+ output.write(" value='%s'" % (val));
+ if info[2] != None and info[2] != '':
+ output.write(" type='%s'" % info[2]);
+ if info[1] != None and info[1] != '':
+ output.write(" info='%s'" % escape(info[1]));
output.write("/>\n")
def serialize_macro(self, output, name):
id = self.idx.macros[name]
output.write(" <macro name='%s' file='%s'>\n" % (name,
- self.modulename_file(id.header)))
- if id.info != None:
+ self.modulename_file(id.header)))
+ if id.info != None:
try:
- (args, desc) = id.info
- if desc != None and desc != "":
- output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
- self.indexString(name, desc)
- for arg in args:
- (name, desc) = arg
- if desc != None and desc != "":
- output.write(" <arg name='%s' info='%s'/>\n" % (
- name, escape(desc)))
- self.indexString(name, desc)
- else:
- output.write(" <arg name='%s'/>\n" % (name))
+ (args, desc) = id.info
+ if desc != None and desc != "":
+ output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
+ self.indexString(name, desc)
+ for arg in args:
+ (name, desc) = arg
+ if desc != None and desc != "":
+ output.write(" <arg name='%s' info='%s'/>\n" % (
+ name, escape(desc)))
+ self.indexString(name, desc)
+ else:
+ output.write(" <arg name='%s'/>\n" % (name))
except:
pass
output.write(" </macro>\n")
def serialize_typedef(self, output, name):
id = self.idx.typedefs[name]
- if id.info[0:7] == 'struct ':
- output.write(" <struct name='%s' file='%s' type='%s'" % (
- name, self.modulename_file(id.header), id.info))
- name = id.info[7:]
- if self.idx.structs.has_key(name) and ( \
- type(self.idx.structs[name].info) == type(()) or
- type(self.idx.structs[name].info) == type([])):
- output.write(">\n");
- try:
- for field in self.idx.structs[name].info:
- desc = field[2]
- self.indexString(name, desc)
- if desc == None:
- desc = ''
- else:
- desc = escape(desc)
- output.write(" <field name='%s' type='%s' info='%s'/>\n" % (field[1] , field[0], desc))
- except:
- print "Failed to serialize struct %s" % (name)
- output.write(" </struct>\n")
- else:
- output.write("/>\n");
- else :
- output.write(" <typedef name='%s' file='%s' type='%s'" % (
- name, self.modulename_file(id.header), id.info))
+ if id.info[0:7] == 'struct ':
+ output.write(" <struct name='%s' file='%s' type='%s'" % (
+ name, self.modulename_file(id.header), id.info))
+ name = id.info[7:]
+ if self.idx.structs.has_key(name) and ( \
+ type(self.idx.structs[name].info) == type(()) or
+ type(self.idx.structs[name].info) == type([])):
+ output.write(">\n");
+ try:
+ for field in self.idx.structs[name].info:
+ desc = field[2]
+ self.indexString(name, desc)
+ if desc == None:
+ desc = ''
+ else:
+ desc = escape(desc)
+ output.write(" <field name='%s' type='%s' info='%s'/>\n" % (field[1] , field[0], desc))
+ except:
+ print "Failed to serialize struct %s" % (name)
+ output.write(" </struct>\n")
+ else:
+ output.write("/>\n");
+ else :
+ output.write(" <typedef name='%s' file='%s' type='%s'" % (
+ name, self.modulename_file(id.header), id.info))
try:
- desc = id.extra
- if desc != None and desc != "":
- output.write(">\n <info><![CDATA[%s]]></info>\n" % (desc))
- output.write(" </typedef>\n")
- else:
- output.write("/>\n")
- except:
- output.write("/>\n")
+ desc = id.extra
+ if desc != None and desc != "":
+ output.write(">\n <info><![CDATA[%s]]></info>\n" % (desc))
+ output.write(" </typedef>\n")
+ else:
+ output.write("/>\n")
+ except:
+ output.write("/>\n")
def serialize_variable(self, output, name):
id = self.idx.variables[name]
- if id.info != None:
- output.write(" <variable name='%s' file='%s' type='%s'/>\n" % (
- name, self.modulename_file(id.header), id.info))
- else:
- output.write(" <variable name='%s' file='%s'/>\n" % (
- name, self.modulename_file(id.header)))
+ if id.info != None:
+ output.write(" <variable name='%s' file='%s' type='%s'/>\n" % (
+ name, self.modulename_file(id.header), id.info))
+ else:
+ output.write(" <variable name='%s' file='%s'/>\n" % (
+ name, self.modulename_file(id.header)))
def serialize_function(self, output, name):
id = self.idx.functions[name]
- if name == debugsym:
- print "=>", id
+ if name == debugsym:
+ print "=>", id
output.write(" <%s name='%s' file='%s' module='%s'>\n" % (id.type,
- name, self.modulename_file(id.header),
- self.modulename_file(id.module)))
- #
- # Processing of conditionals modified by Bill 1/1/05
- #
- if id.conditionals != None:
- apstr = ""
- for cond in id.conditionals:
- if apstr != "":
- apstr = apstr + " && "
- apstr = apstr + cond
- output.write(" <cond>%s</cond>\n"% (apstr));
- try:
- (ret, params, desc) = id.info
- output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
- self.indexString(name, desc)
- if ret[0] != None:
- if ret[0] == "void":
- output.write(" <return type='void'/>\n")
- else:
- output.write(" <return type='%s' info='%s'/>\n" % (
- ret[0], escape(ret[1])))
- self.indexString(name, ret[1])
- for param in params:
- if param[0] == 'void':
- continue
- if param[2] == None:
- output.write(" <arg name='%s' type='%s' info=''/>\n" % (param[1], param[0]))
- else:
- output.write(" <arg name='%s' type='%s' info='%s'/>\n" % (param[1], param[0], escape(param[2])))
- self.indexString(name, param[2])
- except:
- print "Failed to save function %s info: " % name, `id.info`
+ name, self.modulename_file(id.header),
+ self.modulename_file(id.module)))
+ #
+ # Processing of conditionals modified by Bill 1/1/05
+ #
+ if id.conditionals != None:
+ apstr = ""
+ for cond in id.conditionals:
+ if apstr != "":
+ apstr = apstr + " && "
+ apstr = apstr + cond
+ output.write(" <cond>%s</cond>\n"% (apstr));
+ try:
+ (ret, params, desc) = id.info
+ output.write(" <info><![CDATA[%s]]></info>\n" % (desc))
+ self.indexString(name, desc)
+ if ret[0] != None:
+ if ret[0] == "void":
+ output.write(" <return type='void'/>\n")
+ else:
+ output.write(" <return type='%s' info='%s'/>\n" % (
+ ret[0], escape(ret[1])))
+ self.indexString(name, ret[1])
+ for param in params:
+ if param[0] == 'void':
+ continue
+ if param[2] == None:
+ output.write(" <arg name='%s' type='%s' info=''/>\n" % (param[1], param[0]))
+ else:
+ output.write(" <arg name='%s' type='%s' info='%s'/>\n" % (param[1], param[0], escape(param[2])))
+ self.indexString(name, param[2])
+ except:
+ print "Failed to save function %s info: " % name, `id.info`
output.write(" </%s>\n" % (id.type))
def serialize_exports(self, output, file):
module = self.modulename_file(file)
- output.write(" <file name='%s'>\n" % (module))
- dict = self.headers[file]
- if dict.info != None:
- for data in ('Summary', 'Description', 'Author'):
- try:
- output.write(" <%s>%s</%s>\n" % (
- string.lower(data),
- escape(dict.info[data]),
- string.lower(data)))
- except:
- print "Header %s lacks a %s description" % (module, data)
- if dict.info.has_key('Description'):
- desc = dict.info['Description']
- if string.find(desc, "DEPRECATED") != -1:
- output.write(" <deprecated/>\n")
+ output.write(" <file name='%s'>\n" % (module))
+ dict = self.headers[file]
+ if dict.info != None:
+ for data in ('Summary', 'Description', 'Author'):
+ try:
+ output.write(" <%s>%s</%s>\n" % (
+ string.lower(data),
+ escape(dict.info[data]),
+ string.lower(data)))
+ except:
+ print "Header %s lacks a %s description" % (module, data)
+ if dict.info.has_key('Description'):
+ desc = dict.info['Description']
+ if string.find(desc, "DEPRECATED") != -1:
+ output.write(" <deprecated/>\n")
ids = dict.macros.keys()
- ids.sort()
- for id in uniq(ids):
- # Macros are sometime used to masquerade other types.
- if dict.functions.has_key(id):
- continue
- if dict.variables.has_key(id):
- continue
- if dict.typedefs.has_key(id):
- continue
- if dict.structs.has_key(id):
- continue
- if dict.enums.has_key(id):
- continue
- output.write(" <exports symbol='%s' type='macro'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+            # Macros are sometimes used to masquerade as other types.
+ if dict.functions.has_key(id):
+ continue
+ if dict.variables.has_key(id):
+ continue
+ if dict.typedefs.has_key(id):
+ continue
+ if dict.structs.has_key(id):
+ continue
+ if dict.enums.has_key(id):
+ continue
+ output.write(" <exports symbol='%s' type='macro'/>\n" % (id))
ids = dict.enums.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='enum'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='enum'/>\n" % (id))
ids = dict.typedefs.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='typedef'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='typedef'/>\n" % (id))
ids = dict.structs.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='struct'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='struct'/>\n" % (id))
ids = dict.variables.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='variable'/>\n" % (id))
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='variable'/>\n" % (id))
ids = dict.functions.keys()
- ids.sort()
- for id in uniq(ids):
- output.write(" <exports symbol='%s' type='function'/>\n" % (id))
- output.write(" </file>\n")
+ ids.sort()
+ for id in uniq(ids):
+ output.write(" <exports symbol='%s' type='function'/>\n" % (id))
+ output.write(" </file>\n")
def serialize_xrefs_files(self, output):
headers = self.headers.keys()
headers.sort()
for file in headers:
- module = self.modulename_file(file)
- output.write(" <file name='%s'>\n" % (module))
- dict = self.headers[file]
- ids = uniq(dict.functions.keys() + dict.variables.keys() + \
- dict.macros.keys() + dict.typedefs.keys() + \
- dict.structs.keys() + dict.enums.keys())
- ids.sort()
- for id in ids:
- output.write(" <ref name='%s'/>\n" % (id))
- output.write(" </file>\n")
+ module = self.modulename_file(file)
+ output.write(" <file name='%s'>\n" % (module))
+ dict = self.headers[file]
+ ids = uniq(dict.functions.keys() + dict.variables.keys() + \
+ dict.macros.keys() + dict.typedefs.keys() + \
+ dict.structs.keys() + dict.enums.keys())
+ ids.sort()
+ for id in ids:
+ output.write(" <ref name='%s'/>\n" % (id))
+ output.write(" </file>\n")
pass
def serialize_xrefs_functions(self, output):
funcs = {}
- for name in self.idx.functions.keys():
- id = self.idx.functions[name]
- try:
- (ret, params, desc) = id.info
- for param in params:
- if param[0] == 'void':
- continue
- if funcs.has_key(param[0]):
- funcs[param[0]].append(name)
- else:
- funcs[param[0]] = [name]
- except:
- pass
- typ = funcs.keys()
- typ.sort()
- for type in typ:
- if type == '' or type == 'void' or type == "int" or \
- type == "char *" or type == "const char *" :
- continue
- output.write(" <type name='%s'>\n" % (type))
- ids = funcs[type]
- ids.sort()
- pid = '' # not sure why we have dups, but get rid of them!
- for id in ids:
- if id != pid:
- output.write(" <ref name='%s'/>\n" % (id))
- pid = id
- output.write(" </type>\n")
+ for name in self.idx.functions.keys():
+ id = self.idx.functions[name]
+ try:
+ (ret, params, desc) = id.info
+ for param in params:
+ if param[0] == 'void':
+ continue
+ if funcs.has_key(param[0]):
+ funcs[param[0]].append(name)
+ else:
+ funcs[param[0]] = [name]
+ except:
+ pass
+ typ = funcs.keys()
+ typ.sort()
+ for type in typ:
+ if type == '' or type == 'void' or type == "int" or \
+ type == "char *" or type == "const char *" :
+ continue
+ output.write(" <type name='%s'>\n" % (type))
+ ids = funcs[type]
+ ids.sort()
+ pid = '' # not sure why we have dups, but get rid of them!
+ for id in ids:
+ if id != pid:
+ output.write(" <ref name='%s'/>\n" % (id))
+ pid = id
+ output.write(" </type>\n")
def serialize_xrefs_constructors(self, output):
funcs = {}
- for name in self.idx.functions.keys():
- id = self.idx.functions[name]
- try:
- (ret, params, desc) = id.info
- if ret[0] == "void":
- continue
- if funcs.has_key(ret[0]):
- funcs[ret[0]].append(name)
- else:
- funcs[ret[0]] = [name]
- except:
- pass
- typ = funcs.keys()
- typ.sort()
- for type in typ:
- if type == '' or type == 'void' or type == "int" or \
- type == "char *" or type == "const char *" :
- continue
- output.write(" <type name='%s'>\n" % (type))
- ids = funcs[type]
- ids.sort()
- for id in ids:
- output.write(" <ref name='%s'/>\n" % (id))
- output.write(" </type>\n")
+ for name in self.idx.functions.keys():
+ id = self.idx.functions[name]
+ try:
+ (ret, params, desc) = id.info
+ if ret[0] == "void":
+ continue
+ if funcs.has_key(ret[0]):
+ funcs[ret[0]].append(name)
+ else:
+ funcs[ret[0]] = [name]
+ except:
+ pass
+ typ = funcs.keys()
+ typ.sort()
+ for type in typ:
+ if type == '' or type == 'void' or type == "int" or \
+ type == "char *" or type == "const char *" :
+ continue
+ output.write(" <type name='%s'>\n" % (type))
+ ids = funcs[type]
+ ids.sort()
+ for id in ids:
+ output.write(" <ref name='%s'/>\n" % (id))
+ output.write(" </type>\n")
def serialize_xrefs_alpha(self, output):
- letter = None
- ids = self.idx.identifiers.keys()
- ids.sort()
- for id in ids:
- if id[0] != letter:
- if letter != None:
- output.write(" </letter>\n")
- letter = id[0]
- output.write(" <letter name='%s'>\n" % (letter))
- output.write(" <ref name='%s'/>\n" % (id))
- if letter != None:
- output.write(" </letter>\n")
+ letter = None
+ ids = self.idx.identifiers.keys()
+ ids.sort()
+ for id in ids:
+ if id[0] != letter:
+ if letter != None:
+ output.write(" </letter>\n")
+ letter = id[0]
+ output.write(" <letter name='%s'>\n" % (letter))
+ output.write(" <ref name='%s'/>\n" % (id))
+ if letter != None:
+ output.write(" </letter>\n")
def serialize_xrefs_references(self, output):
typ = self.idx.identifiers.keys()
- typ.sort()
- for id in typ:
- idf = self.idx.identifiers[id]
- module = idf.header
- output.write(" <reference name='%s' href='%s'/>\n" % (id,
- 'html/' + self.basename + '-' +
- self.modulename_file(module) + '.html#' +
- id))
+ typ.sort()
+ for id in typ:
+ idf = self.idx.identifiers[id]
+ module = idf.header
+ output.write(" <reference name='%s' href='%s'/>\n" % (id,
+ 'html/' + self.basename + '-' +
+ self.modulename_file(module) + '.html#' +
+ id))
def serialize_xrefs_index(self, output):
index = self.xref
- typ = index.keys()
- typ.sort()
- letter = None
- count = 0
- chunk = 0
- chunks = []
- for id in typ:
- if len(index[id]) > 30:
- continue
- if id[0] != letter:
- if letter == None or count > 200:
- if letter != None:
- output.write(" </letter>\n")
- output.write(" </chunk>\n")
- count = 0
- chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
- output.write(" <chunk name='chunk%s'>\n" % (chunk))
- first_letter = id[0]
- chunk = chunk + 1
- elif letter != None:
- output.write(" </letter>\n")
- letter = id[0]
- output.write(" <letter name='%s'>\n" % (letter))
- output.write(" <word name='%s'>\n" % (id))
- tokens = index[id];
- tokens.sort()
- tok = None
- for token in tokens:
- if tok == token:
- continue
- tok = token
- output.write(" <ref name='%s'/>\n" % (token))
- count = count + 1
- output.write(" </word>\n")
- if letter != None:
- output.write(" </letter>\n")
- output.write(" </chunk>\n")
- if count != 0:
- chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
- output.write(" <chunks>\n")
- for ch in chunks:
- output.write(" <chunk name='%s' start='%s' end='%s'/>\n" % (
- ch[0], ch[1], ch[2]))
- output.write(" </chunks>\n")
+ typ = index.keys()
+ typ.sort()
+ letter = None
+ count = 0
+ chunk = 0
+ chunks = []
+ for id in typ:
+ if len(index[id]) > 30:
+ continue
+ if id[0] != letter:
+ if letter == None or count > 200:
+ if letter != None:
+ output.write(" </letter>\n")
+ output.write(" </chunk>\n")
+ count = 0
+ chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
+ output.write(" <chunk name='chunk%s'>\n" % (chunk))
+ first_letter = id[0]
+ chunk = chunk + 1
+ elif letter != None:
+ output.write(" </letter>\n")
+ letter = id[0]
+ output.write(" <letter name='%s'>\n" % (letter))
+ output.write(" <word name='%s'>\n" % (id))
+ tokens = index[id];
+ tokens.sort()
+ tok = None
+ for token in tokens:
+ if tok == token:
+ continue
+ tok = token
+ output.write(" <ref name='%s'/>\n" % (token))
+ count = count + 1
+ output.write(" </word>\n")
+ if letter != None:
+ output.write(" </letter>\n")
+ output.write(" </chunk>\n")
+ if count != 0:
+ chunks.append(["chunk%s" % (chunk -1), first_letter, letter])
+ output.write(" <chunks>\n")
+ for ch in chunks:
+ output.write(" <chunk name='%s' start='%s' end='%s'/>\n" % (
+ ch[0], ch[1], ch[2]))
+ output.write(" </chunks>\n")
def serialize_xrefs(self, output):
- output.write(" <references>\n")
- self.serialize_xrefs_references(output)
- output.write(" </references>\n")
- output.write(" <alpha>\n")
- self.serialize_xrefs_alpha(output)
- output.write(" </alpha>\n")
- output.write(" <constructors>\n")
- self.serialize_xrefs_constructors(output)
- output.write(" </constructors>\n")
- output.write(" <functions>\n")
- self.serialize_xrefs_functions(output)
- output.write(" </functions>\n")
- output.write(" <files>\n")
- self.serialize_xrefs_files(output)
- output.write(" </files>\n")
- output.write(" <index>\n")
- self.serialize_xrefs_index(output)
- output.write(" </index>\n")
+ output.write(" <references>\n")
+ self.serialize_xrefs_references(output)
+ output.write(" </references>\n")
+ output.write(" <alpha>\n")
+ self.serialize_xrefs_alpha(output)
+ output.write(" </alpha>\n")
+ output.write(" <constructors>\n")
+ self.serialize_xrefs_constructors(output)
+ output.write(" </constructors>\n")
+ output.write(" <functions>\n")
+ self.serialize_xrefs_functions(output)
+ output.write(" </functions>\n")
+ output.write(" <files>\n")
+ self.serialize_xrefs_files(output)
+ output.write(" </files>\n")
+ output.write(" <index>\n")
+ self.serialize_xrefs_index(output)
+ output.write(" </index>\n")
def serialize(self):
filename = "%s/%s-api.xml" % (self.path, self.name)
@@ -2124,10 +2124,10 @@ def rebuild():
print "Rebuilding API description for libvirt"
builder = docBuilder("libvirt", srcdir,
["src", "src/util", "include/libvirt"],
- [])
+ [])
else:
print "rebuild() failed, unable to guess the module"
- return None
+ return None
builder.scan()
builder.analyze()
builder.serialize()
@@ -2146,4 +2146,4 @@ if __name__ == "__main__":
debug = 1
parse(sys.argv[1])
else:
- rebuild()
+ rebuild()
diff --git a/docs/index.py b/docs/index.py
index 17683c5..df6bd81 100755
--- a/docs/index.py
+++ b/docs/index.py
@@ -61,56 +61,56 @@ libxml2.registerErrorHandler(callback, None)
TABLES={
"symbols" : """CREATE TABLE symbols (
name varchar(255) BINARY NOT NULL,
- module varchar(255) BINARY NOT NULL,
+ module varchar(255) BINARY NOT NULL,
type varchar(25) NOT NULL,
- descr varchar(255),
- UNIQUE KEY name (name),
- KEY module (module))""",
+ descr varchar(255),
+ UNIQUE KEY name (name),
+ KEY module (module))""",
"words" : """CREATE TABLE words (
name varchar(50) BINARY NOT NULL,
- symbol varchar(255) BINARY NOT NULL,
+ symbol varchar(255) BINARY NOT NULL,
relevance int,
- KEY name (name),
- KEY symbol (symbol),
- UNIQUE KEY ID (name, symbol))""",
+ KEY name (name),
+ KEY symbol (symbol),
+ UNIQUE KEY ID (name, symbol))""",
"wordsHTML" : """CREATE TABLE wordsHTML (
name varchar(50) BINARY NOT NULL,
- resource varchar(255) BINARY NOT NULL,
- section varchar(255),
- id varchar(50),
+ resource varchar(255) BINARY NOT NULL,
+ section varchar(255),
+ id varchar(50),
relevance int,
- KEY name (name),
- KEY resource (resource),
- UNIQUE KEY ref (name, resource))""",
+ KEY name (name),
+ KEY resource (resource),
+ UNIQUE KEY ref (name, resource))""",
"wordsArchive" : """CREATE TABLE wordsArchive (
name varchar(50) BINARY NOT NULL,
- ID int(11) NOT NULL,
+ ID int(11) NOT NULL,
relevance int,
- KEY name (name),
- UNIQUE KEY ref (name, ID))""",
+ KEY name (name),
+ UNIQUE KEY ref (name, ID))""",
"pages" : """CREATE TABLE pages (
resource varchar(255) BINARY NOT NULL,
- title varchar(255) BINARY NOT NULL,
- UNIQUE KEY name (resource))""",
+ title varchar(255) BINARY NOT NULL,
+ UNIQUE KEY name (resource))""",
"archives" : """CREATE TABLE archives (
ID int(11) NOT NULL auto_increment,
resource varchar(255) BINARY NOT NULL,
- title varchar(255) BINARY NOT NULL,
- UNIQUE KEY id (ID,resource(255)),
- INDEX (ID),
- INDEX (resource))""",
+ title varchar(255) BINARY NOT NULL,
+ UNIQUE KEY id (ID,resource(255)),
+ INDEX (ID),
+ INDEX (resource))""",
"Queries" : """CREATE TABLE Queries (
ID int(11) NOT NULL auto_increment,
- Value varchar(50) NOT NULL,
- Count int(11) NOT NULL,
- UNIQUE KEY id (ID,Value(35)),
- INDEX (ID))""",
+ Value varchar(50) NOT NULL,
+ Count int(11) NOT NULL,
+ UNIQUE KEY id (ID,Value(35)),
+ INDEX (ID))""",
"AllQueries" : """CREATE TABLE AllQueries (
ID int(11) NOT NULL auto_increment,
- Value varchar(50) NOT NULL,
- Count int(11) NOT NULL,
- UNIQUE KEY id (ID,Value(35)),
- INDEX (ID))""",
+ Value varchar(50) NOT NULL,
+ Count int(11) NOT NULL,
+ UNIQUE KEY id (ID,Value(35)),
+ INDEX (ID))""",
}
#
@@ -120,9 +120,9 @@ API="libvirt-api.xml"
DB=None
#########################################################################
-# #
-# MySQL database interfaces #
-# #
+# #
+# MySQL database interfaces #
+# #
#########################################################################
def createTable(db, name):
global TABLES
@@ -141,7 +141,7 @@ def createTable(db, name):
ret = c.execute(TABLES[name])
except:
print "Failed to create table %s" % (name)
- return -1
+ return -1
return ret
def checkTables(db, verbose = 1):
@@ -152,38 +152,38 @@ def checkTables(db, verbose = 1):
c = db.cursor()
nbtables = c.execute("show tables")
if verbose:
- print "Found %d tables" % (nbtables)
+ print "Found %d tables" % (nbtables)
tables = {}
i = 0
while i < nbtables:
l = c.fetchone()
- name = l[0]
- tables[name] = {}
+ name = l[0]
+ tables[name] = {}
i = i + 1
for table in TABLES.keys():
if not tables.has_key(table):
- print "table %s missing" % (table)
- createTable(db, table)
- try:
- ret = c.execute("SELECT count(*) from %s" % table);
- row = c.fetchone()
- if verbose:
- print "Table %s contains %d records" % (table, row[0])
- except:
- print "Troubles with table %s : repairing" % (table)
- ret = c.execute("repair table %s" % table);
- print "repairing returned %d" % (ret)
- ret = c.execute("SELECT count(*) from %s" % table);
- row = c.fetchone()
- print "Table %s contains %d records" % (table, row[0])
+ print "table %s missing" % (table)
+ createTable(db, table)
+ try:
+ ret = c.execute("SELECT count(*) from %s" % table);
+ row = c.fetchone()
+ if verbose:
+ print "Table %s contains %d records" % (table, row[0])
+ except:
+ print "Troubles with table %s : repairing" % (table)
+ ret = c.execute("repair table %s" % table);
+ print "repairing returned %d" % (ret)
+ ret = c.execute("SELECT count(*) from %s" % table);
+ row = c.fetchone()
+ print "Table %s contains %d records" % (table, row[0])
if verbose:
- print "checkTables finished"
+ print "checkTables finished"
# make sure apache can access the tables read-only
try:
- ret = c.execute("GRANT SELECT ON libvir.* TO nobody@localhost")
- ret = c.execute("GRANT INSERT,SELECT,UPDATE ON libvir.Queries TO nobody@localhost")
+ ret = c.execute("GRANT SELECT ON libvir.* TO nobody@localhost")
+ ret = c.execute("GRANT INSERT,SELECT,UPDATE ON libvir.Queries TO nobody@localhost")
except:
pass
return 0
@@ -193,10 +193,10 @@ def openMySQL(db="libvir", passwd=None, verbose = 1):
if passwd == None:
try:
- passwd = os.environ["MySQL_PASS"]
- except:
- print "No password available, set environment MySQL_PASS"
- sys.exit(1)
+ passwd = os.environ["MySQL_PASS"]
+ except:
+ print "No password available, set environment MySQL_PASS"
+ sys.exit(1)
DB = MySQLdb.connect(passwd=passwd, db=db)
if DB == None:
@@ -218,19 +218,19 @@ def updateWord(name, symbol, relevance):
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO words (name, symbol, relevance) VALUES ('%s','%s', %d)""" %
- (name, symbol, relevance))
+ (name, symbol, relevance))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE words SET relevance = %d where name = '%s' and symbol = '%s'""" %
- (relevance, name, symbol))
- except:
- print "Update word (%s, %s, %s) failed command" % (name, symbol, relevance)
- print "UPDATE words SET relevance = %d where name = '%s' and symbol = '%s'" % (relevance, name, symbol)
- print sys.exc_type, sys.exc_value
- return -1
+ (relevance, name, symbol))
+ except:
+ print "Update word (%s, %s, %s) failed command" % (name, symbol, relevance)
+ print "UPDATE words SET relevance = %d where name = '%s' and symbol = '%s'" % (relevance, name, symbol)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -250,28 +250,28 @@ def updateSymbol(name, module, type, desc):
return -1
try:
- desc = string.replace(desc, "'", " ")
- l = string.split(desc, ".")
- desc = l[0]
- desc = desc[0:99]
+ desc = string.replace(desc, "'", " ")
+ l = string.split(desc, ".")
+ desc = l[0]
+ desc = desc[0:99]
except:
desc = ""
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO symbols (name, module, type, descr) VALUES ('%s','%s', '%s', '%s')""" %
(name, module, type, desc))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE symbols SET module='%s', type='%s', descr='%s' where name='%s'""" %
(module, type, desc, name))
except:
- print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
- print """UPDATE symbols SET module='%s', type='%s', descr='%s' where name='%s'""" % (module, type, desc, name)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
+ print """UPDATE symbols SET module='%s', type='%s', descr='%s' where name='%s'""" % (module, type, desc, name)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -308,19 +308,19 @@ def addPage(resource, title):
c = DB.cursor()
try:
- ret = c.execute(
- """INSERT INTO pages (resource, title) VALUES ('%s','%s')""" %
+ ret = c.execute(
+ """INSERT INTO pages (resource, title) VALUES ('%s','%s')""" %
(resource, title))
except:
try:
- ret = c.execute(
- """UPDATE pages SET title='%s' WHERE resource='%s'""" %
+ ret = c.execute(
+ """UPDATE pages SET title='%s' WHERE resource='%s'""" %
(title, resource))
except:
- print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
- print """UPDATE pages SET title='%s' WHERE resource='%s'""" % (title, resource)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update symbol (%s, %s, %s) failed command" % (name, module, type)
+ print """UPDATE pages SET title='%s' WHERE resource='%s'""" % (title, resource)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -340,27 +340,27 @@ def updateWordHTML(name, resource, desc, id, relevance):
if desc == None:
desc = ""
else:
- try:
- desc = string.replace(desc, "'", " ")
- desc = desc[0:99]
- except:
- desc = ""
+ try:
+ desc = string.replace(desc, "'", " ")
+ desc = desc[0:99]
+ except:
+ desc = ""
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO wordsHTML (name, resource, section, id, relevance) VALUES ('%s','%s', '%s', '%s', '%d')""" %
(name, resource, desc, id, relevance))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE wordsHTML SET section='%s', id='%s', relevance='%d' where name='%s' and resource='%s'""" %
(desc, id, relevance, name, resource))
except:
- print "Update symbol (%s, %s, %d) failed command" % (name, resource, relevance)
- print """UPDATE wordsHTML SET section='%s', id='%s', relevance='%d' where name='%s' and resource='%s'""" % (desc, id, relevance, name, resource)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update symbol (%s, %s, %d) failed command" % (name, resource, relevance)
+ print """UPDATE wordsHTML SET section='%s', id='%s', relevance='%d' where name='%s' and resource='%s'""" % (desc, id, relevance, name, resource)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
@@ -376,13 +376,13 @@ def checkXMLMsgArchive(url):
c = DB.cursor()
try:
- ret = c.execute(
- """SELECT ID FROM archives WHERE resource='%s'""" % (url))
- row = c.fetchone()
- if row == None:
- return -1
+ ret = c.execute(
+ """SELECT ID FROM archives WHERE resource='%s'""" % (url))
+ row = c.fetchone()
+ if row == None:
+ return -1
except:
- return -1
+ return -1
return row[0]
@@ -398,22 +398,22 @@ def addXMLMsgArchive(url, title):
if title == None:
title = ""
else:
- title = string.replace(title, "'", " ")
- title = title[0:99]
+ title = string.replace(title, "'", " ")
+ title = title[0:99]
c = DB.cursor()
try:
cmd = """INSERT INTO archives (resource, title) VALUES ('%s','%s')""" % (url, title)
ret = c.execute(cmd)
- cmd = """SELECT ID FROM archives WHERE resource='%s'""" % (url)
+ cmd = """SELECT ID FROM archives WHERE resource='%s'""" % (url)
ret = c.execute(cmd)
- row = c.fetchone()
- if row == None:
- print "addXMLMsgArchive failed to get the ID: %s" % (url)
- return -1
+ row = c.fetchone()
+ if row == None:
+ print "addXMLMsgArchive failed to get the ID: %s" % (url)
+ return -1
except:
print "addXMLMsgArchive failed command: %s" % (cmd)
- return -1
+ return -1
return((int)(row[0]))
@@ -431,26 +431,26 @@ def updateWordArchive(name, id, relevance):
c = DB.cursor()
try:
- ret = c.execute(
+ ret = c.execute(
"""INSERT INTO wordsArchive (name, id, relevance) VALUES ('%s', '%d', '%d')""" %
(name, id, relevance))
except:
try:
- ret = c.execute(
+ ret = c.execute(
"""UPDATE wordsArchive SET relevance='%d' where name='%s' and ID='%d'""" %
(relevance, name, id))
except:
- print "Update word archive (%s, %d, %d) failed command" % (name, id, relevance)
- print """UPDATE wordsArchive SET relevance='%d' where name='%s' and ID='%d'""" % (relevance, name, id)
- print sys.exc_type, sys.exc_value
- return -1
+ print "Update word archive (%s, %d, %d) failed command" % (name, id, relevance)
+ print """UPDATE wordsArchive SET relevance='%d' where name='%s' and ID='%d'""" % (relevance, name, id)
+ print sys.exc_type, sys.exc_value
+ return -1
return ret
#########################################################################
-# #
-# Word dictionary and analysis routines #
-# #
+# #
+# Word dictionary and analysis routines #
+# #
#########################################################################
#
@@ -516,18 +516,18 @@ def splitIdentifier(str):
ret = []
while str != "":
cur = string.lower(str[0])
- str = str[1:]
- if ((cur < 'a') or (cur > 'z')):
- continue
- while (str != "") and (str[0] >= 'A') and (str[0] <= 'Z'):
- cur = cur + string.lower(str[0])
- str = str[1:]
- while (str != "") and (str[0] >= 'a') and (str[0] <= 'z'):
- cur = cur + str[0]
- str = str[1:]
- while (str != "") and (str[0] >= '0') and (str[0] <= '9'):
- str = str[1:]
- ret.append(cur)
+ str = str[1:]
+ if ((cur < 'a') or (cur > 'z')):
+ continue
+ while (str != "") and (str[0] >= 'A') and (str[0] <= 'Z'):
+ cur = cur + string.lower(str[0])
+ str = str[1:]
+ while (str != "") and (str[0] >= 'a') and (str[0] <= 'z'):
+ cur = cur + str[0]
+ str = str[1:]
+ while (str != "") and (str[0] >= '0') and (str[0] <= '9'):
+ str = str[1:]
+ ret.append(cur)
return ret
def addWord(word, module, symbol, relevance):
@@ -544,15 +544,15 @@ def addWord(word, module, symbol, relevance):
if wordsDict.has_key(word):
d = wordsDict[word]
- if d == None:
- return 0
- if len(d) > 500:
- wordsDict[word] = None
- return 0
- try:
- relevance = relevance + d[(module, symbol)]
- except:
- pass
+ if d == None:
+ return 0
+ if len(d) > 500:
+ wordsDict[word] = None
+ return 0
+ try:
+ relevance = relevance + d[(module, symbol)]
+ except:
+ pass
else:
wordsDict[word] = {}
wordsDict[word][(module, symbol)] = relevance
@@ -565,8 +565,8 @@ def addString(str, module, symbol, relevance):
str = cleanupWordsString(str)
l = string.split(str)
for word in l:
- if len(word) > 2:
- ret = ret + addWord(word, module, symbol, 5)
+ if len(word) > 2:
+ ret = ret + addWord(word, module, symbol, 5)
return ret
@@ -586,18 +586,18 @@ def addWordHTML(word, resource, id, section, relevance):
if wordsDictHTML.has_key(word):
d = wordsDictHTML[word]
- if d == None:
- print "skipped %s" % (word)
- return 0
- try:
- (r,i,s) = d[resource]
- if i != None:
- id = i
- if s != None:
- section = s
- relevance = relevance + r
- except:
- pass
+ if d == None:
+ print "skipped %s" % (word)
+ return 0
+ try:
+ (r,i,s) = d[resource]
+ if i != None:
+ id = i
+ if s != None:
+ section = s
+ relevance = relevance + r
+ except:
+ pass
else:
wordsDictHTML[word] = {}
d = wordsDictHTML[word];
@@ -611,15 +611,15 @@ def addStringHTML(str, resource, id, section, relevance):
str = cleanupWordsString(str)
l = string.split(str)
for word in l:
- if len(word) > 2:
- try:
- r = addWordHTML(word, resource, id, section, relevance)
- if r < 0:
- print "addWordHTML failed: %s %s" % (word, resource)
- ret = ret + r
- except:
- print "addWordHTML failed: %s %s %d" % (word, resource, relevance)
- print sys.exc_type, sys.exc_value
+ if len(word) > 2:
+ try:
+ r = addWordHTML(word, resource, id, section, relevance)
+ if r < 0:
+ print "addWordHTML failed: %s %s" % (word, resource)
+ ret = ret + r
+ except:
+ print "addWordHTML failed: %s %s %d" % (word, resource, relevance)
+ print sys.exc_type, sys.exc_value
return ret
@@ -637,14 +637,14 @@ def addWordArchive(word, id, relevance):
if wordsDictArchive.has_key(word):
d = wordsDictArchive[word]
- if d == None:
- print "skipped %s" % (word)
- return 0
- try:
- r = d[id]
- relevance = relevance + r
- except:
- pass
+ if d == None:
+ print "skipped %s" % (word)
+ return 0
+ try:
+ r = d[id]
+ relevance = relevance + r
+ except:
+ pass
else:
wordsDictArchive[word] = {}
d = wordsDictArchive[word];
@@ -659,22 +659,22 @@ def addStringArchive(str, id, relevance):
l = string.split(str)
for word in l:
i = len(word)
- if i > 2:
- try:
- r = addWordArchive(word, id, relevance)
- if r < 0:
- print "addWordArchive failed: %s %s" % (word, id)
- else:
- ret = ret + r
- except:
- print "addWordArchive failed: %s %s %d" % (word, id, relevance)
- print sys.exc_type, sys.exc_value
+ if i > 2:
+ try:
+ r = addWordArchive(word, id, relevance)
+ if r < 0:
+ print "addWordArchive failed: %s %s" % (word, id)
+ else:
+ ret = ret + r
+ except:
+ print "addWordArchive failed: %s %s %d" % (word, id, relevance)
+ print sys.exc_type, sys.exc_value
return ret
#########################################################################
-# #
-# XML API description analysis #
-# #
+# #
+# XML API description analysis #
+# #
#########################################################################
def loadAPI(filename):
@@ -690,7 +690,7 @@ def foundExport(file, symbol):
addFunction(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
def analyzeAPIFile(top):
@@ -699,12 +699,12 @@ def analyzeAPIFile(top):
cur = top.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "exports":
- count = count + foundExport(name, cur.prop("symbol"))
- else:
- print "unexpected element %s in API doc <file name='%s'>" % (name)
+ cur = cur.next
+ continue
+ if cur.name == "exports":
+ count = count + foundExport(name, cur.prop("symbol"))
+ else:
+ print "unexpected element %s in API doc <file name='%s'>" % (name)
cur = cur.next
return count
@@ -714,12 +714,12 @@ def analyzeAPIFiles(top):
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "file":
- count = count + analyzeAPIFile(cur)
- else:
- print "unexpected element %s in API doc <files>" % (cur.name)
+ cur = cur.next
+ continue
+ if cur.name == "file":
+ count = count + analyzeAPIFile(cur)
+ else:
+ print "unexpected element %s in API doc <files>" % (cur.name)
cur = cur.next
return count
@@ -734,7 +734,7 @@ def analyzeAPIEnum(top):
addEnum(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
@@ -749,7 +749,7 @@ def analyzeAPIConst(top):
addConst(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
@@ -764,7 +764,7 @@ def analyzeAPIType(top):
addType(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
def analyzeAPIFunctype(top):
@@ -778,7 +778,7 @@ def analyzeAPIFunctype(top):
addFunctype(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
def analyzeAPIStruct(top):
@@ -792,16 +792,16 @@ def analyzeAPIStruct(top):
addStruct(symbol, file)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
info = top.prop("info")
if info != None:
- info = string.replace(info, "'", " ")
- info = string.strip(info)
- l = string.split(info)
- for word in l:
- if len(word) > 2:
- addWord(word, file, symbol, 5)
+ info = string.replace(info, "'", " ")
+ info = string.strip(info)
+ l = string.split(info)
+ for word in l:
+ if len(word) > 2:
+ addWord(word, file, symbol, 5)
return 1
def analyzeAPIMacro(top):
@@ -818,19 +818,19 @@ def analyzeAPIMacro(top):
cur = top.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "info":
- info = cur.content
- break
+ cur = cur.next
+ continue
+ if cur.name == "info":
+ info = cur.content
+ break
cur = cur.next
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
if info == None:
- addMacro(symbol, file)
+ addMacro(symbol, file)
print "Macro %s description has no <info>" % (symbol)
return 0
@@ -839,8 +839,8 @@ def analyzeAPIMacro(top):
addMacro(symbol, file, info)
l = string.split(info)
for word in l:
- if len(word) > 2:
- addWord(word, file, symbol, 5)
+ if len(word) > 2:
+ addWord(word, file, symbol, 5)
return 1
def analyzeAPIFunction(top):
@@ -857,40 +857,40 @@ def analyzeAPIFunction(top):
cur = top.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "info":
- info = cur.content
- elif cur.name == "return":
- rinfo = cur.prop("info")
- if rinfo != None:
- rinfo = string.replace(rinfo, "'", " ")
- rinfo = string.strip(rinfo)
- addString(rinfo, file, symbol, 7)
- elif cur.name == "arg":
- ainfo = cur.prop("info")
- if ainfo != None:
- ainfo = string.replace(ainfo, "'", " ")
- ainfo = string.strip(ainfo)
- addString(ainfo, file, symbol, 5)
- name = cur.prop("name")
- if name != None:
- name = string.replace(name, "'", " ")
- name = string.strip(name)
- addWord(name, file, symbol, 7)
+ cur = cur.next
+ continue
+ if cur.name == "info":
+ info = cur.content
+ elif cur.name == "return":
+ rinfo = cur.prop("info")
+ if rinfo != None:
+ rinfo = string.replace(rinfo, "'", " ")
+ rinfo = string.strip(rinfo)
+ addString(rinfo, file, symbol, 7)
+ elif cur.name == "arg":
+ ainfo = cur.prop("info")
+ if ainfo != None:
+ ainfo = string.replace(ainfo, "'", " ")
+ ainfo = string.strip(ainfo)
+ addString(ainfo, file, symbol, 5)
+ name = cur.prop("name")
+ if name != None:
+ name = string.replace(name, "'", " ")
+ name = string.strip(name)
+ addWord(name, file, symbol, 7)
cur = cur.next
if info == None:
print "Function %s description has no <info>" % (symbol)
- addFunction(symbol, file, "")
+ addFunction(symbol, file, "")
else:
info = string.replace(info, "'", " ")
- info = string.strip(info)
- addFunction(symbol, file, info)
+ info = string.strip(info)
+ addFunction(symbol, file, info)
addString(info, file, symbol, 5)
l = splitIdentifier(symbol)
for word in l:
- addWord(word, file, symbol, 10)
+ addWord(word, file, symbol, 10)
return 1
@@ -900,24 +900,24 @@ def analyzeAPISymbols(top):
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "macro":
- count = count + analyzeAPIMacro(cur)
- elif cur.name == "function":
- count = count + analyzeAPIFunction(cur)
- elif cur.name == "const":
- count = count + analyzeAPIConst(cur)
- elif cur.name == "typedef":
- count = count + analyzeAPIType(cur)
- elif cur.name == "struct":
- count = count + analyzeAPIStruct(cur)
- elif cur.name == "enum":
- count = count + analyzeAPIEnum(cur)
- elif cur.name == "functype":
- count = count + analyzeAPIFunctype(cur)
- else:
- print "unexpected element %s in API doc <files>" % (cur.name)
+ cur = cur.next
+ continue
+ if cur.name == "macro":
+ count = count + analyzeAPIMacro(cur)
+ elif cur.name == "function":
+ count = count + analyzeAPIFunction(cur)
+ elif cur.name == "const":
+ count = count + analyzeAPIConst(cur)
+ elif cur.name == "typedef":
+ count = count + analyzeAPIType(cur)
+ elif cur.name == "struct":
+ count = count + analyzeAPIStruct(cur)
+ elif cur.name == "enum":
+ count = count + analyzeAPIEnum(cur)
+ elif cur.name == "functype":
+ count = count + analyzeAPIFunctype(cur)
+ else:
+ print "unexpected element %s in API doc <files>" % (cur.name)
cur = cur.next
return count
@@ -932,22 +932,22 @@ def analyzeAPI(doc):
cur = root.children
while cur != None:
if cur.type == 'text':
- cur = cur.next
- continue
- if cur.name == "files":
- pass
-# count = count + analyzeAPIFiles(cur)
- elif cur.name == "symbols":
- count = count + analyzeAPISymbols(cur)
- else:
- print "unexpected element %s in API doc" % (cur.name)
+ cur = cur.next
+ continue
+ if cur.name == "files":
+ pass
+# count = count + analyzeAPIFiles(cur)
+ elif cur.name == "symbols":
+ count = count + analyzeAPISymbols(cur)
+ else:
+ print "unexpected element %s in API doc" % (cur.name)
cur = cur.next
return count
#########################################################################
-# #
-# Web pages parsing and analysis #
-# #
+# #
+# Web pages parsing and analysis #
+# #
#########################################################################
import glob
@@ -955,8 +955,8 @@ import glob
def analyzeHTMLText(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -964,8 +964,8 @@ def analyzeHTMLText(doc, resource, p, section, id):
def analyzeHTMLPara(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -973,8 +973,8 @@ def analyzeHTMLPara(doc, resource, p, section, id):
def analyzeHTMLPre(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -982,8 +982,8 @@ def analyzeHTMLPre(doc, resource, p, section, id):
def analyzeHTML(doc, resource, p, section, id):
words = 0
try:
- content = p.content
- words = words + addStringHTML(content, resource, id, section, 5)
+ content = p.content
+ words = words + addStringHTML(content, resource, id, section, 5)
except:
return -1
return words
@@ -992,36 +992,36 @@ def analyzeHTML(doc, resource):
para = 0;
ctxt = doc.xpathNewContext()
try:
- res = ctxt.xpathEval("//head/title")
- title = res[0].content
+ res = ctxt.xpathEval("//head/title")
+ title = res[0].content
except:
title = "Page %s" % (resource)
addPage(resource, title)
try:
- items = ctxt.xpathEval("//h1 | //h2 | //h3 | //text()")
- section = title
- id = ""
- for item in items:
- if item.name == 'h1' or item.name == 'h2' or item.name == 'h3':
- section = item.content
- if item.prop("id"):
- id = item.prop("id")
- elif item.prop("name"):
- id = item.prop("name")
- elif item.type == 'text':
- analyzeHTMLText(doc, resource, item, section, id)
- para = para + 1
- elif item.name == 'p':
- analyzeHTMLPara(doc, resource, item, section, id)
- para = para + 1
- elif item.name == 'pre':
- analyzeHTMLPre(doc, resource, item, section, id)
- para = para + 1
- else:
- print "Page %s, unexpected %s element" % (resource, item.name)
+ items = ctxt.xpathEval("//h1 | //h2 | //h3 | //text()")
+ section = title
+ id = ""
+ for item in items:
+ if item.name == 'h1' or item.name == 'h2' or item.name == 'h3':
+ section = item.content
+ if item.prop("id"):
+ id = item.prop("id")
+ elif item.prop("name"):
+ id = item.prop("name")
+ elif item.type == 'text':
+ analyzeHTMLText(doc, resource, item, section, id)
+ para = para + 1
+ elif item.name == 'p':
+ analyzeHTMLPara(doc, resource, item, section, id)
+ para = para + 1
+ elif item.name == 'pre':
+ analyzeHTMLPre(doc, resource, item, section, id)
+ para = para + 1
+ else:
+ print "Page %s, unexpected %s element" % (resource, item.name)
except:
print "Page %s: problem analyzing" % (resource)
- print sys.exc_type, sys.exc_value
+ print sys.exc_type, sys.exc_value
return para
@@ -1029,35 +1029,35 @@ def analyzeHTMLPages():
ret = 0
HTMLfiles = glob.glob("*.html") + glob.glob("tutorial/*.html") + \
glob.glob("CIM/*.html") + glob.glob("ocaml/*.html") + \
- glob.glob("ruby/*.html")
+ glob.glob("ruby/*.html")
for html in HTMLfiles:
- if html[0:3] == "API":
- continue
- if html == "xml.html":
- continue
- try:
- doc = libxml2.parseFile(html)
- except:
- doc = libxml2.htmlParseFile(html, None)
- try:
- res = analyzeHTML(doc, html)
- print "Parsed %s : %d paragraphs" % (html, res)
- ret = ret + 1
- except:
- print "could not parse %s" % (html)
+ if html[0:3] == "API":
+ continue
+ if html == "xml.html":
+ continue
+ try:
+ doc = libxml2.parseFile(html)
+ except:
+ doc = libxml2.htmlParseFile(html, None)
+ try:
+ res = analyzeHTML(doc, html)
+ print "Parsed %s : %d paragraphs" % (html, res)
+ ret = ret + 1
+ except:
+ print "could not parse %s" % (html)
return ret
#########################################################################
-# #
-# Mail archives parsing and analysis #
-# #
+# #
+# Mail archives parsing and analysis #
+# #
#########################################################################
import time
def getXMLDateArchive(t = None):
if t == None:
- t = time.time()
+ t = time.time()
T = time.gmtime(t)
month = time.strftime("%B", T)
year = T[0]
@@ -1073,9 +1073,9 @@ def scanXMLMsgArchive(url, title, force = 0):
return 0
if ID == -1:
- ID = addXMLMsgArchive(url, title)
- if ID == -1:
- return 0
+ ID = addXMLMsgArchive(url, title)
+ if ID == -1:
+ return 0
try:
print "Loading %s" % (url)
@@ -1084,7 +1084,7 @@ def scanXMLMsgArchive(url, title, force = 0):
doc = None
if doc == None:
print "Failed to parse %s" % (url)
- return 0
+ return 0
addStringArchive(title, ID, 20)
ctxt = doc.xpathNewContext()
@@ -1102,41 +1102,41 @@ def scanXMLDateArchive(t = None, force = 0):
url = getXMLDateArchive(t)
print "loading %s" % (url)
try:
- doc = libxml2.htmlParseFile(url, None);
+ doc = libxml2.htmlParseFile(url, None);
except:
doc = None
if doc == None:
print "Failed to parse %s" % (url)
- return -1
+ return -1
ctxt = doc.xpathNewContext()
anchors = ctxt.xpathEval("//a[@href]")
links = 0
newmsg = 0
for anchor in anchors:
- href = anchor.prop("href")
- if href == None or href[0:3] != "msg":
- continue
+ href = anchor.prop("href")
+ if href == None or href[0:3] != "msg":
+ continue
try:
- links = links + 1
+ links = links + 1
- msg = libxml2.buildURI(href, url)
- title = anchor.content
- if title != None and title[0:4] == 'Re: ':
- title = title[4:]
- if title != None and title[0:6] == '[xml] ':
- title = title[6:]
- newmsg = newmsg + scanXMLMsgArchive(msg, title, force)
+ msg = libxml2.buildURI(href, url)
+ title = anchor.content
+ if title != None and title[0:4] == 'Re: ':
+ title = title[4:]
+ if title != None and title[0:6] == '[xml] ':
+ title = title[6:]
+ newmsg = newmsg + scanXMLMsgArchive(msg, title, force)
- except:
- pass
+ except:
+ pass
return newmsg
#########################################################################
-# #
-# Main code: open the DB, the API XML and analyze it #
-# #
+# #
+# Main code: open the DB, the API XML and analyze it #
+# #
#########################################################################
def analyzeArchives(t = None, force = 0):
global wordsDictArchive
@@ -1147,14 +1147,14 @@ def analyzeArchives(t = None, force = 0):
i = 0
skipped = 0
for word in wordsDictArchive.keys():
- refs = wordsDictArchive[word]
- if refs == None:
- skipped = skipped + 1
- continue;
- for id in refs.keys():
- relevance = refs[id]
- updateWordArchive(word, id, relevance)
- i = i + 1
+ refs = wordsDictArchive[word]
+ if refs == None:
+ skipped = skipped + 1
+ continue;
+ for id in refs.keys():
+ relevance = refs[id]
+ updateWordArchive(word, id, relevance)
+ i = i + 1
print "Found %d associations in HTML pages" % (i)
@@ -1167,14 +1167,14 @@ def analyzeHTMLTop():
i = 0
skipped = 0
for word in wordsDictHTML.keys():
- refs = wordsDictHTML[word]
- if refs == None:
- skipped = skipped + 1
- continue;
- for resource in refs.keys():
- (relevance, id, section) = refs[resource]
- updateWordHTML(word, resource, section, id, relevance)
- i = i + 1
+ refs = wordsDictHTML[word]
+ if refs == None:
+ skipped = skipped + 1
+ continue;
+ for resource in refs.keys():
+ (relevance, id, section) = refs[resource]
+ updateWordHTML(word, resource, section, id, relevance)
+ i = i + 1
print "Found %d associations in HTML pages" % (i)
@@ -1183,26 +1183,26 @@ def analyzeAPITop():
global API
try:
- doc = loadAPI(API)
- ret = analyzeAPI(doc)
- print "Analyzed %d blocs" % (ret)
- doc.freeDoc()
+ doc = loadAPI(API)
+ ret = analyzeAPI(doc)
+ print "Analyzed %d blocs" % (ret)
+ doc.freeDoc()
except:
- print "Failed to parse and analyze %s" % (API)
- print sys.exc_type, sys.exc_value
- sys.exit(1)
+ print "Failed to parse and analyze %s" % (API)
+ print sys.exc_type, sys.exc_value
+ sys.exit(1)
print "Indexed %d words" % (len(wordsDict))
i = 0
skipped = 0
for word in wordsDict.keys():
- refs = wordsDict[word]
- if refs == None:
- skipped = skipped + 1
- continue;
- for (module, symbol) in refs.keys():
- updateWord(word, symbol, refs[(module, symbol)])
- i = i + 1
+ refs = wordsDict[word]
+ if refs == None:
+ skipped = skipped + 1
+ continue;
+ for (module, symbol) in refs.keys():
+ updateWord(word, symbol, refs[(module, symbol)])
+ i = i + 1
print "Found %d associations, skipped %d words" % (i, skipped)
@@ -1212,53 +1212,53 @@ def usage():
def main():
try:
- openMySQL()
+ openMySQL()
except:
- print "Failed to open the database"
- print sys.exc_type, sys.exc_value
- sys.exit(1)
+ print "Failed to open the database"
+ print sys.exc_type, sys.exc_value
+ sys.exit(1)
args = sys.argv[1:]
force = 0
if args:
i = 0
- while i < len(args):
- if args[i] == '--force':
- force = 1
- elif args[i] == '--archive':
- analyzeArchives(None, force)
- elif args[i] == '--archive-year':
- i = i + 1;
- year = args[i]
- months = ["January" , "February", "March", "April", "May",
- "June", "July", "August", "September", "October",
- "November", "December"];
- for month in months:
- try:
- str = "%s-%s" % (year, month)
- T = time.strptime(str, "%Y-%B")
- t = time.mktime(T) + 3600 * 24 * 10;
- analyzeArchives(t, force)
- except:
- print "Failed to index month archive:"
- print sys.exc_type, sys.exc_value
- elif args[i] == '--archive-month':
- i = i + 1;
- month = args[i]
- try:
- T = time.strptime(month, "%Y-%B")
- t = time.mktime(T) + 3600 * 24 * 10;
- analyzeArchives(t, force)
- except:
- print "Failed to index month archive:"
- print sys.exc_type, sys.exc_value
- elif args[i] == '--API':
- analyzeAPITop()
- elif args[i] == '--docs':
- analyzeHTMLTop()
- else:
- usage()
- i = i + 1
+ while i < len(args):
+ if args[i] == '--force':
+ force = 1
+ elif args[i] == '--archive':
+ analyzeArchives(None, force)
+ elif args[i] == '--archive-year':
+ i = i + 1;
+ year = args[i]
+ months = ["January" , "February", "March", "April", "May",
+ "June", "July", "August", "September", "October",
+ "November", "December"];
+ for month in months:
+ try:
+ str = "%s-%s" % (year, month)
+ T = time.strptime(str, "%Y-%B")
+ t = time.mktime(T) + 3600 * 24 * 10;
+ analyzeArchives(t, force)
+ except:
+ print "Failed to index month archive:"
+ print sys.exc_type, sys.exc_value
+ elif args[i] == '--archive-month':
+ i = i + 1;
+ month = args[i]
+ try:
+ T = time.strptime(month, "%Y-%B")
+ t = time.mktime(T) + 3600 * 24 * 10;
+ analyzeArchives(t, force)
+ except:
+ print "Failed to index month archive:"
+ print sys.exc_type, sys.exc_value
+ elif args[i] == '--API':
+ analyzeAPITop()
+ elif args[i] == '--docs':
+ analyzeHTMLTop()
+ else:
+ usage()
+ i = i + 1
else:
usage()
diff --git a/python/generator.py b/python/generator.py
index 15751bd..ee9dfe4 100755
--- a/python/generator.py
+++ b/python/generator.py
@@ -90,8 +90,8 @@ class docParser(xml.sax.handler.ContentHandler):
self.function_arg_info = None
if attrs.has_key('name'):
self.function_arg_name = attrs['name']
- if self.function_arg_name == 'from':
- self.function_arg_name = 'frm'
+ if self.function_arg_name == 'from':
+ self.function_arg_name = 'frm'
if attrs.has_key('type'):
self.function_arg_type = attrs['type']
if attrs.has_key('info'):
@@ -409,8 +409,8 @@ def print_function_wrapper(name, output, export, include):
if name in skip_function:
return 0
if name in skip_impl:
- # Don't delete the function entry in the caller.
- return 1
+ # Don't delete the function entry in the caller.
+ return 1
c_call = "";
format=""
@@ -426,8 +426,8 @@ def print_function_wrapper(name, output, export, include):
c_args = c_args + " %s %s;\n" % (arg[1], arg[0])
if py_types.has_key(arg[1]):
(f, t, n, c) = py_types[arg[1]]
- if (f == 'z') and (name in foreign_encoding_args) and (num_bufs == 0):
- f = 't#'
+ if (f == 'z') and (name in foreign_encoding_args) and (num_bufs == 0):
+ f = 't#'
if f != None:
format = format + f
if t != None:
@@ -438,10 +438,10 @@ def print_function_wrapper(name, output, export, include):
arg[1], t, arg[0]);
else:
format_args = format_args + ", &%s" % (arg[0])
- if f == 't#':
- format_args = format_args + ", &py_buffsize%d" % num_bufs
- c_args = c_args + " int py_buffsize%d;\n" % num_bufs
- num_bufs = num_bufs + 1
+ if f == 't#':
+ format_args = format_args + ", &py_buffsize%d" % num_bufs
+ c_args = c_args + " int py_buffsize%d;\n" % num_bufs
+ num_bufs = num_bufs + 1
if c_call != "":
c_call = c_call + ", ";
c_call = c_call + "%s" % (arg[0])
@@ -459,14 +459,14 @@ def print_function_wrapper(name, output, export, include):
if ret[0] == 'void':
if file == "python_accessor":
- if args[1][1] == "char *":
- c_call = "\n free(%s->%s);\n" % (
- args[0][0], args[1][0], args[0][0], args[1][0])
- c_call = c_call + " %s->%s = (%s)strdup((const xmlChar *)%s);\n" % (args[0][0],
- args[1][0], args[1][1], args[1][0])
- else:
- c_call = "\n %s->%s = %s;\n" % (args[0][0], args[1][0],
- args[1][0])
+ if args[1][1] == "char *":
+ c_call = "\n free(%s->%s);\n" % (
+ args[0][0], args[1][0], args[0][0], args[1][0])
+ c_call = c_call + " %s->%s = (%s)strdup((const xmlChar *)%s);\n" % (args[0][0],
+ args[1][0], args[1][1], args[1][0])
+ else:
+ c_call = "\n %s->%s = %s;\n" % (args[0][0], args[1][0],
+ args[1][0])
else:
c_call = "\n %s(%s);\n" % (name, c_call);
ret_convert = " Py_INCREF(Py_None);\n return(Py_None);\n"
@@ -508,24 +508,24 @@ def print_function_wrapper(name, output, export, include):
if file == "python":
# Those have been manually generated
- if cond != None and cond != "":
- include.write("#endif\n");
- export.write("#endif\n");
- output.write("#endif\n");
+ if cond != None and cond != "":
+ include.write("#endif\n");
+ export.write("#endif\n");
+ output.write("#endif\n");
return 1
if file == "python_accessor" and ret[0] != "void" and ret[2] is None:
# Those have been manually generated
- if cond != None and cond != "":
- include.write("#endif\n");
- export.write("#endif\n");
- output.write("#endif\n");
+ if cond != None and cond != "":
+ include.write("#endif\n");
+ export.write("#endif\n");
+ output.write("#endif\n");
return 1
output.write("PyObject *\n")
output.write("libvirt_%s(PyObject *self ATTRIBUTE_UNUSED," % (name))
output.write(" PyObject *args")
if format == "":
- output.write(" ATTRIBUTE_UNUSED")
+ output.write(" ATTRIBUTE_UNUSED")
output.write(") {\n")
if ret[0] != 'void':
output.write(" PyObject *py_retval;\n")
@@ -557,38 +557,38 @@ def buildStubs():
global unknown_types
try:
- f = open(os.path.join(srcPref,"libvirt-api.xml"))
- data = f.read()
- (parser, target) = getparser()
- parser.feed(data)
- parser.close()
+ f = open(os.path.join(srcPref,"libvirt-api.xml"))
+ data = f.read()
+ (parser, target) = getparser()
+ parser.feed(data)
+ parser.close()
except IOError, msg:
- try:
- f = open(os.path.join(srcPref,"..","docs","libvirt-api.xml"))
- data = f.read()
- (parser, target) = getparser()
- parser.feed(data)
- parser.close()
- except IOError, msg:
- print file, ":", msg
- sys.exit(1)
+ try:
+ f = open(os.path.join(srcPref,"..","docs","libvirt-api.xml"))
+ data = f.read()
+ (parser, target) = getparser()
+ parser.feed(data)
+ parser.close()
+ except IOError, msg:
+ print file, ":", msg
+ sys.exit(1)
n = len(functions.keys())
print "Found %d functions in libvirt-api.xml" % (n)
py_types['pythonObject'] = ('O', "pythonObject", "pythonObject", "pythonObject")
try:
- f = open(os.path.join(srcPref,"libvirt-override-api.xml"))
- data = f.read()
- (parser, target) = getparser()
- parser.feed(data)
- parser.close()
+ f = open(os.path.join(srcPref,"libvirt-override-api.xml"))
+ data = f.read()
+ (parser, target) = getparser()
+ parser.feed(data)
+ parser.close()
except IOError, msg:
- print file, ":", msg
+ print file, ":", msg
print "Found %d functions in libvirt-override-api.xml" % (
- len(functions.keys()) - n)
+ len(functions.keys()) - n)
nb_wrap = 0
failed = 0
skipped = 0
@@ -604,17 +604,17 @@ def buildStubs():
wrapper.write("#include \"typewrappers.h\"\n")
wrapper.write("#include \"libvirt.h\"\n\n")
for function in functions.keys():
- ret = print_function_wrapper(function, wrapper, export, include)
- if ret < 0:
- failed = failed + 1
- functions_failed.append(function)
- del functions[function]
- if ret == 0:
- skipped = skipped + 1
- functions_skipped.append(function)
- del functions[function]
- if ret == 1:
- nb_wrap = nb_wrap + 1
+ ret = print_function_wrapper(function, wrapper, export, include)
+ if ret < 0:
+ failed = failed + 1
+ functions_failed.append(function)
+ del functions[function]
+ if ret == 0:
+ skipped = skipped + 1
+ functions_skipped.append(function)
+ del functions[function]
+ if ret == 1:
+ nb_wrap = nb_wrap + 1
include.close()
export.close()
wrapper.close()
@@ -623,7 +623,7 @@ def buildStubs():
print "Missing type converters: "
for type in unknown_types.keys():
- print "%s:%d " % (type, len(unknown_types[type])),
+ print "%s:%d " % (type, len(unknown_types[type])),
print
for f in functions_failed:
@@ -955,7 +955,7 @@ def buildWrappers():
global functions_noexcept
for type in classes_type.keys():
- function_classes[classes_type[type][2]] = []
+ function_classes[classes_type[type][2]] = []
#
# Build the list of C types to look for ordered to start
@@ -966,46 +966,46 @@ def buildWrappers():
ctypes_processed = {}
classes_processed = {}
for classe in primary_classes:
- classes_list.append(classe)
- classes_processed[classe] = ()
- for type in classes_type.keys():
- tinfo = classes_type[type]
- if tinfo[2] == classe:
- ctypes.append(type)
- ctypes_processed[type] = ()
+ classes_list.append(classe)
+ classes_processed[classe] = ()
+ for type in classes_type.keys():
+ tinfo = classes_type[type]
+ if tinfo[2] == classe:
+ ctypes.append(type)
+ ctypes_processed[type] = ()
for type in classes_type.keys():
- if ctypes_processed.has_key(type):
- continue
- tinfo = classes_type[type]
- if not classes_processed.has_key(tinfo[2]):
- classes_list.append(tinfo[2])
- classes_processed[tinfo[2]] = ()
+ if ctypes_processed.has_key(type):
+ continue
+ tinfo = classes_type[type]
+ if not classes_processed.has_key(tinfo[2]):
+ classes_list.append(tinfo[2])
+ classes_processed[tinfo[2]] = ()
- ctypes.append(type)
- ctypes_processed[type] = ()
+ ctypes.append(type)
+ ctypes_processed[type] = ()
for name in functions.keys():
- found = 0;
- (desc, ret, args, file, cond) = functions[name]
- for type in ctypes:
- classe = classes_type[type][2]
-
- if name[0:3] == "vir" and len(args) >= 1 and args[0][1] == type:
- found = 1
- func = nameFixup(name, classe, type, file)
- info = (0, func, name, ret, args, file)
- function_classes[classe].append(info)
- elif name[0:3] == "vir" and len(args) >= 2 and args[1][1] == type \
- and file != "python_accessor" and not name in function_skip_index_one:
- found = 1
- func = nameFixup(name, classe, type, file)
- info = (1, func, name, ret, args, file)
- function_classes[classe].append(info)
- if found == 1:
- continue
- func = nameFixup(name, "None", file, file)
- info = (0, func, name, ret, args, file)
- function_classes['None'].append(info)
+ found = 0;
+ (desc, ret, args, file, cond) = functions[name]
+ for type in ctypes:
+ classe = classes_type[type][2]
+
+ if name[0:3] == "vir" and len(args) >= 1 and args[0][1] == type:
+ found = 1
+ func = nameFixup(name, classe, type, file)
+ info = (0, func, name, ret, args, file)
+ function_classes[classe].append(info)
+ elif name[0:3] == "vir" and len(args) >= 2 and args[1][1] == type \
+ and file != "python_accessor" and not name in function_skip_index_one:
+ found = 1
+ func = nameFixup(name, classe, type, file)
+ info = (1, func, name, ret, args, file)
+ function_classes[classe].append(info)
+ if found == 1:
+ continue
+ func = nameFixup(name, "None", file, file)
+ info = (0, func, name, ret, args, file)
+ function_classes['None'].append(info)
classes = open("libvirt.py", "w")
@@ -1032,60 +1032,60 @@ def buildWrappers():
extra.close()
if function_classes.has_key("None"):
- flist = function_classes["None"]
- flist.sort(functionCompare)
- oldfile = ""
- for info in flist:
- (index, func, name, ret, args, file) = info
- if file != oldfile:
- classes.write("#\n# Functions from module %s\n#\n\n" % file)
- oldfile = file
- classes.write("def %s(" % func)
- n = 0
- for arg in args:
- if n != 0:
- classes.write(", ")
- classes.write("%s" % arg[0])
- n = n + 1
- classes.write("):\n")
- writeDoc(name, args, ' ', classes);
-
- for arg in args:
- if classes_type.has_key(arg[1]):
- classes.write(" if %s is None: %s__o = None\n" %
- (arg[0], arg[0]))
- classes.write(" else: %s__o = %s%s\n" %
- (arg[0], arg[0], classes_type[arg[1]][0]))
- if ret[0] != "void":
- classes.write(" ret = ");
- else:
- classes.write(" ");
- classes.write("libvirtmod.%s(" % name)
- n = 0
- for arg in args:
- if n != 0:
- classes.write(", ");
- classes.write("%s" % arg[0])
- if classes_type.has_key(arg[1]):
- classes.write("__o");
- n = n + 1
- classes.write(")\n");
+ flist = function_classes["None"]
+ flist.sort(functionCompare)
+ oldfile = ""
+ for info in flist:
+ (index, func, name, ret, args, file) = info
+ if file != oldfile:
+ classes.write("#\n# Functions from module %s\n#\n\n" % file)
+ oldfile = file
+ classes.write("def %s(" % func)
+ n = 0
+ for arg in args:
+ if n != 0:
+ classes.write(", ")
+ classes.write("%s" % arg[0])
+ n = n + 1
+ classes.write("):\n")
+ writeDoc(name, args, ' ', classes);
+
+ for arg in args:
+ if classes_type.has_key(arg[1]):
+ classes.write(" if %s is None: %s__o = None\n" %
+ (arg[0], arg[0]))
+ classes.write(" else: %s__o = %s%s\n" %
+ (arg[0], arg[0], classes_type[arg[1]][0]))
+ if ret[0] != "void":
+ classes.write(" ret = ");
+ else:
+ classes.write(" ");
+ classes.write("libvirtmod.%s(" % name)
+ n = 0
+ for arg in args:
+ if n != 0:
+ classes.write(", ");
+ classes.write("%s" % arg[0])
+ if classes_type.has_key(arg[1]):
+ classes.write("__o");
+ n = n + 1
+ classes.write(")\n");
if ret[0] != "void":
- if classes_type.has_key(ret[0]):
- #
- # Raise an exception
- #
- if functions_noexcept.has_key(name):
- classes.write(" if ret is None:return None\n");
- else:
- classes.write(
- " if ret is None:raise libvirtError('%s() failed')\n" %
- (name))
-
- classes.write(" return ");
- classes.write(classes_type[ret[0]][1] % ("ret"));
- classes.write("\n");
+ if classes_type.has_key(ret[0]):
+ #
+ # Raise an exception
+ #
+ if functions_noexcept.has_key(name):
+ classes.write(" if ret is None:return None\n");
+ else:
+ classes.write(
+ " if ret is None:raise libvirtError('%s() failed')\n" %
+ (name))
+
+ classes.write(" return ");
+ classes.write(classes_type[ret[0]][1] % ("ret"));
+ classes.write("\n");
# For functions returning an integral type there are
# several things that we can do, depending on the
@@ -1099,7 +1099,7 @@ def buildWrappers():
classes.write ((" if " + test +
": raise libvirtError ('%s() failed')\n") %
("ret", name))
- classes.write(" return ret\n")
+ classes.write(" return ret\n")
elif is_list_type (ret[0]):
if not functions_noexcept.has_key (name):
@@ -1110,30 +1110,30 @@ def buildWrappers():
classes.write ((" if " + test +
": raise libvirtError ('%s() failed')\n") %
("ret", name))
- classes.write(" return ret\n")
+ classes.write(" return ret\n")
- else:
- classes.write(" return ret\n")
+ else:
+ classes.write(" return ret\n")
- classes.write("\n");
+ classes.write("\n");
for classname in classes_list:
- if classname == "None":
- pass
- else:
- if classes_ancestor.has_key(classname):
- classes.write("class %s(%s):\n" % (classname,
- classes_ancestor[classname]))
- classes.write(" def __init__(self, _obj=None):\n")
- if reference_keepers.has_key(classname):
- rlist = reference_keepers[classname]
- for ref in rlist:
- classes.write(" self.%s = None\n" % ref[1])
- classes.write(" self._o = _obj\n")
- classes.write(" %s.__init__(self, _obj=_obj)\n\n" % (
- classes_ancestor[classname]))
- else:
- classes.write("class %s:\n" % (classname))
+ if classname == "None":
+ pass
+ else:
+ if classes_ancestor.has_key(classname):
+ classes.write("class %s(%s):\n" % (classname,
+ classes_ancestor[classname]))
+ classes.write(" def __init__(self, _obj=None):\n")
+ if reference_keepers.has_key(classname):
+ rlist = reference_keepers[classname]
+ for ref in rlist:
+ classes.write(" self.%s = None\n" % ref[1])
+ classes.write(" self._o = _obj\n")
+ classes.write(" %s.__init__(self, _obj=_obj)\n\n" % (
+ classes_ancestor[classname]))
+ else:
+ classes.write("class %s:\n" % (classname))
if classname in [ "virDomain", "virNetwork", "virInterface", "virStoragePool",
"virStorageVol", "virNodeDevice", "virSecret","virStream",
"virNWFilter" ]:
@@ -1142,10 +1142,10 @@ def buildWrappers():
classes.write(" def __init__(self, dom, _obj=None):\n")
else:
classes.write(" def __init__(self, _obj=None):\n")
- if reference_keepers.has_key(classname):
- list = reference_keepers[classname]
- for ref in list:
- classes.write(" self.%s = None\n" % ref[1])
+ if reference_keepers.has_key(classname):
+ list = reference_keepers[classname]
+ for ref in list:
+ classes.write(" self.%s = None\n" % ref[1])
if classname in [ "virDomain", "virNetwork", "virInterface",
"virNodeDevice", "virSecret", "virStream",
"virNWFilter" ]:
@@ -1156,16 +1156,16 @@ def buildWrappers():
" self._conn = conn._conn\n")
elif classname in [ "virDomainSnapshot" ]:
classes.write(" self._dom = dom\n")
- classes.write(" if _obj != None:self._o = _obj;return\n")
- classes.write(" self._o = None\n\n");
- destruct=None
- if classes_destructors.has_key(classname):
- classes.write(" def __del__(self):\n")
- classes.write(" if self._o != None:\n")
- classes.write(" libvirtmod.%s(self._o)\n" %
- classes_destructors[classname]);
- classes.write(" self._o = None\n\n");
- destruct=classes_destructors[classname]
+ classes.write(" if _obj != None:self._o = _obj;return\n")
+ classes.write(" self._o = None\n\n");
+ destruct=None
+ if classes_destructors.has_key(classname):
+ classes.write(" def __del__(self):\n")
+ classes.write(" if self._o != None:\n")
+ classes.write(" libvirtmod.%s(self._o)\n" %
+ classes_destructors[classname]);
+ classes.write(" self._o = None\n\n");
+ destruct=classes_destructors[classname]
if not class_skip_connect_impl.has_key(classname):
# Build python safe 'connect' method
@@ -1176,99 +1176,99 @@ def buildWrappers():
classes.write(" def domain(self):\n")
classes.write(" return self._dom\n\n")
- flist = function_classes[classname]
- flist.sort(functionCompare)
- oldfile = ""
- for info in flist:
- (index, func, name, ret, args, file) = info
- #
- # Do not provide as method the destructors for the class
- # to avoid double free
- #
- if name == destruct:
- continue;
- if file != oldfile:
- if file == "python_accessor":
- classes.write(" # accessors for %s\n" % (classname))
- else:
- classes.write(" #\n")
- classes.write(" # %s functions from module %s\n" % (
- classname, file))
- classes.write(" #\n\n")
- oldfile = file
- classes.write(" def %s(self" % func)
- n = 0
- for arg in args:
- if n != index:
- classes.write(", %s" % arg[0])
- n = n + 1
- classes.write("):\n")
- writeDoc(name, args, ' ', classes);
- n = 0
- for arg in args:
- if classes_type.has_key(arg[1]):
- if n != index:
- classes.write(" if %s is None: %s__o = None\n" %
- (arg[0], arg[0]))
- classes.write(" else: %s__o = %s%s\n" %
- (arg[0], arg[0], classes_type[arg[1]][0]))
- n = n + 1
- if ret[0] != "void":
- classes.write(" ret = ");
- else:
- classes.write(" ");
- classes.write("libvirtmod.%s(" % name)
- n = 0
- for arg in args:
- if n != 0:
- classes.write(", ");
- if n != index:
- classes.write("%s" % arg[0])
- if classes_type.has_key(arg[1]):
- classes.write("__o");
- else:
- classes.write("self");
- if classes_type.has_key(arg[1]):
- classes.write(classes_type[arg[1]][0])
- n = n + 1
- classes.write(")\n");
+ flist = function_classes[classname]
+ flist.sort(functionCompare)
+ oldfile = ""
+ for info in flist:
+ (index, func, name, ret, args, file) = info
+ #
+ # Do not provide as method the destructors for the class
+ # to avoid double free
+ #
+ if name == destruct:
+ continue;
+ if file != oldfile:
+ if file == "python_accessor":
+ classes.write(" # accessors for %s\n" % (classname))
+ else:
+ classes.write(" #\n")
+ classes.write(" # %s functions from module %s\n" % (
+ classname, file))
+ classes.write(" #\n\n")
+ oldfile = file
+ classes.write(" def %s(self" % func)
+ n = 0
+ for arg in args:
+ if n != index:
+ classes.write(", %s" % arg[0])
+ n = n + 1
+ classes.write("):\n")
+ writeDoc(name, args, ' ', classes);
+ n = 0
+ for arg in args:
+ if classes_type.has_key(arg[1]):
+ if n != index:
+ classes.write(" if %s is None: %s__o = None\n" %
+ (arg[0], arg[0]))
+ classes.write(" else: %s__o = %s%s\n" %
+ (arg[0], arg[0], classes_type[arg[1]][0]))
+ n = n + 1
+ if ret[0] != "void":
+ classes.write(" ret = ");
+ else:
+ classes.write(" ");
+ classes.write("libvirtmod.%s(" % name)
+ n = 0
+ for arg in args:
+ if n != 0:
+ classes.write(", ");
+ if n != index:
+ classes.write("%s" % arg[0])
+ if classes_type.has_key(arg[1]):
+ classes.write("__o");
+ else:
+ classes.write("self");
+ if classes_type.has_key(arg[1]):
+ classes.write(classes_type[arg[1]][0])
+ n = n + 1
+ classes.write(")\n");
if name == "virConnectClose":
classes.write(" self._o = None\n")
# For functions returning object types:
if ret[0] != "void":
- if classes_type.has_key(ret[0]):
- #
- # Raise an exception
- #
- if functions_noexcept.has_key(name):
- classes.write(
- " if ret is None:return None\n");
- else:
+ if classes_type.has_key(ret[0]):
+ #
+ # Raise an exception
+ #
+ if functions_noexcept.has_key(name):
+ classes.write(
+ " if ret is None:return None\n");
+ else:
if classname == "virConnect":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', conn=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', conn=self)\n" %
(name))
elif classname == "virDomain":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', dom=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', dom=self)\n" %
(name))
elif classname == "virNetwork":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
(name))
elif classname == "virInterface":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', net=self)\n" %
(name))
elif classname == "virStoragePool":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', pool=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', pool=self)\n" %
(name))
elif classname == "virStorageVol":
classes.write(
- " if ret is None:raise libvirtError('%s() failed', vol=self)\n" %
+ " if ret is None:raise libvirtError('%s() failed', vol=self)\n" %
(name))
elif classname == "virDomainSnapshot":
classes.write(
@@ -1276,54 +1276,54 @@ def buildWrappers():
(name))
else:
classes.write(
- " if ret is None:raise libvirtError('%s() failed')\n" %
+ " if ret is None:raise libvirtError('%s() failed')\n" %
(name))
- #
- # generate the returned class wrapper for the object
- #
- classes.write(" __tmp = ");
- classes.write(classes_type[ret[0]][1] % ("ret"));
- classes.write("\n");
+ #
+ # generate the returned class wrapper for the object
+ #
+ classes.write(" __tmp = ");
+ classes.write(classes_type[ret[0]][1] % ("ret"));
+ classes.write("\n");
#
- # Sometime one need to keep references of the source
- # class in the returned class object.
- # See reference_keepers for the list
- #
- tclass = classes_type[ret[0]][2]
- if reference_keepers.has_key(tclass):
- list = reference_keepers[tclass]
- for pref in list:
- if pref[0] == classname:
- classes.write(" __tmp.%s = self\n" %
- pref[1])
+ # Sometime one need to keep references of the source
+ # class in the returned class object.
+ # See reference_keepers for the list
+ #
+ tclass = classes_type[ret[0]][2]
+ if reference_keepers.has_key(tclass):
+ list = reference_keepers[tclass]
+ for pref in list:
+ if pref[0] == classname:
+ classes.write(" __tmp.%s = self\n" %
+ pref[1])
# Post-processing - just before we return.
if function_post.has_key(name):
classes.write(" %s\n" %
(function_post[name]));
- #
- # return the class
- #
- classes.write(" return __tmp\n");
- elif converter_type.has_key(ret[0]):
- #
- # Raise an exception
- #
- if functions_noexcept.has_key(name):
- classes.write(
- " if ret is None:return None");
+ #
+ # return the class
+ #
+ classes.write(" return __tmp\n");
+ elif converter_type.has_key(ret[0]):
+ #
+ # Raise an exception
+ #
+ if functions_noexcept.has_key(name):
+ classes.write(
+ " if ret is None:return None");
# Post-processing - just before we return.
if function_post.has_key(name):
classes.write(" %s\n" %
(function_post[name]));
- classes.write(" return ");
- classes.write(converter_type[ret[0]] % ("ret"));
- classes.write("\n");
+ classes.write(" return ");
+ classes.write(converter_type[ret[0]] % ("ret"));
+ classes.write("\n");
# For functions returning an integral type there
# are several things that we can do, depending on
@@ -1412,15 +1412,15 @@ def buildWrappers():
classes.write (" return ret\n")
- else:
+ else:
# Post-processing - just before we return.
if function_post.has_key(name):
classes.write(" %s\n" %
(function_post[name]));
- classes.write(" return ret\n");
+ classes.write(" return ret\n");
- classes.write("\n");
+ classes.write("\n");
# Append "<classname>.py" to class def, iff it exists
try:
extra = open(os.path.join(srcPref,"libvirt-override-" + classname + ".py"), "r")
diff --git a/python/tests/create.py b/python/tests/create.py
index d2c434e..4e5f644 100755
--- a/python/tests/create.py
+++ b/python/tests/create.py
@@ -23,7 +23,7 @@ osroot = None
for root in osroots:
if os.access(root, os.R_OK):
osroot = root
- break
+ break
if osroot == None:
print "Could not find a guest OS root, edit to add the path in osroots"
@@ -124,11 +124,11 @@ while i < 30:
time.sleep(1)
i = i + 1
try:
- t = dom.info()[4]
+ t = dom.info()[4]
except:
okay = 0
- t = -1
- break;
+ t = -1
+ break;
if t == 0:
break
--
1.7.4.1
[libvirt] [PATCH v2] virsh: freecell --all getting wrong NUMA nodes count
by Michal Privoznik
Virsh freecell --all was not only getting the wrong NUMA node count, it
was also using the wrong NUMA node IDs. They do not have to be
contiguous, as I found out while testing this. Hence this patch also
adjusts the nodeGetCellsFreeMemory() error message.
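[Editor's note: for illustration only, not part of the patch. The same
per-cell query pattern can be sketched with the libvirt Python bindings,
assuming they expose getCapabilities() and getCellsFreeMemory(); the
XPath and the divide-by-1024 kB output mirror what the virsh change
below does.]

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.open(None)        # connect to the default hypervisor
    caps = conn.getCapabilities()    # capabilities XML, including NUMA topology

    # Cell ids live at /capabilities/host/topology/cells/cell, the same path
    # the patch queries with virXPathNodeSet(); they need not be contiguous.
    cells = ET.fromstring(caps).findall("./host/topology/cells/cell")

    total = 0
    for cell in cells:
        cell_id = int(cell.get("id"))
        # Ask for exactly one cell, starting at this cell's real id.
        free = conn.getCellsFreeMemory(cell_id, 1)[0]
        print("%5d: %10d kB" % (cell_id, free // 1024))
        total += free

    print("--------------------")
    print("Total: %10d kB" % (total // 1024))
    conn.close()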
---
src/nodeinfo.c | 3 +-
tools/virsh.c | 70 ++++++++++++++++++++++++++++++++++++++++++++------------
2 files changed, 57 insertions(+), 16 deletions(-)
diff --git a/src/nodeinfo.c b/src/nodeinfo.c
index 22d53e5..ba2f793 100644
--- a/src/nodeinfo.c
+++ b/src/nodeinfo.c
@@ -468,7 +468,8 @@ nodeGetCellsFreeMemory(virConnectPtr conn ATTRIBUTE_UNUSED,
long long mem;
if (numa_node_size64(n, &mem) < 0) {
nodeReportError(VIR_ERR_INTERNAL_ERROR,
- "%s", _("Failed to query NUMA free memory"));
+ "%s: %d", _("Failed to query NUMA free memory "
+ "for node"), n);
goto cleanup;
}
freeMems[numCells++] = mem;
diff --git a/tools/virsh.c b/tools/virsh.c
index 50d5e33..165a791 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -2283,9 +2283,15 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
int ret;
int cell, cell_given;
unsigned long long memory;
- unsigned long long *nodes = NULL;
+ xmlNodePtr *nodes = NULL;
+ unsigned long nodes_cnt;
+ unsigned long *nodes_id = NULL;
+ unsigned long long *nodes_free = NULL;
int all_given;
- virNodeInfo info;
+ int i;
+ char *cap_xml = NULL;
+ xmlDocPtr xml = NULL;
+ xmlXPathContextPtr ctxt = NULL;
if (!vshConnectionUsability(ctl, ctl->conn))
@@ -2301,32 +2307,60 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
}
if (all_given) {
- if (virNodeGetInfo(ctl->conn, &info) < 0) {
- vshError(ctl, "%s", _("failed to get NUMA nodes count"));
+ cap_xml = virConnectGetCapabilities(ctl->conn);
+ if (!cap_xml) {
+ vshError(ctl, "%s", _("unable to get node capabilities"));
goto cleanup;
}
- if (!info.nodes) {
- vshError(ctl, "%s", _("no NUMA nodes present"));
+ xml = xmlReadDoc((const xmlChar *) cap_xml, "node.xml", NULL,
+ XML_PARSE_NOENT | XML_PARSE_NONET |
+ XML_PARSE_NOWARNING);
+
+ if (!xml) {
+ vshError(ctl, "%s", _("unable to get node capabilities"));
goto cleanup;
}
- if (VIR_ALLOC_N(nodes, info.nodes) < 0) {
- vshError(ctl, "%s", _("could not allocate memory"));
+ ctxt = xmlXPathNewContext(xml);
+ nodes_cnt = virXPathNodeSet("/capabilities/host/topology/cells/cell",
+ ctxt, &nodes);
+
+ if (nodes_cnt == -1) {
+ vshError(ctl, "%s", _("could not get information about "
+ "NUMA topology"));
goto cleanup;
}
- ret = virNodeGetCellsFreeMemory(ctl->conn, nodes, 0, info.nodes);
- if (ret != info.nodes) {
- vshError(ctl, "%s", _("could not get information about "
- "all NUMA nodes"));
+ if ((VIR_ALLOC_N(nodes_free, nodes_cnt) < 0) ||
+ (VIR_ALLOC_N(nodes_id, nodes_cnt) < 0)){
+ vshError(ctl, "%s", _("could not allocate memory"));
goto cleanup;
}
+ for (i = 0; i < nodes_cnt; i++) {
+ unsigned long id;
+ char *val = virXMLPropString(nodes[i], "id");
+ char *conv = NULL;
+ id = strtoul(val, &conv, 10);
+ if (*conv !='\0') {
+ vshError(ctl, "%s", _("conversion from string failed"));
+ goto cleanup;
+ }
+ nodes_id[i]=id;
+ ret = virNodeGetCellsFreeMemory(ctl->conn, &(nodes_free[i]), id, 1);
+ if (ret != 1) {
+ vshError(ctl, "%s %lu", _("failed to get free memory for "
+ "NUMA node nuber:"), id);
+                                      "NUMA node number:"), id);
+ }
+ }
+
memory = 0;
- for (cell = 0; cell < info.nodes; cell++) {
- vshPrint(ctl, "%5d: %10llu kB\n", cell, (nodes[cell]/1024));
- memory += nodes[cell];
+ for (cell = 0; cell < nodes_cnt; cell++) {
+ vshPrint(ctl, "%5lu: %10llu kB\n", nodes_id[cell],
+ (nodes_free[cell]/1024));
+ memory += nodes_free[cell];
}
vshPrintExtra(ctl, "--------------------\n");
@@ -2351,7 +2385,13 @@ cmdFreecell(vshControl *ctl, const vshCmd *cmd)
func_ret = TRUE;
cleanup:
+ xmlXPathFreeContext(ctxt);
+ if (xml)
+ xmlFreeDoc(xml);
VIR_FREE(nodes);
+ VIR_FREE(nodes_free);
+ VIR_FREE(nodes_id);
+ VIR_FREE(cap_xml);
return func_ret;
}
--
1.7.3.5
[libvirt] [PATCH] qemu: Remove redundant error reporting codes
by Osier Yang
virDomainDefParseString() already reports an error when it fails, and
the redundant error reporting code in its callers overrides that error
with less specific messages, so remove it.
* src/qemu/qemu_driver.c
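[Editor's note: a minimal Python analogy of the pattern being removed,
illustrative only and not libvirt code. Re-reporting on top of a callee
that already raises a precise error just replaces it with a vaguer one;
the preferred variant lets the callee's error propagate.]

    def parse_definition(xml):
        """Stand-in for virDomainDefParseString(): it reports a precise error itself."""
        if "<domain" not in xml:
            raise ValueError("no <domain> element found in the supplied XML")
        return {"xml": xml}

    def create_vague(xml):
        # The removed pattern: the precise error above is swallowed and
        # replaced by a generic "failed to parse XML", which is all users see.
        try:
            return parse_definition(xml)
        except ValueError:
            raise RuntimeError("failed to parse XML")

    def create(xml):
        # The kept pattern: simply let the callee's own error propagate.
        return parse_definition(xml)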
---
src/qemu/qemu_driver.c | 33 +++++++++++++--------------------
1 files changed, 13 insertions(+), 20 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ab664a0..fd8e401 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3600,8 +3600,10 @@ static virDomainPtr qemudDomainCreate(virConnectPtr conn, const char *xml,
virCheckFlags(VIR_DOMAIN_START_PAUSED, NULL);
qemuDriverLock(driver);
- if (!(def = virDomainDefParseString(driver->caps, xml,
- VIR_DOMAIN_XML_INACTIVE)))
+
+ def = virDomainDefParseString(driver->caps, xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
+ /* virDomainDefParseString reports the error. */
goto cleanup;
if (virSecurityManagerVerify(driver->securityManager, def) < 0)
@@ -5746,12 +5748,9 @@ qemudDomainSaveImageOpen(struct qemud_driver *driver,
}
/* Create a domain from this XML */
- if (!(def = virDomainDefParseString(driver->caps, xml,
- VIR_DOMAIN_XML_INACTIVE))) {
- qemuReportError(VIR_ERR_OPERATION_FAILED,
- "%s", _("failed to parse XML"));
+ def = virDomainDefParseString(driver->caps, xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto error;
- }
VIR_FREE(xml);
@@ -6412,8 +6411,9 @@ static virDomainPtr qemudDomainDefine(virConnectPtr conn, const char *xml) {
int dupVM;
qemuDriverLock(driver);
- if (!(def = virDomainDefParseString(driver->caps, xml,
- VIR_DOMAIN_XML_INACTIVE)))
+
+ def = virDomainDefParseString(driver->caps, xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto cleanup;
if (virSecurityManagerVerify(driver->securityManager, def) < 0)
@@ -8046,13 +8046,9 @@ qemudDomainMigratePrepareTunnel(virConnectPtr dconn,
}
/* Parse the domain XML. */
- if (!(def = virDomainDefParseString(driver->caps, dom_xml,
- VIR_DOMAIN_XML_INACTIVE))) {
- qemuReportError(VIR_ERR_OPERATION_FAILED,
- "%s", _("failed to parse XML, libvirt version may be "
- "different between source and destination host"));
+ def = virDomainDefParseString(driver->caps, dom_xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto cleanup;
- }
if (!qemuDomainIsMigratable(def))
goto cleanup;
@@ -8320,12 +8316,9 @@ qemudDomainMigratePrepare2 (virConnectPtr dconn,
VIR_DEBUG("Generated uri_out=%s", *uri_out);
/* Parse the domain XML. */
- if (!(def = virDomainDefParseString(driver->caps, dom_xml,
- VIR_DOMAIN_XML_INACTIVE))) {
- qemuReportError(VIR_ERR_OPERATION_FAILED,
- "%s", _("failed to parse XML"));
+ def = virDomainDefParseString(driver->caps, dom_xml, VIR_DOMAIN_XML_INACTIVE);
+ if (!def)
goto cleanup;
- }
if (!qemuDomainIsMigratable(def))
goto cleanup;
--
1.7.4
[libvirt] [PATCH 0/2] virsh: freecell --all getting wrong NUMA nodes count
by Michal Privoznik
This is a fix for the virsh freecell command. I was using virNodeGetInfo()
instead of virConnectGetCapabilities(), which returns XML with the right
NUMA node count. We therefore need an XML/XPath function to pick node
values out of an XML document kept in a buffer. Moreover, this function
might be used to replace the bad practice in cmdVcpucount().
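[Editor's note: a rough sketch of what such an XML/XPath helper does.
The proposed xPathULong() itself is C code built on libvirt's virXPath
helpers; the Python helper name and error handling below are made up for
illustration.]

    import xml.etree.ElementTree as ET

    def node_attr_ulong(xml_buf, path, attr):
        """Hypothetical helper: locate `path` in an XML buffer and return its
        `attr` attribute as a non-negative integer, mirroring the strtoul()
        plus end-pointer check done in the C patch."""
        node = ET.fromstring(xml_buf).find(path)
        if node is None:
            raise ValueError("no node matches %r" % path)
        raw = node.get(attr)
        if raw is None or not raw.isdigit():
            raise ValueError("conversion from string failed")
        return int(raw)

    # e.g. the id of the first NUMA cell from the capabilities XML:
    #   node_attr_ulong(conn.getCapabilities(), "./host/topology/cells/cell", "id")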
Michal Privoznik (2):
virsh.c: new xPathULong() to parse XML returning ulong value of
selected node.
virsh: fix wrong NUMA nodes count getting
tools/virsh.c | 76 ++++++++++++++++++++++++++++++++++++++++++++++++--------
1 files changed, 65 insertions(+), 11 deletions(-)
--
1.7.3.5