[libvirt] [libvirt-php 0/3] Eliminated compilation warnings
by Lyre
Hi all:
These patches add the -Wall compiler option and eliminate all warnings.
Lyre (3):
Added -Wall option
Fixed some compilation warnings
Eliminated unused variables
src/Makefile.am | 4 +-
src/libvirt.c | 65 +++++++++---------------------------------------------
2 files changed, 13 insertions(+), 56 deletions(-)
[libvirt] [libvirt-php 0/2] Updated building system
by Lyre
Fixed the install location of libvirt-php.ini;
Added a spec file for the openSUSE Build Service.
Lyre (2):
Fixed the php configuration file
Added libvirt-php.obs.spec
Makefile.am | 2 +
aclocal.m4 | 4 +-
libvirt-php.obs.spec | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++
src/Makefile.am | 7 ++--
4 files changed, 88 insertions(+), 5 deletions(-)
create mode 100644 libvirt-php.obs.spec
[libvirt] CfP 6th Workshop on Virtualization in High-Performance Cloud Computing (VHPC'11)
by VHPC2011
Apologies if you received multiple copies of this message.
=================================================================
CALL FOR PAPERS
6th Workshop on
Virtualization in High-Performance Cloud Computing
VHPC'11
as part of Euro-Par 2011, Bordeaux, France
=================================================================
Date: August 30, 2011
Euro-Par 2011: http://europar2011.bordeaux.inria.fr/
Workshop URL: http://vhpc.org
SUBMISSION DEADLINE:
Abstracts: May 2, 2011
Full Paper: June 13, 2011
Scope:
Virtualization has become a common abstraction layer in modern data
centers, enabling resource owners to manage complex infrastructure
independently of their applications. At the same time, virtualization is
becoming a driving technology for a wide range of industry-grade IT
services. The cloud concept includes the notion of a separation
between resource owners and users, adding services such as hosted
application frameworks and queuing. Utilizing the same infrastructure,
clouds carry significant potential for use in high-performance
scientific computing. The ability of clouds to provision and release
vast computing resources dynamically, at close to the marginal cost of
providing the service, is unprecedented in the history of scientific
and commercial computing.
Distributed computing concepts that leverage federated resource access
are popular within the grid community, but have not yet reached the
deployment levels previously hoped for. Also, many scientific
datacenters have not yet adopted virtualization or cloud concepts.
This workshop aims to bring together industrial providers with the
scientific community in order to foster discussion, collaboration and
mutual exchange of knowledge and experience.
The workshop will be one day in length, composed of 20 min paper
presentations, each followed by a 10 min discussion session.
Presentations may be accompanied by interactive demonstrations. It
concludes with a 30 min panel discussion by presenters.
TOPICS
Topics include, but are not limited to, the following subjects:
- Virtualization in cloud, cluster and grid environments
- VM-based cloud performance modeling
- Workload characterizations for VM-based environments
- Software as a Service (SaaS)
- Cloud reliability, fault-tolerance, and security
- Cloud, cluster and grid filesystems
- QoS and service levels
- Cross-layer VM optimizations
- Virtualized I/O and storage
- Virtualization and HPC architectures including NUMA
- System and process/bytecode VM convergence
- Paravirtualized driver development
- Research and education use cases
- VM cloud, cluster distribution algorithms
- MPI on virtual machines and clouds
- Cloud frameworks and API sets
- Checkpointing of large compute jobs
- Cloud load balancing
- Accelerator virtualization
- Instrumentation interfaces and languages
- Hardware support for virtualization
- High-performance network virtualization
- Auto-tuning of VMM and VM parameters
- High-speed interconnects
- Hypervisor extensions and tools for cluster and grid computing
- VMMs/Hypervisors
- Cloud use cases including optimizations
- Performance modeling
- Fault tolerant VM environments
- VMM performance tuning on various load types
- Cloud provisioning
- Virtual machine monitor platforms
- Pass-through VM device access
- Management, deployment of VM-based environments
PAPER SUBMISSION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include an abstract, keywords, and the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures, at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series; the
format must follow the Springer LNCS style. Initial submissions are
in PDF; authors of accepted papers will be asked to provide the
source files.
Format Guidelines: http://www.springer.de/comp/lncs/authors.html
Submission Link: http://edas.info/newPaper.php?c=10155
CHAIR
Michael Alexander (chair), IBM, Austria
Gianluigi Zanetti (co-chair), CRS4, Italy
PROGRAM COMMITTEE
Paolo Anedda, CRS4, Italy
Volker Buege, University of Karlsruhe, Germany
Giovanni Busonera, CRS4, Italy
Roberto Canonico, University of Napoli, Italy
Tommaso Cucinotta, Scuola Superiore Sant'Anna, Italy
William Gardner, University of Guelph, Canada
Werner Fischer, Thomas-Krenn AG, Germany
Wolfgang Gentzsch, Max Planck Gesellschaft, Germany
Marcus Hardt, Forschungszentrum Karlsruhe, Germany
Sverre Jarp, CERN, Switzerland
Shantenu Jha, Louisiana State University, USA
Xuxian Jiang, NC State, USA
Kenji Kaneda, Google, USA
Simone Leo, CRS4, Italy
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Naoya Maruyama, Tokyo Institute of Technology, Japan
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Anastassios Nanos, National Technical University of Athens, Greece
Jose Renato Santos, HP Labs, USA
Deepak Singh, Amazon Web Services, USA
Borja Sotomayor, University of Chicago, USA
Yoshio Turner, HP Labs, USA
Kurt Tutschku, University of Vienna, Austria
Lizhe Wang, Indiana University, USA
Chao-Tung Yang, Tunghai University, China
DURATION: Workshop Duration is one day.
GENERAL INFORMATION
The workshop will be held as part of Euro-Par 2011,
organized by INRIA, CNRS and the University of Bordeaux I, II, France.
Euro-Par 2011: http://europar2011.bordeaux.inria.fr/
[libvirt] virDomainMigrate, "suitable default" for omitted bandwidth parameter
by Thomas Treutner
Hi,
does somebody know what the following paragraph means exactly, or rather
what it is supposed to mean?
"The maximum bandwidth (in Mbps) that will be used to do migration can
be specified with the bandwidth parameter. *If set to 0, libvirt will
choose a suitable default*."
http://libvirt.org/html/libvirt-libvirt.html#virDomainMigrate
What is the "suitable default"? I looked through the code for qemu and
the only call to qemuMonitorSetMigrationSpeed() I can find is in
./src/qemu/qemu_driver.c:8406, using libvirt 0.8.7. If I remember
correctly, the second condition in a conjunction is not evaluated when
the first one evaluates to false. So if resource == 0, no limit will be set?
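To make sure we are talking about the same thing, this is the kind of
construct I mean (a paraphrase for illustration, not a quote of the actual
qemu_driver.c code; the variable names are my assumptions):

  /* Paraphrased sketch, not the actual qemu_driver.c code: with C's
   * short-circuit &&, the SetMigrationSpeed call on the right-hand
   * side is never evaluated when resource == 0, so no bandwidth
   * limit would be applied in that case. */
  if (resource > 0 &&
      qemuMonitorSetMigrationSpeed(priv->mon, resource) < 0)
      goto cleanup;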
I ask because I discovered that qemu live migrates with a hard-coded
throttle of 32 MiB/s for historic reasons, which is a permanently engaged
handbrake if you have GBit Ethernet, and additionally annoying given
qemu's broken way of live migration (no maximum number of iterations,
no forced action, no error message, no abort - no *nothing*). Actually
*using* GBit Ethernet often solves this problem, as the bandwidth
available for transferring dirty pages is quadrupled.
Also see qemu mailing list, Message-ID: <4D52D95D.3030300@scripty.at>
There was a short discussion on IRC where concerns about "breaking libvirt"
by deactivating the default limit were raised. If there really are
applications that depend on handbraked live migration, I think these
applications should simply pass the limit they need to virDomainMigrate(),
as sketched below.
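For example, something along these lines (a rough sketch only; the
connection URIs, the guest name and the 256 Mbps cap are illustrative,
and error handling is kept to a minimum):

  #include <stdio.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      /* Illustrative only: open source and destination connections,
       * look up the guest, and migrate it with an explicit bandwidth
       * cap instead of relying on any default. */
      virConnectPtr src = virConnectOpen("qemu:///system");
      virConnectPtr dst = virConnectOpen("qemu+ssh://target/system");
      virDomainPtr dom = src ? virDomainLookupByName(src, "guest") : NULL;

      if (dom && dst) {
          virDomainPtr migrated = virDomainMigrate(dom, dst,
                                                   VIR_MIGRATE_LIVE,
                                                   NULL,  /* keep domain name */
                                                   NULL,  /* let libvirt pick the URI */
                                                   256);  /* bandwidth cap in Mbps */
          if (!migrated)
              fprintf(stderr, "migration failed\n");
          else
              virDomainFree(migrated);
      }

      if (dom)
          virDomainFree(dom);
      if (dst)
          virConnectClose(dst);
      if (src)
          virConnectClose(src);
      return 0;
  }

That way the throttled behaviour stays available for anyone who actually
wants it, without forcing it on everyone else.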
What do you think?
regards,
-t
[libvirt] Html docs for win32 build?
by Justin Clift
Hi Matthias,
Thinking we should include the generated html docs
in the win32 installer.
Any thoughts/objections?
Also, thinking that if we do, it shouldn't be too hard
to install the xhtml1-dtds and use them during the libvirt
compile process.
As a quick test, since I'd already worked out the steps for
the xhtml1 bits when packaging OSX, it was pretty easy to
cut-n-paste the bits into a rough script to do it (attached).
Any use?
Regards and best wishes,
Justin Clift
[libvirt] [PATCH] docs: fix typos
by Eric Blake
* docs/drvopenvz.html.in: Spell administrator correctly.
* docs/drvuml.html.in: Likewise.
* src/qemu/qemu.conf: Likewise. Fix other typos, too.
---
Pushing under the obvious rule.
docs/drvopenvz.html.in | 2 +-
docs/drvuml.html.in | 2 +-
src/qemu/qemu.conf | 25 +++++++++++++------------
3 files changed, 15 insertions(+), 14 deletions(-)
diff --git a/docs/drvopenvz.html.in b/docs/drvopenvz.html.in
index 485d209..ddd6ac1 100644
--- a/docs/drvopenvz.html.in
+++ b/docs/drvopenvz.html.in
@@ -55,7 +55,7 @@ openvz+ssh://root@example.com/system (remote access, SSH tunnelled)
OpenVZ releases later than 3.0.23 ship with a standard network device
setup script that is able to setup bridging, named
<code>/usr/sbin/vznetaddbr</code>. For releases prior to 3.0.23, this
- script must be created manually by the host OS adminstrator. The
+ script must be created manually by the host OS administrator. The
simplest way is to just download the latest version of this script
from a newer OpenVZ release, or upstream source repository. Then
a generic configuration file <code>/etc/vz/vznetctl.conf</code>
diff --git a/docs/drvuml.html.in b/docs/drvuml.html.in
index 9e5db95..d18e9cc 100644
--- a/docs/drvuml.html.in
+++ b/docs/drvuml.html.in
@@ -7,7 +7,7 @@
guests built for User Mode Linux. UML requires no special support in
the host kernel, so can be used by any user of any linux system, provided
they have enough free RAM for their guest's needs, though there are
- certain restrictions on network connectivity unless the adminstrator
+ certain restrictions on network connectivity unless the administrator
has pre-created TAP devices.
</p>
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index 66310d4..8c6b996 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -47,7 +47,7 @@
# The default TLS configuration only uses certificates for the server
# allowing the client to verify the server's identity and establish
-# and encrypted channel.
+# an encrypted channel.
#
# It is possible to use x509 certificates for authentication too, by
# issuing a x509 certificate to every client who needs to connect.
@@ -62,9 +62,9 @@
# VNC passwords. This parameter is only used if the per-domain
# XML config does not already provide a password. To allow
# access without passwords, leave this commented out. An empty
-# string will still enable passwords, but be rejected by QEMU
+# string will still enable passwords, but be rejected by QEMU,
# effectively preventing any use of VNC. Obviously change this
-# example here before you set this
+# example here before you set this.
#
# vnc_password = "XYZ12345"
@@ -115,7 +115,7 @@
# server-cert.pem - the server certificate signed with ca-cert.pem
# server-key.pem - the server private key
#
-# This option allows the certificate directory to be changed
+# This option allows the certificate directory to be changed.
#
# spice_tls_x509_cert_dir = "/etc/pki/libvirt-spice"
@@ -124,8 +124,8 @@
# per-domain XML config does not already provide a password. To
# allow access without passwords, leave this commented out. An
# empty string will still enable passwords, but be rejected by
-# QEMU effectively preventing any use of SPICE. Obviously change
-# this example here before you set this
+# QEMU, effectively preventing any use of SPICE. Obviously change
+# this example here before you set this.
#
# spice_password = "XYZ12345"
@@ -134,15 +134,15 @@
# on the host, then the security driver will automatically disable
# itself. If you wish to disable QEMU SELinux security driver while
# leaving SELinux enabled for the host in general, then set this
-# to 'none' instead
+# to 'none' instead.
#
# security_driver = "selinux"
-# The user ID for QEMU processes run by the system instance
+# The user ID for QEMU processes run by the system instance.
#user = "root"
-# The group ID for QEMU processes run by the system instance
+# The group ID for QEMU processes run by the system instance.
#group = "root"
# Whether libvirt should dynamically change file ownership
@@ -155,14 +155,15 @@
#
# - 'cpu' - use for schedular tunables
# - 'devices' - use for device whitelisting
+# - 'memory' - use for memory tunables
#
# NB, even if configured here, they won't be used unless
-# the adminsitrator has mounted cgroups. eg
+# the administrator has mounted cgroups, e.g.:
#
# mkdir /dev/cgroup
# mount -t cgroup -o devices,cpu,memory none /dev/cgroup
#
-# They can be mounted anywhere, and different controlers
+# They can be mounted anywhere, and different controllers
# can be mounted in different locations. libvirt will detect
# where they are located.
#
@@ -175,7 +176,7 @@
# all sound device, and all PTY devices are allowed.
#
# This will only need setting if newer QEMU suddenly
-# wants some device we don't already know a bout.
+# wants some device we don't already know about.
#
#cgroup_device_acl = [
# "/dev/null", "/dev/full", "/dev/zero",
--
1.7.4
[libvirt] [PATCH 0/2] Fix test failures on some architectures
by Jiri Denemark
This fixes the following test failures seen on some architectures:
TEST: qemuxml2argvtest
........................................ 40
........................................ 80
.............................!.!!!!! 116 FAIL
Jiri Denemark (2):
tests: Fake host capabilities properly
qemu: Fix command line generation with faked host CPU
src/qemu/qemu_command.c | 8 +++++---
tests/testutilsqemu.c | 8 +++++---
2 files changed, 10 insertions(+), 6 deletions(-)
--
1.7.4.1
[libvirt] [PATCH] build: address clang reports about virCommand
by Eric Blake
clang had 5 reports against virCommand; three were false positives
(a NULL deref in ProcessIO solved by sa_assert, and two uninitialized
memory operations solved by adding an initializer), but two were real.
* src/util/command.c (virCommandProcessIO): Fix real bug of
possible NULL dereference. Teach clang that buf is never NULL.
(virCommandRun): Teach clang that infd is only ever accessed when
initialized.
---
src/util/command.c | 10 ++++++----
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/src/util/command.c b/src/util/command.c
index abd2dc4..0845db4 100644
--- a/src/util/command.c
+++ b/src/util/command.c
@@ -1,7 +1,7 @@
/*
* command.c: Child command execution
*
- * Copyright (C) 2010 Red Hat, Inc.
+ * Copyright (C) 2010-2011 Red Hat, Inc.
*
* This library is free software; you can redistribute it and/or
* modify it under the terms of the GNU Lesser General Public
@@ -881,6 +881,8 @@ virCommandProcessIO(virCommandPtr cmd)
buf = cmd->errbuf;
len = &errlen;
}
+ /* Silence a false positive from clang. */
+ sa_assert(buf);
done = read(fds[i].fd, data, sizeof(data));
if (done < 0) {
@@ -930,9 +932,9 @@ virCommandProcessIO(virCommandPtr cmd)
ret = 0;
cleanup:
- if (*cmd->outbuf)
+ if (cmd->outbuf && *cmd->outbuf)
(*cmd->outbuf)[outlen] = '\0';
- if (*cmd->errbuf)
+ if (cmd->errbuf && *cmd->errbuf)
(*cmd->errbuf)[errlen] = '\0';
return ret;
}
@@ -950,7 +952,7 @@ virCommandRun(virCommandPtr cmd, int *exitstatus)
int ret = 0;
char *outbuf = NULL;
char *errbuf = NULL;
- int infd[2];
+ int infd[2] = { -1, -1 };
struct stat st;
bool string_io;
bool async_io = false;
--
1.7.4