[libvirt] Submission Deadline Extension
by VHPC 12
We apologize if you receive multiple copies of this CFP.
===================================================================
CALL FOR PAPERS
7th Workshop on
Virtualization in High-Performance Cloud Computing
VHPC '12
as part of Euro-Par 2012, Rhodes Island, Greece
===================================================================
Date: August 28, 2012
Workshop URL: http://vhpc.org
SUBMISSION DEADLINE:
June 11, 2012 - Full paper submission (extended)
SCOPE:
Virtualization has become a common abstraction layer in modern
data centers, enabling resource owners to manage complex
infrastructure independently of their applications. At the same time,
virtualization is becoming a driving technology for a wide range of
industry-grade IT services. The cloud concept includes the notion
of a separation between resource owners and users, adding services
such as hosted application frameworks and queueing. Utilizing the
same infrastructure, clouds carry significant potential for use in
high-performance scientific computing. The ability of clouds to provision
and release vast computing resources on demand, at close to the marginal
cost of providing the service, is unprecedented in the history of
scientific and commercial computing.
Distributed computing concepts that leverage federated resource
access are popular within the grid community, but have not yet reached
the deployment levels originally envisioned. In addition, many scientific
data centers have not yet adopted virtualization or cloud concepts.
This workshop aims to bring together industrial providers with the
scientific community in order to foster discussion, collaboration
and mutual exchange of knowledge and experience.
The workshop will be one day in length, consisting of 20-minute
paper presentations, each followed by a 10-minute discussion.
Presentations may be accompanied by interactive demonstrations.
TOPICS
Topics of interest include, but are not limited to:
Higher-level cloud architectures, focusing on issues such as:
- Languages for describing highly-distributed compute jobs
- Workload characterization for VM-based environments
- Optimized communication libraries/protocols in the cloud
- Cross-layer optimization of numeric algorithms on VM infrastructure
- System and process/bytecode VM convergence
- Cloud frameworks and API sets
- Checkpointing/migration of large compute jobs
- Instrumentation interfaces and languages
- VMM performance (auto-)tuning on various load types
- Cloud reliability, fault-tolerance, and security
- Software as a Service (SaaS) architectures
- Research and education use cases
- Virtualization in cloud, cluster and grid environments
- Cross-layer VM optimizations
- Cloud use cases including optimizations
- VM-based cloud performance modelling
- Performance and cost modelling
Lower-level design challenges for Hypervisors, VM-aware I/O devices,
hardware accelerators or filesystems in VM environments, especially:
- Cloud, grid and distributed filesystems
- Hardware for I/O virtualization (storage/network/accelerators)
- Storage and network I/O subsystems in virtualized environments
- Novel software approaches to I/O virtualization
- Paravirtualized I/O subsystems for modified/unmodified guests
- Virtualization-aware cluster interconnects
- Direct device assignment
- NUMA-aware subsystems in virtualized environments
- Hardware Accelerators in virtualization (GPUs/FPGAs)
- Hardware extensions for virtualization
- VMMs/Hypervisors for embedded systems
Data Center management methods, including:
- QoS and service levels
- VM cloud and cluster distribution algorithms
- VM load-balancing in Clouds
- Hypervisor extensions and tools for cluster and grid computing
- Fault tolerant VM environments
- Virtual machine monitor platforms
- Management, deployment and monitoring of VM-based environments
- Cluster provisioning in the Cloud
PAPER SUBMISSION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include an abstract, keywords, and the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables and
figures, at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series; the
format must follow the Springer LNCS style. Initial
submissions are in PDF; authors of accepted papers will be requested
to provide source files.
Format Guidelines: http://www.springer.de/comp/lncs/authors.html
Style template:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract Submission Link: http://edas.info/newPaper.php?c=11943
IMPORTANT DATES
Rolling abstract submission
June 11, 2012 - Full paper submission (extended)
June 29, 2012 - Acceptance notification
July 20, 2012 - Camera-ready version due
August 28, 2012 - Workshop Date
CHAIR
Michael Alexander (chair), TU Wien, Austria
Gianluigi Zanetti (co-chair), CRS4, Italy
Anastassios Nanos (co-chair), NTUA, Greece
PROGRAM COMMITTEE
Paolo Anedda, CRS4, Italy
Giovanni Busonera, CRS4, Italy
Brad Calder, Microsoft, USA
Roberto Canonico, University of Napoli Federico II, Italy
Tommaso Cucinotta, Alcatel-Lucent Bell Labs, Ireland
Werner Fischer, Thomas-Krenn AG, Germany
William Gardner, University of Guelph, USA
Marcus Hardt, Forschungszentrum Karlsruhe, Germany
Sverre Jarp, CERN, Switzerland
Shantenu Jha, Louisiana State University, USA
Xuxian Jiang, NC State, USA
Nectarios Koziris, National Technical University of Athens, Greece
Simone Leo, CRS4, Italy
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Naoya Maruyama, Tokyo Institute of Technology, Japan
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Dimitrios Nikolopoulos, Foundation for Research&Technology Hellas, Greece
Jose Renato Santos, HP Labs, USA
Walter Schwaiger, TU Wien, Austria
Yoshio Turner, HP Labs, USA
Kurt Tutschku, University of Vienna, Austria
Lizhe Wang, Indiana University, USA
Chao-Tung Yang, Tunghai University, Taiwan
DURATION: The workshop duration is one day.
GENERAL INFORMATION
The workshop will be held as part of Euro-Par 2012.
Euro-Par 2012: http://europar2012.cti.gr/
[libvirt] [PATCH] qemu: don't modify domain on failed blockiotune
by Eric Blake
If you have a qemu build that lacks the blockio tune monitor command,
then this command:
$ virsh blkdeviotune rhel6u2 hda --total_bytes_sec 1000
error: Unable to change block I/O throttle
error: internal error Unexpected error
fails as expected (well, the error message is lousy), but the next
dumpxml shows that the domain was modified anyway. Worse, that means
if you save the domain then restore it, the restore will likely fail
due to throttling being unsupported, even though no throttling should
even be active because the monitor command failed in the first place.
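As a rough way to confirm the behavior this patch targets (a sketch, not part of the patch: it reuses the command from above and assumes the throttle settings are persisted as an <iotune> element in the disk XML):
$ virsh blkdeviotune rhel6u2 hda --total_bytes_sec 1000
error: Unable to change block I/O throttle
error: internal error Unexpected error
# before the patch the failed setting still leaks into the domain XML;
# after the patch this should print 0
$ virsh dumpxml rhel6u2 | grep -c '<iotune>'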
* src/qemu/qemu_driver.c (qemuDomainSetBlockIoTune): Check for
error before making modification permanent.
---
src/qemu/qemu_driver.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index a36c348..2fde15b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -12011,9 +12011,9 @@ qemuDomainSetBlockIoTune(virDomainPtr dom,
qemuDomainObjEnterMonitorWithDriver(driver, vm);
ret = qemuMonitorSetBlockIoThrottle(priv->mon, device, &info);
qemuDomainObjExitMonitorWithDriver(driver, vm);
- vm->def->disks[idx]->blkdeviotune = info;
if (ret < 0)
goto endjob;
+ vm->def->disks[idx]->blkdeviotune = info;
}
if (flags & VIR_DOMAIN_AFFECT_CONFIG) {
--
1.7.7.6
[libvirt] [PATCHv2] util: fix libvirtd startup failure due to netlink error
by Laine Stump
This solves the problem detailed in:
https://bugzilla.redhat.com/show_bug.cgi?id=816465
and further detailed in
https://www.redhat.com/archives/libvir-list/2012-May/msg00202.html
A short explanation is included in the comments of the patch itself.
Even with ACK, I will wait to push this until I have verification that
it does not break lldpad<-->libvirtd communication (if it does, I may
need to use the nl_handle allocated during virNetlinkStartup() for
virNetlinkEventServiceStart()).
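Not part of the patch, but a rough way to exercise the intermittent startup failure described in the bug (assumes a systemd-based host such as Fedora 16; on other systems the restart command will differ):
# restart libvirtd repeatedly; before the patch, a noticeable fraction of
# attempts reportedly fail due to the netlink bind collision
for i in $(seq 1 20); do
    systemctl restart libvirtd || echo "libvirtd failed to start on attempt $i"
done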
---
daemon/libvirtd.c | 6 +++++
src/libvirt_private.syms | 2 ++
src/util/virnetlink.c | 67 +++++++++++++++++++++++++++++++++++++++++++++-
src/util/virnetlink.h | 5 +++-
4 files changed, 78 insertions(+), 2 deletions(-)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index b098f6a..5d57b50 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -1007,6 +1007,11 @@ int main(int argc, char **argv) {
goto cleanup;
}
+ if (virNetlinkStartup() < 0) {
+ ret = VIR_DAEMON_ERR_INIT;
+ goto cleanup;
+ }
+
if (!(srv = virNetServerNew(config->min_workers,
config->max_workers,
config->prio_workers,
@@ -1143,6 +1148,7 @@ cleanup:
virNetServerProgramFree(qemuProgram);
virNetServerClose(srv);
virNetServerFree(srv);
+ virNetlinkShutdown();
if (statuswrite != -1) {
if (ret != 0) {
/* Tell parent of daemon what failed */
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index d4038b2..e911774 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1330,6 +1330,8 @@ virNetlinkEventRemoveClient;
virNetlinkEventServiceIsRunning;
virNetlinkEventServiceStop;
virNetlinkEventServiceStart;
+virNetlinkShutdown;
+virNetlinkStartup;
# virnetmessage.h
diff --git a/src/util/virnetlink.c b/src/util/virnetlink.c
index b2e9d51..a249e94 100644
--- a/src/util/virnetlink.c
+++ b/src/util/virnetlink.c
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2010-2011 Red Hat, Inc.
+ * Copyright (C) 2010-2012 Red Hat, Inc.
* Copyright (C) 2010-2012 IBM Corporation
*
* This library is free software; you can redistribute it and/or
@@ -88,10 +88,63 @@ static int nextWatch = 1;
# define NETLINK_EVENT_ALLOC_EXTENT 10
static virNetlinkEventSrvPrivatePtr server = NULL;
+static struct nl_handle *placeholder_nlhandle = NULL;
/* Function definitions */
/**
+ * virNetlinkStartup:
+ *
+ * Perform any initialization that needs to take place before the
+ * program starts up worker threads. This is currently used to assure
+ * that an nl_handle is allocated prior to any attempts to bind a
+ * netlink socket. For a discussion of why this is necessary, please
+ * see the following email message:
+ *
+ * https://www.redhat.com/archives/libvir-list/2012-May/msg00202.html
+ *
+ * The short version is that, without this placeholder allocation of
+ * an nl_handle that is never used, it is possible for nl_connect() in
+ * one thread to collide with a direct bind() of a netlink socket in
+ * another thread, leading to failure of the operation (which could
+ * lead to failure of libvirtd to start). Since getaddrinfo() (used by
+ * libvirtd in virSocketAddrParse, which is called quite frequently
+ * during startup) directly calls bind() on a netlink socket, this is
+ * actually a very common occurrence (15-20% failure rate on some
+ * hardware).
+ *
+ * Returns 0 on success, -1 on failure.
+ */
+int
+virNetlinkStartup(void)
+{
+ if (placeholder_nlhandle)
+ return 0;
+ placeholder_nlhandle = nl_handle_alloc();
+ if (!placeholder_nlhandle) {
+ virReportSystemError(errno, "%s",
+ _("cannot allocate placeholder nlhandle for netlink"));
+ return -1;
+ }
+ return 0;
+}
+
+/**
+ * virNetlinkShutdown:
+ *
+ * Undo any initialization done by virNetlinkStartup. This currently
+ * destroys the placeholder nl_handle.
+ */
+void
+virNetlinkShutdown(void)
+{
+ if (placeholder_nlhandle) {
+ nl_handle_destroy(placeholder_nlhandle);
+ placeholder_nlhandle = NULL;
+ }
+}
+
+/**
* virNetlinkCommand:
* @nlmsg: pointer to netlink message
* @respbuf: pointer to pointer where response buffer will be allocated
@@ -535,6 +588,18 @@ static const char *unsupported = N_("libnl was not available at build time");
static const char *unsupported = N_("not supported on non-linux platforms");
# endif
+int
+virNetlinkStartup(void)
+{
+ return 0;
+}
+
+void
+virNetlinkShutdown(void)
+{
+ return;
+}
+
int virNetlinkCommand(struct nl_msg *nl_msg ATTRIBUTE_UNUSED,
unsigned char **respbuf ATTRIBUTE_UNUSED,
unsigned int *respbuflen ATTRIBUTE_UNUSED,
diff --git a/src/util/virnetlink.h b/src/util/virnetlink.h
index a72612e..93df59a 100644
--- a/src/util/virnetlink.h
+++ b/src/util/virnetlink.h
@@ -1,5 +1,5 @@
/*
- * Copyright (C) 2010-2011 Red Hat, Inc.
+ * Copyright (C) 2010-2012 Red Hat, Inc.
* Copyright (C) 2010-2012 IBM Corporation
*
* This library is free software; you can redistribute it and/or
@@ -35,6 +35,9 @@ struct nlattr;
# endif /* __linux__ */
+int virNetlinkStartup(void);
+void virNetlinkShutdown(void);
+
int virNetlinkCommand(struct nl_msg *nl_msg,
unsigned char **respbuf, unsigned int *respbuflen,
int nl_pid);
--
1.7.10
Re: [libvirt] Virsh Command Reference -- Feedback
by Eric Blake
[adding the mailing list]
On 04/24/2012 09:02 PM, Justin Clift wrote:
> On 20/04/2012, at 11:58 PM, Robert Urban wrote:
>> Hello,
>>
>> at the bottom of
>>
>> http://libvirt.org/sources/virshcmdref/html-single/
>>
>> someone has written "we need feedback!", so I thought I'd give you a little feedback.
>>
>> I suggest deleting everything and replacing it with the string "Needs to be written". It would save a lot of time for people who might be under the mistaken impression that there is anything of use to be found here.
>
> Ouch, guess that means you didn't find the few pages that actually have been done?
>
> The Virsh Command Reference was an effort that started some time ago when I was
> working on the Libvirt team. But, it's not received much attention since I
> moved teams. A few external contributors have updated or expanded things here
> and there.
>
> Don't suppose you'd be interested in helping get it in shape? As you kinda point
> out, the present version of it kinda sucks. :( All help appreciated. :)
Reiterating this point to a wider audience. Is there anyone willing to
help with improving our documentation?
--
Eric Blake eblake(a)redhat.com +1-919-301-3266
Libvirt virtualization library http://libvirt.org
[libvirt] This patch mounts tmpfs on /run iff /run directory exists in libvirt-lxc containers.
by Daniel J Walsh
We do not want to share /run with containers, in order to prevent information
leakage and to keep applications within the containers from communicating with
applications outside of the container.
It uses the same mount options used for /dev.
We also want to bind mount over the /var/run directory: /var/run is usually a
symbolic link to /run, but on some installations /run is bind mounted over
/var/run. If we just mount /run, we are not guaranteed that /var/run will have
the same content.
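For illustration only, the container setup described above corresponds roughly to the following manual steps inside the container's mount namespace (the tmpfs options are assumptions modeled on a typical private /dev mount; the patch simply reuses whatever options libvirt_lxc applies to /dev):
# mount a private tmpfs on /run so the host's /run is not shared
mount -t tmpfs -o nosuid,nodev,mode=755 tmpfs /run
# make sure /var/run sees the same (empty) content, even on systems
# where /run is bind mounted over /var/run instead of symlinked
mount --bind /run /var/run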
[libvirt] disable usb?
by Gerd Hoffmann
Hi,
Is there some way to disable usb altogether? libvirt used to just pass
in '-usb'. With the arrival of usb2 support that changed into '-device
uhci,...'. The problem is that this breaks with several machine types such
as isapc: '-usb' is silently ignored if the machine type can't handle USB,
but '-device uhci,...' leads to an error message and the guest doesn't
start.
/me tried "<controller type='usb' model='none'/>" which didn't work.
Just removing the controller from the XML doesn't work either; it gets
automagically re-added.
Suggestions?
thanks,
Gerd
Re: [libvirt] [Users] Host missing cpuFlags
by Itamar Heim
On 05/04/2012 09:23 AM, ovirt(a)qip.ru wrote:
> The engine incorrectly shows the cpu name (cluster compatibility is set to 3.1)
>
> all hosts in my cluster are shown by the engine as Conroe, but I have 2 hosts
> from the Nehalem family and 2 from SandyBridge
>
> this host should be SandyBridge but is shown as Conroe
vdsm reports the model at the end of cpuFlags based on the models libvirt
reports; cc'ing the libvirt list for their thoughts.
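As a generic cross-check (not from this thread; output will differ per host), one can compare what libvirt itself reports for the host CPU model against what vdsm derives:
$ virsh capabilities | grep '<model>'
# the first <model> under <host><cpu> is the CPU model libvirt detected
# for the host; the model_* entries at the end of vdsm's cpuFlags are
# derived from this libvirt-reported information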
>
> # vdsClient -s 0 getVdsCaps
> HBAInventory = {'iSCSI': [{'InitiatorName':
> 'iqn.1994-05.com.redhat:7e8ac59efbe0'}], 'FC': []}
> ISCSIInitiatorName = iqn.1994-05.com.redhat:7e8ac59efbe0
> bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask':
> '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '',
> 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
> '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500',
> 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2':
> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu':
> '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
> clusterLevels = ['3.0', '3.1']
> cpuCores = 4
> cpuFlags =
> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,pclmulqdq,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,sse4_1,sse4_2,x2apic,popcnt,tsc_deadline_timer,aes,xsave,avx,lahf_lm,ida,arat,epb,xsaveopt,pln,pts,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
> cpuModel = Intel(R) Core(TM) i5-2400 CPU @ 3.10GHz
> cpuSockets = 1
> cpuSpeed = 1600.000
> emulatedMachines = ['pc-0.14', 'pc', 'fedora-13', 'pc-0.13', 'pc-0.12',
> 'pc-0.11', 'pc-0.10', 'isapc', 'pc-0.14', 'pc', 'fedora-13', 'pc-0.13',
> 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
> guestOverhead = 65
> hooks = {}
> kvmEnabled = true
> lastClient = 192.168.131.42
> lastClientIface = ovirtmgmt
> management_ip =
> memSize = 16032
> networks = {'ovirtmgmt': {'addr': '192.168.130.73', 'cfg': {'DELAY':
> '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
> 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
> '255.255.0.0', 'stp': 'off', 'bridged': 'True', 'gateway':
> '192.168.131.17', 'ports': ['p6p1']}, 'Storage': {'addr':
> '172.16.0.100', 'cfg': {'IPADDR': '172.16.0.100', 'DELAY': '0',
> 'NETMASK': '255.255.255.0', 'STP': 'no', 'DEVICE': 'Storage', 'TYPE':
> 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0',
> 'stp': 'off', 'bridged': 'True', 'gateway': '0.0.0.0', 'ports': ['p4p1']}}
> nics = {'p4p1': {'hwaddr': '00:1B:21:1D:B6:E2', 'netmask': '', 'speed':
> 1000, 'addr': '', 'mtu': '1500'}, 'p6p1': {'hwaddr':
> '54:04:A6:A1:BA:39', 'netmask': '', 'speed': 1000, 'addr': '', 'mtu':
> '1500'}}
> operatingSystem = {'release': '1', 'version': '16', 'name': 'oVirt Node'}
> packages2 = {'kernel': {'release': '6.fc16.x86_64', 'buildtime':
> 1334997800.0, 'version': '3.3.2'}, 'spice-server': {'release': '1.fc16',
> 'buildtime': '1327339129', 'version': '0.10.1'}, 'vdsm': {'release':
> '0.161.git094ec00.fc16', 'buildtime': '1336045451', 'version': '4.9.6'},
> 'qemu-kvm': {'release': '4.fc16', 'buildtime': '1327954752', 'version':
> '0.15.1'}, 'libvirt': {'release': '1.fc16', 'buildtime': '1336048215',
> 'version': '0.9.11.3'}, 'qemu-img': {'release': '4.fc16', 'buildtime':
> '1327954752', 'version': '0.15.1'}}
> reservedMem = 321
> software_revision = 0
> software_version = 4.9
> supportedProtocols = ['2.2', '2.3']
> supportedRHEVMs = ['3.0']
> uuid = 1C8F4D00-5BCB-11D9-8A55-5404A6A1BA39_00:1B:21:1D:B6:E2
> version_name = Snow Man
> vlans = {}
> vmTypes = ['kvm']
>
>
>
> this host should be Nehalem but is shown as Conroe
>
> # vdsClient -s 0 getVdsCaps
> HBAInventory = {'iSCSI': [{'InitiatorName':
> 'iqn.1994-05.com.redhat:972b88629c1'}], 'FC': []}
> ISCSIInitiatorName = iqn.1994-05.com.redhat:972b88629c1
> bondings = {'bond4': {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask':
> '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond0': {'addr': '',
> 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [], 'hwaddr':
> '00:00:00:00:00:00'}, 'bond1': {'addr': '', 'cfg': {}, 'mtu': '1500',
> 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}, 'bond2':
> {'addr': '', 'cfg': {}, 'mtu': '1500', 'netmask': '', 'slaves': [],
> 'hwaddr': '00:00:00:00:00:00'}, 'bond3': {'addr': '', 'cfg': {}, 'mtu':
> '1500', 'netmask': '', 'slaves': [], 'hwaddr': '00:00:00:00:00:00'}}
> clusterLevels = ['3.0', '3.1']
> cpuCores = 8
> cpuFlags =
> fpu,vme,de,pse,tsc,msr,pae,mce,cx8,apic,sep,mtrr,pge,mca,cmov,pat,pse36,clflush,dts,acpi,mmx,fxsr,sse,sse2,ss,ht,tm,pbe,syscall,nx,pdpe1gb,rdtscp,lm,constant_tsc,arch_perfmon,pebs,bts,rep_good,nopl,xtopology,nonstop_tsc,aperfmperf,pni,dtes64,monitor,ds_cpl,vmx,smx,est,tm2,ssse3,cx16,xtpr,pdcm,pcid,dca,sse4_1,sse4_2,popcnt,lahf_lm,ida,arat,epb,dts,tpr_shadow,vnmi,flexpriority,ept,vpid,model_coreduo,model_Conroe
> cpuModel = Intel(R) Xeon(R) CPU E5620 @ 2.40GHz
> cpuSockets = 2
> cpuSpeed = 1600.000
> emulatedMachines = ['pc-0.14', 'pc', 'fedora-13', 'pc-0.13', 'pc-0.12',
> 'pc-0.11', 'pc-0.10', 'isapc', 'pc-0.14', 'pc', 'fedora-13', 'pc-0.13',
> 'pc-0.12', 'pc-0.11', 'pc-0.10', 'isapc']
> guestOverhead = 65
> hooks = {}
> kvmEnabled = true
> lastClient = 192.168.131.42
> lastClientIface = ovirtmgmt
> management_ip =
> memSize = 36200
> networks = {'ovirtmgmt': {'addr': '192.168.131.46', 'cfg': {'DELAY':
> '0', 'NM_CONTROLLED': 'no', 'BOOTPROTO': 'dhcp', 'DEVICE': 'ovirtmgmt',
> 'TYPE': 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask':
> '255.255.0.0', 'stp': 'off', 'bridged': 'True', 'gateway':
> '192.168.131.17', 'ports': ['em1']}, 'Storage': {'addr': '172.16.0.104',
> 'cfg': {'IPADDR': '172.16.0.104', 'DELAY': '0', 'NM_CONTROLLED': 'no',
> 'NETMASK': '255.255.255.0', 'STP': 'no', 'DEVICE': 'Storage', 'TYPE':
> 'Bridge', 'ONBOOT': 'yes'}, 'mtu': '1500', 'netmask': '255.255.255.0',
> 'stp': 'off', 'bridged': 'True', 'gateway': '0.0.0.0', 'ports': ['em2']}}
> nics = {'em1': {'hwaddr': '00:25:90:66:53:E8', 'netmask': '', 'speed':
> 1000, 'addr': '', 'mtu': '1500'}, 'em2': {'hwaddr': '00:25:90:66:53:E9',
> 'netmask': '', 'speed': 100, 'addr': '', 'mtu': '1500'}}
> operatingSystem = {'release': '1', 'version': '16', 'name': 'oVirt Node'}
> packages2 = {'kernel': {'release': '6.fc16.x86_64', 'buildtime':
> 1334997800.0, 'version': '3.3.2'}, 'spice-server': {'release': '1.fc16',
> 'buildtime': '1327339129', 'version': '0.10.1'}, 'vdsm': {'release':
> '0.155.git457d1a0.fc16', 'buildtime': '1335883691', 'version': '4.9.6'},
> 'qemu-kvm': {'release': '4.fc16', 'buildtime': '1327954752', 'version':
> '0.15.1'}, 'libvirt': {'release': '1.fc16', 'buildtime': '1335356374',
> 'version': '0.9.11'}, 'qemu-img': {'release': '4.fc16', 'buildtime':
> '1327954752', 'version': '0.15.1'}}
> reservedMem = 321
> software_revision = 0
> software_version = 4.9
> supportedProtocols = ['2.2', '2.3']
> supportedRHEVMs = ['3.0']
> uuid = 49434D53-0200-9066-2500-66902500E853_00:25:90:66:53:E8
> version_name = Snow Man
> vlans = {}
> vmTypes = ['kvm']
>
>
>
> Fri, 04 May 2012 09:14:22 +0400, Itamar Heim <iheim(a)redhat.com> wrote:
>
> On 05/04/2012 06:49 AM, Nicholas Kesick wrote:
> > I managed to get a host successfully added into oVirt Manager (Fedora16
> > minimum install, then used the wiki RPM install method), but the last
> > event reports "Host <hostname> moved to Non-operational state as host
> > does not meet the cluster's minimum CPU level. Missing CPU features:
> > CpuFlags"
> >
> > Can anyone shine some light on the problem? The CPU does support
> > virtualization... and as far as I can tell from cat /proc/cpuinfo
> > does have cpu flags.
> > flags : fpu vme de pse tsc msr *pae* mce cx8 apic sep mtrr pge mca cmov
> > pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm
> > constant_tsc pebs bts nopl pni dtes64 monitor ds_cpl *vmx* est cid cx16
> > xtpr pdcm lahf_lm tpr_shadow
>
> what is the cpu level of the cluster?
> what cluster compatibility level?
> what does vdsClient -s 0 getVdsCaps show for cpu flags?
>
> >
> > Many thanks
> > - Nick
> >
> >
> > _______________________________________________
> > Users mailing list
> > Users(a)ovirt.org
> > http://lists.ovirt.org/mailman/listinfo/users
>
> _______________________________________________
> Users mailing list
> Users(a)ovirt.org
> http://lists.ovirt.org/mailman/listinfo/users
>
>
>
> --
>
[libvirt] [PATCH v3 0/3] usb devices with same vendorID, productID hotplug support
by Guannan Ren
https://bugzilla.redhat.com/show_bug.cgi?id=815755
This patch set tries to fix the issue that, when multiple USB devices with the
same idVendor and idProduct are available on the host, the USB device with the
lowest bus:device number is attached to the guest if a USB XML file is given
like this:
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<vendor id='0x15e1'/>
<product id='0x2007'/>
</source>
</hostdev>
The reason is that the USB hotplug function searches the system files for a USB
device matching the vendor and product id, and the file with the lowest number
is always found first.
After the fix, in this case libvirt will report an error like:
# virsh attach-device rhel6u1 /tmp/usb.xml
error: Failed to attach device from /tmp/usb.xml
error: operation failed: multiple USB devices for 15e1:2007, use <address> to specify one
At the same time, the USB part of domain initialization is also updated in patch 2/3.
These patches also fix a problem where, when using the following XML, the USB
device could be hotplugged into two domains without raising errors:
<hostdev mode='subsystem' type='usb' managed='yes'>
<source>
<address bus='6' device='7'/>
</source>
</hostdev>
# virsh attach-device rhel6u12nd /tmp/usb_by_bus.xml
error: Failed to attach device from /tmp/usb_by_bus.xml
error: Requested operation is not valid: USB device 006:007 is in use by domain rhel6u1
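A hedged sketch of the disambiguation the first error message asks for, combining vendor/product with an explicit <address> (the bus/device numbers are placeholders taken from the example above, and whether both forms may be combined in one <source> is an assumption, not something stated in this cover letter):
$ cat > /tmp/usb_specific.xml <<'EOF'
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x15e1'/>
    <product id='0x2007'/>
    <address bus='6' device='7'/>
  </source>
</hostdev>
EOF
$ virsh attach-device rhel6u1 /tmp/usb_specific.xml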