[libvirt-users] Clarification about virsh migration options
by Berend Dekens
I am trying to work out what all the options are for migrating a KVM
machine to another KVM machine, without using shared storage. The
documentation is rather terse and not very intuitive, so I'm hoping
someone can explain this to me. The man pages show this syntax:
migrate optional --live --p2p --direct --tunnelled --persistent
--undefinesource --suspend --copy-storage-all --copy-storage-inc
domain-id desturi migrateuri dname
The 'live' and 'suspend' options are clear. The 'undefinesource' option
is straightforward as well.
But what does 'persistent' mean? I mean, when transferring a VM to a
destination, it will be available on the destination when migration
completes, so what does 'persistent' mean in this context?
The p2p, direct and tunnelled options are not so clear. When migrating,
I assume it is possible to let the underlying virtualisation framework
handle the migration - so I assume 'direct' means Qemu-KVM migrates the
VM itself and 'tunnelled' means the migration data goes over libvirt's
RPC mechanism.
But what does 'p2p' mean? Normally, peer-to-peer implies direct
communication, but since there is a 'direct' mode, I'm clueless what
this option does.
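For reference, the plain and peer-to-peer forms would presumably be
invoked along these lines (a sketch only; 'demo' and 'desthost' are
placeholders):
$ virsh migrate --live demo qemu+ssh://desthost/system
$ virsh migrate --live --p2p --tunnelled demo qemu+ssh://desthost/system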
On a side note: is it possible to use tunnelled mode to transfer VMs
from, for example, Xen to KVM?
Then I have some questions about the non-shared storage migration. I
really like this option as most of my virtualized servers are run on one
or two physical systems without shared VM storage. Migrating those VMs
without downtime would be awesome.
If I migrate the VMs, I assume the storage of the VM is placed in the
libvirt default storage location. But what is the difference between
'copy-storage-all' and 'copy-storage-inc'? 'Incremental' hints that it
would only transfer incremental changes rather than the complete
storage. But how is that possible if the storage was not shared in the
first place?
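For reference, I would expect a non-shared-storage live migration to be
invoked roughly like this (a sketch; 'demo' and 'desthost' are
placeholders):
$ virsh migrate --live --persistent --undefinesource --copy-storage-all \
      demo qemu+ssh://desthost/system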
Lots of questions but I hope someone who has done this before can answer
some of these things.
Regards,
Berend Dekens
[libvirt-users] CfP 6th Workshop on Virtualization in High-Performance Cloud Computing (VHPC'11)
by VHPC2011
Apologies if you received multiple copies of this message.
=================================================================
CALL FOR PAPERS
6th Workshop on
Virtualization in High-Performance Cloud Computing
VHPC'11
as part of Euro-Par 2011, Bordeaux, France
=================================================================
Date: August 30, 2011
Euro-Par 2011: http://europar2011.bordeaux.inria.fr/
Workshop URL: http://vhpc.org
SUBMISSION DEADLINE:
Abstracts: May 2, 2011
Full Paper: June 13, 2011
Scope:
Virtualization has become a common abstraction layer in modern data
centers, enabling resource owners to manage complex infrastructure
independently of their applications. At the same time, virtualization
is becoming a driving technology for a wide range of industry-grade IT
services. The cloud concept includes the notion of a separation
between resource owners and users, adding services such as hosted
application frameworks and queuing. Utilizing the same infrastructure,
clouds carry significant potential for use in high-performance
scientific computing. The ability of clouds to provide and release
vast computing resources on demand, at close to the marginal cost of
providing the service, is unprecedented in the history of scientific
and commercial computing.
Distributed computing concepts that leverage federated resource access
are popular within the grid community, but have not yet seen the level
of deployment that was hoped for. Also, many of the scientific
datacenters have not adopted virtualization or cloud concepts yet.
This workshop aims to bring together industrial providers with the
scientific community in order to foster discussion, collaboration and
mutual exchange of knowledge and experience.
The workshop will be one day in length, composed of 20 min paper
presentations, each followed by 10 min discussion sections.
Presentations may be accompanied by interactive demonstrations. It
concludes with a 30 min panel discussion by presenters.
TOPICS
Topics include, but are not limited to, the following subjects:
- Virtualization in cloud, cluster and grid environments
- VM-based cloud performance modeling
- Workload characterizations for VM-based environments
- Software as a Service (SaaS)
- Cloud reliability, fault-tolerance, and security
- Cloud, cluster and grid filesystems
- QoS and service levels
- Cross-layer VM optimizations
- Virtualized I/O and storage
- Virtualization and HPC architectures including NUMA
- System and process/bytecode VM convergence
- Paravirtualized driver development
- Research and education use cases
- VM cloud, cluster distribution algorithms
- MPI on virtual machines and clouds
- Cloud frameworks and API sets
- Checkpointing of large compute jobs
- Cloud load balancing
- Accelerator virtualization
- Instrumentation interfaces and languages
- Hardware support for virtualization
- High-performance network virtualization
- Auto-tuning of VMM and VM parameters
- High-speed interconnects
- Hypervisor extensions and tools for cluster and grid computing
- VMMs/Hypervisors
- Cloud use cases including optimizations
- Performance modeling
- Fault tolerant VM environments
- VMM performance tuning on various load types
- Cloud provisioning
- Virtual machine monitor platforms
- Pass-through VM device access
- Management, deployment of VM-based environments
PAPER SUBMISSION
Papers submitted to the workshop will be reviewed by at least two
members of the program committee and external reviewers. Submissions
should include an abstract, keywords, the e-mail address of the
corresponding author, and must not exceed 10 pages, including tables
and figures at a main font size no smaller than 11 point. Submission
of a paper should be regarded as a commitment that, should the paper
be accepted, at least one of the authors will register and attend the
conference to present the work.
Accepted papers will be published in the Springer LNCS series - the
format must be according to the Springer LNCS Style. Initial
submissions are in PDF; authors of accepted papers will be asked to
provide source files.
Format Guidelines: http://www.springer.de/comp/lncs/authors.html
Submission Link: http://edas.info/newPaper.php?c=10155
CHAIR
Michael Alexander (chair), IBM, Austria
Gianluigi Zanetti (co-chair), CRS4, Italy
PROGRAM COMMITTEE
Paolo Anedda, CRS4, Italy
Volker Buege, University of Karlsruhe, Germany
Giovanni Busonera, CRS4, Italy
Roberto Canonico, University of Napoli, Italy
Tommaso Cucinotta, Scuola Superiore Sant'Anna, Italy
William Gardner, University of Guelph, Canada
Werner Fischer, Thomas-Krenn AG, Germany
Wolfgang Gentzsch, Max Planck Gesellschaft, Germany
Marcus Hardt, Forschungszentrum Karlsruhe, Germany
Sverre Jarp, CERN, Switzerland
Shantenu Jha, Louisiana State University, USA
Xuxian Jiang, NC State, USA
Kenji Kaneda, Google, USA
Simone Leo, CRS4, Italy
Ignacio Llorente, Universidad Complutense de Madrid, Spain
Naoya Maruyama, Tokyo Institute of Technology, Japan
Jean-Marc Menaud, Ecole des Mines de Nantes, France
Anastassios Nanos, National Technical University of Athens, Greece
Jose Renato Santos, HP Labs, USA
Deepak Singh, Amazon Webservices, USA
Borja Sotomayor, University of Chicago, USA
Yoshio Turner, HP Labs, USA
Kurt Tutschku, University of Vienna, Austria
Lizhe Wang, Indiana University, USA
Chao-Tung Yang, Tunghai University, China
DURATION: Workshop Duration is one day.
GENERAL INFORMATION
The workshop will be held as part of Euro-Par 2011,
organized by INRIA, CNRS and the Universities of Bordeaux I and II, France.
Euro-Par 2011: http://europar2011.bordeaux.inria.fr/
[libvirt-users] monitoring cpu usage via cgroup
by Zvi Dubitzky
thanks Kame,
1) The policy of my process (test) is indeed SCHED_NORMAL.
When I use sched_setscheduler() to set the pid's scheduling policy to
SCHED_FIFO, the cpu usage is affected by the cgroup rt_runtime_us and
rt_period_us settings. Still, the ratio I specified is 1:10, but I see
via top that my process, which gets 1 cpu, shows 20% usage.
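For reference, the same can also be done from the shell without calling
sched_setscheduler() - a minimal sketch; the realtime priority of 10 is
arbitrary:
$ cgexec -g cpu,cpuset:group1 chrt -f 10 ./test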
2) What about the memory and cpuset subsystems of cgroup? Do their
settings also apply only to processes with policy
SCHED_FIFO/SCHED_RR?
3) Lastly:
What is the meaning of the 'cap' parameter in the virsh schedinfo
command (what are the units and range of values), and what is
the 'weight' parameter? I did not find any libvirt documentation
about it.
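For reference, these parameters can at least be inspected and adjusted
with virsh - a sketch, assuming the domain is called domu1 and the Xen
credit scheduler is in use:
$ virsh schedinfo domu1
$ virsh schedinfo domu1 --set cap=50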
thanks
Zvi Dubitzky
Email:dubi@il.ibm.com
[libvirt-users] monitoring cpu usage via cgroup
by Zvi Dubitzky
Hi
I was asking whether the Fedora 14 kernel is good enough for cgroup
usage, because I am trying to set up a cgroup under the cpu subsystem
(/dev/cgroup/cpu/group1/) that has a cpu.rt_runtime_us of 100000 while
cpu.rt_period_us has a value of 1000000, i.e. a ratio of 1/10. Still,
when I run a task (an endless loop) in that group
(cgexec -g cpu,cpuset:group1 ./test), it gets all the cpu core time
that is assigned to it (I watch via the top utility), so it seems
that the quota set via the group does not take effect, although I
restarted the cgconfig service.
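For clarity, this is the configuration described above, spelled out
(the paths assume the /dev/cgroup mount point used here):
echo 1000000 > /dev/cgroup/cpu/group1/cpu.rt_period_us
echo 100000 > /dev/cgroup/cpu/group1/cpu.rt_runtime_us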
I also verified that libcgroup is installed with: rpm -q libcgroup
Am I missing something, or is top not the right utility to watch the
cpu usage in this case?
thanks
Zvi Dubitzky
Email:dubi@il.ibm.com
[libvirt-users] Xen disk device detach fails as non-root [libvirt-0.8.7 and older versions]
by Iain MacDonnell
Hi All,
I find that I am able to attach a disk device to a Xen domain, using
virDomainAttachDevice(), running as a non-root user, but I am unable
to use virDomainDetachDevice() - it results in an "unknown failure".
Using "virsh [attach|detach]-device" exhibits this behaviour:
$ virsh attach-device domu1 attach.xml
Device attached successfully
$ virsh detach-device domu1 attach.xml
error: Failed to detach device from attach.xml
error: Unknown failure
$
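For what it's worth, an attach.xml of this kind would typically be a
small disk definition along these lines (purely illustrative; the
device path and target name are placeholders):
<disk type='block' device='disk'>
  <driver name='phy'/>
  <source dev='/dev/vg0/testdisk'/>
  <target dev='xvdb' bus='xen'/>
</disk>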
With some digging, I determined that the problem arises when libvirt
tries to translate the device name to a number, using the XenStore API
(xenStoreDomainGetDiskID()), which requires use of the "xenstored"
UNIX socket, and that socket is only accessible by root. On making
that socket accessible to the user (by group), virDomainDetachDevice()
starts working, but I'm then unable to list domains, because
xenStoreDoListDomains() waits to verify each domain using
xenHypervisorHasDomain(), and that requires access to another socket -
"/proc/xen/privcmd"
My question, before going down the path of trying to hack permissions
for these sockets permanently... is this how it's supposed to be,
or could, perhaps, libvirtd, which runs as root, access these sockets
on behalf of the user? It seems it should at least fail more
gracefully....
TIA for any pointers....
~iain
[libvirt-users] connecting to virtualbox with libvirt
by Gary Scarborough
I am trying to connect to the virtualbox hypervisor through libvirt. If I
am doing it correctly,
virsh -c vbox:///session
should get me connected. I am on F14 64 bit. I tried this with both vbox
3.2 and 4.0.2. I keep getting the following error:
error: no connection driver available for vbox:///session.
What am I missing?
Thanks,
Gary
[libvirt-users] cgroup
by Zvi Dubitzky
Hi
Should libvirtd be in a cgroup hierarchy rooted at /dev/cgroup, or can
it also be rooted at /cgroup?
I.e. can /etc/cgconfig.conf look like this (without the /dev):
mount {
    cpuset  = /cgroup/cpuset;
    cpu     = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
    devices = /cgroup/devices;
    freezer = /cgroup/freezer;
    net_cls = /cgroup/net_cls;
    ns      = /cgroup/ns;
    blkio   = /cgroup/blkio;
}
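For what it's worth, where the hierarchies actually end up mounted can
be checked with something like:
$ grep cgroup /proc/mounts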
thanks
Zvi Dubitzky
Email:dubi@il.ibm.com
[libvirt-users] Using NFS to read image file
by Marcela Castro León
Hello:
When I start the guest with an image on an NFS file system, the Ubuntu
guest doesn't boot; it remains in the initramfs.
I've tried to redefine the guest without the
<apic>
<pae>
configuration, as I saw on a blog, and I've increased the mount
parameters to rsize=32768 and wsize=32768, but it still doesn't work.
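For reference, a mount with those options would look roughly like this
(the server name and paths are placeholders):
$ sudo mount -t nfs -o rsize=32768,wsize=32768 nfsserver:/export/images /var/lib/libvirt/images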
Can anyone give me some advice on how to get this working?
Thank you very much.
Marcela