[libvirt-users] Regarding persistence of VM's after live migration (virDomainMigrateToURI() problem)
by Coding Geek
Hello
I am working with three host machines, each running Xen, with shared NFS storage.
I am working on automatic load balancing: if one host is over-utilized and
another is under-utilized (measured via xentop), VMs are migrated between them.
I am facing a problem after migrating a VM. I set the flags (1 | 8 | 16)
in order to do a live migration, persist the VM on the destination, and
undefine the domain on the source. After migration, if I shut off the migrated
VM on the destination host, it does not persist. I am using the migrateToURI()
API for the migration.
Please help me make the VM persist on the destination.
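As a sanity check on the flag arithmetic above: in libvirt's virDomainMigrateFlags enum, 1, 8, and 16 are VIR_MIGRATE_LIVE, VIR_MIGRATE_PERSIST_DEST, and VIR_MIGRATE_UNDEFINE_SOURCE respectively, so 1 | 8 | 16 is the intended combination. A minimal sketch (the connection URI and domain name are placeholders, not from the mail):

```python
# Flag values from libvirt's virDomainMigrateFlags enum (libvirt.h):
VIR_MIGRATE_LIVE = 1              # live migration, minimal downtime
VIR_MIGRATE_PERSIST_DEST = 8      # define the domain persistently on the destination
VIR_MIGRATE_UNDEFINE_SOURCE = 16  # remove the persistent config from the source

flags = VIR_MIGRATE_LIVE | VIR_MIGRATE_PERSIST_DEST | VIR_MIGRATE_UNDEFINE_SOURCE
print(flags)  # 25, i.e. the (1 | 8 | 16) value described above

# With the Python binding, the call would look roughly like this
# (hypothetical host and domain names):
#   import libvirt
#   conn = libvirt.open("xen:///")
#   dom = conn.lookupByName("my-vm")
#   dom.migrateToURI("xenmigr://dest-host/", flags, None, 0)
```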
Thank you,
Tasvinder Singh
--
"To Iterate is Human, To Recurse is Divine"
12 years, 7 months
[libvirt-users] Snooze energy-efficient cloud manager is available as open-source!
by Eugen Feller
Dear all,
herewith we proudly announce the first public release of Snooze.
Snooze is an open-source, scalable, autonomic, and energy-efficient
virtual machine (VM) management framework for private clouds, based on
libvirt. Like other VM management frameworks such as Nimbus,
OpenNebula, Eucalyptus, and OpenStack, it allows users to build compute
infrastructures from virtualized resources. In particular, once the system
is installed and configured, users can submit and control the life-cycle of
a large number of VMs. However, in contrast to existing frameworks, Snooze
employs a self-organizing and self-healing hierarchical architecture for
scalability and fault tolerance. Moreover, it performs distributed VM
management and is designed to be energy efficient. To this end, it
implements features to monitor and estimate VM resource demands (CPU,
memory, network Rx, network Tx), to detect and resolve overload/underload
situations, to perform dynamic VM consolidation through live migration,
and finally to apply power management in order to save energy. Last but
not least, it integrates a generic scheduler which allows arbitrary VM
placement algorithms to be implemented. The system can either be used to
manage production data centers or serve as an experimental testbed for
advanced (i.e. requiring live migration support) VM placement algorithms.
Please see our website "http://snooze.inria.fr" for more information.
We look forward to your feedback and contributions!
Best regards,
Eugen
--
Eugen Feller
PhD student, University of Rennes I
INRIA Rennes Bretagne Atlantique - MYRIADS project
Tel: +33 (0) 2 99 84 72 68
Web: http://www.irisa.fr/myriads/members/efeller
12 years, 7 months
[libvirt-users] serial console locking for a domain (concurrent access = garbled output)
by Dusty Mabe
Hi,
I am seeing an issue in libvirt-0.9.4-23.el6_2.7.x86_64.rpm where my
virsh console output for a domain gets garbled because I have
virt-manager running and it is also trying to access the serial device
for the guest domain. Basically there is concurrent serial access, and
half the text ends up on one client while the rest ends up on the
other.
Closing virt-manager does not help. Disconnecting and reconnecting
using virsh console does not help. The only way to recover is to
destroy the domain and then start the guest again and never start
virt-manager. Note that the domain does not have VNC graphics, so
virt-manager defaults to accessing the serial port when it is started
for a particular domain.
Should there be locking within libvirt so that only a single client
can access a specific serial port of a domain at a time?
Thanks,
Dusty Mabe
12 years, 7 months
[libvirt-users] Console to RHEL6.1 container
by Sukadev Bhattiprolu
We are trying to create a RHEL6.1 container and are having trouble
getting a proper console to it.
I have used lxc containers in the past and lxc has a script (lxc-fedora)
to setup a basic Fedora container. Is there something similar or docs
with virtmgr ?
We started out making a copy of the root fs and using that copy as the
root for the container. I made a couple of tweaks to the container rootfs:
1. Ensured that /dev/ptmx is a symlink to pts/ptmx
2. Added the 'newinstance' option to the devpts mount in fstab
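For reference, the two rootfs tweaks above can be sketched as follows; this is a minimal illustration against a scratch directory standing in for the container root (the fstab mount options beyond 'newinstance' are typical defaults, not taken from the mail):

```python
import os
import tempfile

# Scratch directory standing in for the container rootfs (example path).
rootfs = tempfile.mkdtemp()
os.makedirs(os.path.join(rootfs, "dev/pts"))
os.makedirs(os.path.join(rootfs, "etc"))

# 1. Ensure /dev/ptmx is a symlink to pts/ptmx.
os.symlink("pts/ptmx", os.path.join(rootfs, "dev/ptmx"))

# 2. Add the 'newinstance' option to the devpts mount in fstab.
with open(os.path.join(rootfs, "etc/fstab"), "a") as f:
    f.write("devpts /dev/pts devpts "
            "gid=5,mode=620,newinstance,ptmxmode=0666 0 0\n")

print(os.readlink(os.path.join(rootfs, "dev/ptmx")))  # pts/ptmx
```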
The container xml file is attached.
If devpts entry in container's fstab has the 'newinstance' option, I get a
brief "domain test01 started" message on the stdout.
When I run 'virsh console test01', I only see a few messages:
---
Setting hostname node03: [ OK ]
Setting up Logical Volume Management: No volume groups found
[ OK ]
Checking filesystems
[ OK ]
mount: can't find / in /etc/fstab or /etc/mtab
Mounting local filesystems: [ OK ]
Enabling local filesystem quotas: [ OK ]
Enabling /etc/fstab swaps: [ OK ]
---
and then a hang without a shell. At this point sshd and the other
services don't appear to be running.
If I remove the 'newinstance' entry in fstab and run 'virsh start test01'
the container starts up ok. I get all "console" messages on whichever
is my current /dev/pts/0 !
At this point sshd and many other services are running and I can ssh in.
Even in this case, though, 'virsh console test01' shows the same _few_
messages as above and no login shell.
Back to the newinstance case, by putting some debug messages in the
container's /etc/rc.sysinit, I see that devpts is already mounted
with newinstance.
With lxc, in setup_pts(), we unmount devpts in the container and then
mount with newinstance. I wonder what that sequence is with virtmgr.
I double checked that /dev/ptmx in container is a symlink to pts/ptmx
and that CONFIG_DEVPTS_MULTIPLE_INSTANCES=y.
Appreciate any tips on how I can get access to the console of the RHEL6.1
container.
Suka
12 years, 7 months