[libvirt-users] issue with kvm host crashing
by David M. Barlieb
I have a Dell R910 running RHEL 6.2 with libvirt 0.8. This machine hosts 12 virtual guests. Every 3 - 5 months the server crashes for no apparent reason. The logs show no kernel panics or other issues that would explain the crash. The sar logs show a very high context switch rate (approx. 170,000) and a high run queue size (runq-sz of approx. 10 - 18). The CPUs were mostly idle, memory usage was low, there was no swapping, and disk I/O was very low as well. This also occurs on a number of other RHEL 6.2 servers I have that use KVM/libvirt/qemu for virtualization. I am curious whether anyone else has reported incidents like this. After the crash the servers all come back up, but as you can imagine, it is troubling to see this kind of behavior, especially with these machines hosting production guests.
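In case it helps anyone compare numbers, these are roughly the sar invocations the counters come from (a sketch; the daily file name is an assumption for the day of a crash):
# context switches per second (cswch/s)
sar -w -f /var/log/sa/sa15
# run queue length (runq-sz) and load averages
sar -q -f /var/log/sa/sa15
# CPU, memory, and paging for the same window, for comparison
sar -u -f /var/log/sa/sa15
sar -r -f /var/log/sa/sa15
sar -B -f /var/log/sa/sa15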
Any help or suggestions on what to look for would be helpful.
Regards
[libvirt-users] dumpxml and virsh edit VM XML do not match
by Timon Wang
I use VNC with a password for the VM graphics, and it works well.
When I dump the XML of the VM, I found that the dumped XML does not
contain the password property in the graphics node.
But when I use virsh edit to check the XML of the VM, the password
is displayed.
Is this a bug, or is it just how libvirt is designed to behave?
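If it helps anyone reproduce this, the behaviour looks like virsh's deliberate hiding of secrets: dumpxml omits them unless --security-info is passed (a quick sketch; the domain name vm1 is an assumption):
# password attribute is absent by default
virsh dumpxml vm1 | grep -i passwd
# password attribute appears when security info is requested
virsh dumpxml --security-info vm1 | grep -i passwd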
--
Focus on: Server Virtualization, Network security, Scanner, NodeJS, JAVA, WWW
Blog: http://www.nohouse.net
[libvirt-users] Reg: Libvirt API does not return after qemu-kvm is hung
by Alphonse Hansel Anthony
Hi All,
I am facing the following issue while using libvirt version 0.8.4.
(1) A libvirt API call does not return because the qemu-kvm process is hung.
Based on the information available in the mail thread
https://bugzilla.redhat.com/show_bug.cgi?id=676205 (comment 26 - scenario
1),
this has been fixed in a later version of libvirt, specifically in RHEL
6.2 (libvirt 0.9.4).
I have checked the changelogs between version 0.8.7 (RHEL 6.1) and 0.9.4 and
was not able to identify the related patch information.
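In case it helps others doing the same search, this is roughly how the history can be combed (a sketch; it assumes a clone of libvirt.git and that the releases are tagged v0.8.7 and v0.9.4):
# list commits between the two releases that mention the monitor
git log --oneline v0.8.7..v0.9.4 | grep -i monitor
# narrow to the QEMU driver, where the monitor handling lives
git log --oneline v0.8.7..v0.9.4 -- src/qemu | grep -i -e monitor -e job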
Please let me know if I am looking at the wrong version for this fix.
Any help in this regard, would be useful.
Thanks,
Alphonse
Comment reference:
--SNIP--
1. The QEMU process has hung.
QEMU won't respond to monitor commands. The API call making
the first monitor command will wait forever, any subsequent API calls
issuing monitor commands will timeout after ~30 seconds with this
libvirt error message.
This is expected behaviour when QEMU has hung.
--SNIP--
In RHEL-6.2 we have done a number of things to address / mitigate these problems
- It is now always possible to destroy a guest, even if the monitor
is stuck. This lets you destroy a guest in scenario 1, which is not
always possible with RHEL-5 libvirt, without restarting libvirtd.
--SNIP--
--EOF--
[libvirt-users] How to properly test watchdog?
by Russell Jones
CentOS 6
Hi all,
I am working on setting up sanlock + watchdog on a two-node KVM pair.
Sanlock is working beautifully across both boxes and is preventing access
to the VM disks by more than one process, as it should. I am attempting
to test failure scenarios involving the watchdog, but I am having a hard
time getting it to actually reset the server.
I am running wdmd with -D so I can see the register requests. When I
start sanlock with "service sanlock start" I can see it talk to the wdmd
process, as the debug output logs "register ............ sanlock_daemon". I then
send a "kill -9" to the sanlock daemon to try to simulate a crash, and
wdmd logs "client_pid_dead".
I would expect the watchdog to see that the PID is dead and, as a result,
start the timer and reboot the server. It does not do that, however. Am I
misunderstanding how wdmd reacts when a client dies? How can I properly test
wdmd?
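In case it is useful, here is a minimal sketch for verifying that the watchdog device itself fires, independent of wdmd (assumptions: wdmd is stopped so /dev/watchdog is free, a driver such as softdog is loaded, and magic close is in effect; note this WILL reset the machine after the timeout):
# stop wdmd so the device node is free (only one opener is allowed)
service wdmd stop
# load a software watchdog if no hardware one is present
modprobe softdog soft_margin=60
# arm the watchdog by writing something other than the magic 'V',
# then stop feeding it; the machine should reset after ~60 seconds
echo 1 > /dev/watchdog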
Thanks for the help!
[libvirt-users] Kernel unresponsive after booting 700+ VMs on a single host
by Alfred Bratterud
For a research project we are trying to boot a very large number of tiny, custom-built VMs on KVM/Ubuntu. The maximum VM count achieved was 1000, but with substantial slowness and eventually kernel failure, while the CPU/memory loads were nowhere near their limits. Where is the likely bottleneck? Any solutions, workarounds, hacks or dirty tricks?
A few more details here (tumbleweed question), and the possibility of an upvote:
http://stackoverflow.com/questions/12243129/kvm-qemu-maximum-vm-count-limit
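In case it prompts ideas, here is a sketch of host-side limits that are commonly hit at this scale (the values are illustrative assumptions, not tested recommendations):
# file descriptors: each VM holds many fds (disks, tap, vnc, monitor)
sysctl -w fs.file-max=1000000
ulimit -n 65536   # for the shell starting the VMs / for libvirtd
# ARP/neighbour table overflow once hundreds of tap devices exist
sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
sysctl -w net.ipv4.neigh.default.gc_thresh3=16384
# process/thread count: each qemu process runs multiple threads
sysctl -w kernel.pid_max=131072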
Any tips would be much appreciated!
Best regards,
Alfred Bratterud
Assistant Professor
Dept. of Computer Science,
Oslo and Akershus University College
of Applied Sciences
P: (+47) 2245 3263
M: (+47) 4102 0222
[libvirt-users] libvirt failed to respond after quite a few vnet interfaces were created
by Yih Chuang
During concurrent creation of 80 VMs, with each VM taking two network
connections, libvirtd failed to respond. Neither restarting libvirtd nor
restarting the CIMOM helped.
There were 152 vnet interfaces found on this RHEL KVM host. After all the
vnet interfaces were manually removed, libvirtd responded normally again.
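For reference, the manual cleanup was along these lines (a sketch; the interface name is an example):
# list leftover vnet tap devices
ip link show | grep -o 'vnet[0-9]*' | sort -u
# delete one stale device (repeat per device); older iproute
# releases may need "tunctl -d vnet42" instead
ip tuntap del dev vnet42 mode tap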
Is there a known issue with a limit on the number of vnet interfaces that
libvirtd can handle?
Thank you.
--Yih
[libvirt-users] boot device
by vivek hari
Hi,
I am trying to create a guest machine in a Xen virtualized environment and
install RHEL on it by booting from the CD-ROM.
I am facing an issue with the boot device. I gave the boot order as hd,cdrom.
On the first installation, it finds nothing on the hard disk and boots from
the CD. Once installation is completed, the next time it boots it should boot
from the hard disk. That is the expected behaviour.
But it fails to boot from the hard disk and again boots from the CD-ROM.
I also tried specifying the per-device boot option. I gave the disk image
boot order 1 and the CD image boot order 2. But when I power on for the
first time, it tries to boot from the hard disk, finds nothing there, and says
"No bootable device. Going to power off in 30 seconds". But the expected
behaviour is that it should boot from the CD after trying the hard disk.
Below is the domain configuration XML I used to create the guest domain.
<domain type='xen'>
  <name>vm-2</name>
  <memory>1048576</memory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <!-- <boot dev='hd'/>
         <boot dev='cdrom'/> -->
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/lib/xen/bin/qemu-dm</emulator>
    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/vm-2.img'/>
      <target dev='xvda' bus='ide'/>
      <boot order='1'/>
      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
    </disk>
    <disk type='file' device='cdrom'>
      <source file='/var/lib/libvirt/images/Rhel53.iso'/>
      <target dev='hdc' bus='ide'/>
      <boot order='2'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
    </interface>
    <graphics type='vnc' port='-1' autoport='yes'/>
  </devices>
</domain>
I don't know where I am going wrong, so I need help resolving this issue.
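One workaround to consider, in case per-device boot order is simply not honored by qemu-dm on Xen (an untested sketch): use only the os-level boot element, starting with the CD-ROM for the installation, then virsh edit the domain afterwards to boot from disk only.
  <!-- first boot: install from CD -->
  <os>
    <type arch='x86_64'>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <boot dev='cdrom'/>
  </os>
  <!-- after installation: change to -->
  <os>
    <type arch='x86_64'>hvm</type>
    <loader>/usr/lib/xen/boot/hvmloader</loader>
    <boot dev='hd'/>
  </os>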
Thanks,
[libvirt-users] Live Block Migration with additional attached storage
by Chun-Hung Chen
Dear all,
I am planning to use live block migration with a VM running on local disk
that also has an additional disk attached from iSCSI or other shared storage.
During block migration, not only is the local VM disk copied to the
destination, but also the additional disk attached from shared storage. That
is not desirable in this situation.
I want only the local VM disk to be copied. Is there any way to achieve this
scenario? Does the concept of storage pools help here? I have browsed the
source code but have not found any hints so far.
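One avenue worth checking (an unverified sketch based on how later libvirt versions behave, not on the 0.x code): disks marked read-only or shareable are skipped during storage copy, so tagging the shared disk might exclude it. The device path here is an example:
    <disk type='block' device='disk'>
      <source dev='/dev/mapper/iscsi-lun0'/>
      <target dev='vdb' bus='virtio'/>
      <shareable/>
    </disk>
and then migrate with virsh migrate --live --copy-storage-all vm1 qemu+ssh://destination/system (the domain name and destination URI are assumptions).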
Thanks.
Regards,
Arnose
[libvirt-users] Is the default -smp QEMU cmdline reasonable (qemuBuildSmpArgs)?
by sanjay
Hi! When only the vCPU count is specified, libvirt by default generates a
multi-socket topology (sockets=vCPUs,cores=1,threads=1) for the qemu command
line. Is there a specific reason behind the choice of this default
behavior? Would a default of (sockets=1,cores=vCPUs,threads=1) have
been a better choice?
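For reference, the topology can be overridden explicitly in the domain XML, which then shows up in the generated -smp argument (a sketch; 4 vCPUs assumed):
  <vcpu>4</vcpu>
  <cpu>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>
With that, the generated command line should contain -smp 4,sockets=1,cores=4,threads=1 instead of the default -smp 4,sockets=4,cores=1,threads=1.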
--
Regards,
Sanjay
[libvirt-users] defining directly to the WAN/LAN
by gary artim
I have a Fedora system configured and running with virsh using
KVM/qemu. All I want is a fixed IP address for that virtual
machine. I know that there are host and guest sides to the network.
The host has a virbr0 interface, which I guess is a bridge. It seems that
whenever I define an interface on the guest with type=routed, it hoses my
other interfaces on the host and requires a console reboot,
completely hanging the network.
I'm no networking expert, but there must be a simple N-step procedure for
defining a static IP address that routes to a guest machine. Has anyone got
this working and have a procedure?
My network has two NICs, eth0 and eth1 (10.0.0.253 and 10.0.1.253); one NIC
has a route to an NFS machine, the other passes through a Linux-based router
to the WAN. I can alias the 10.0.1.253 (eth0) to another address in the
subnet, like 10.0.1.251 (eth0:0), and would like to use it in some way to
connect to the guest; I have the NAT/routing working fine from the router to
eth0:0.
The networking side of libvirt is confusing. It is not obvious which side
(guest or host) you are defining, and libvirt seems to keep multiple
definitions around, even after a restart of libvirtd.service. Any help would
be great!
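For anyone in the same spot, the usual recipe is a shared bridge on the host plus a static IP inside the guest (a sketch for Fedora-style ifcfg files; device names and addresses are examples, not a verified procedure):
# /etc/sysconfig/network-scripts/ifcfg-br0 (host)
DEVICE=br0
TYPE=Bridge
BOOTPROTO=static
IPADDR=10.0.1.253
NETMASK=255.255.255.0
ONBOOT=yes
# /etc/sysconfig/network-scripts/ifcfg-eth0 (host; the IP moves to the bridge)
DEVICE=eth0
BRIDGE=br0
ONBOOT=yes
# guest interface in the domain XML:
#   <interface type='bridge'>
#     <source bridge='br0'/>
#   </interface>
# then configure the static address (e.g. 10.0.1.251) inside the guest as usual.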