[libvirt-users] Clone VM
by Boris Tobotras
Greets,
I've begun to experiment with the libvirt API and am looking for the most
correct way to clone an existing VM (a Xen server, if it matters).
Since the API lacks this functionality, how does one usually go about it?
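For what it's worth, the usual manual approach is to dump the source domain's XML (virDomainGetXMLDesc / dumpxml), rewrite the identity fields (name, UUID, MAC addresses, disk paths), copy the disk image, and define the result as a new domain. A minimal sketch of the XML-rewrite step, using only the Python standard library; the new name and disk path are placeholders, and a Xen domain may use <source dev=...> rather than <source file=...>:

```python
import xml.etree.ElementTree as ET

def clone_domain_xml(src_xml, new_name, new_disk_path):
    """Rewrite a dumped domain XML so it can be defined as a clone."""
    root = ET.fromstring(src_xml)
    root.find("name").text = new_name
    # Drop the UUID so libvirt generates a fresh one.
    uuid = root.find("uuid")
    if uuid is not None:
        root.remove(uuid)
    # Drop MAC addresses so libvirt generates fresh ones.
    for iface in root.findall(".//interface"):
        mac = iface.find("mac")
        if mac is not None:
            iface.remove(mac)
    # Point the (first) file-backed disk at the copied image.
    disk_src = root.find(".//disk/source")
    if disk_src is not None and disk_src.get("file") is not None:
        disk_src.set("file", new_disk_path)
    return ET.tostring(root, encoding="unicode")
```

After copying the disk image itself (cp/dd/LVM snapshot, whatever the storage allows), feeding the rewritten XML to defineXML() on the same connection creates the clone.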
Thanks in advance,
--
Best regards, -- Boris.
We have a lot of strategies, but always the same tactics.
12 years, 10 months
[libvirt-users] virt-install Error
by Nitin Nikam
Hi All,
I'm trying to install a VM on the host OS (Red Hat 6.1, KVM hypervisor).
I've installed all the required RPMs for KVM and libvirt tools.
Here is the list of libvirt RPMs and KVM modules:
kvm.ko
kvm-intel.ko
libvirt-0.8.7-18.el6.x86_64.rpm
libvirt-client-0.8.7-18.el6.x86_64.rpm
libvirt-devel-0.8.7-18.el6.x86_64.rpm
libvirt-java-0.4.7-1.el6.noarch.rpm
libvirt-java-devel-0.4.7-1.el6.noarch.rpm
libvirt-python-0.8.7-18.el6.x86_64.rpm
python-virtinst-0.500.5-3.el6.noarch.rpm
qemu-img-0.12.1.2-2.160.el6.x86_64.rpm
qemu-kvm-0.12.1.2-2.160.el6.x86_64.rpm
virt-manager-0.8.6-4.el6.noarch.rpm
virt-top-1.0.4-3.8.el6.x86_64.rpm
virt-viewer-0.2.1-3.el6.x86_64.rpm
virt-what-1.3-4.4.el6.x86_64.rpm
When I execute the virt-install command I see the error below:
[root@nsn /]# virt-install
ERROR Host does not support any virtualization options
I've already cross-checked the VMX setting in the BIOS, and it is enabled.
In the VMX feature MSR register, the VT bit is set.
Following is some info about the platform:
[root@nsn /]# uname -a
Linux rsp3-linux 2.6.32.27 #170 SMP PREEMPT Thu Jan 12 09:03:12 IST 2012
x86_64 GNU/Linux
[root@nsn /]# cat /proc/cpuinfo
processor : 3
vendor_id : GenuineIntel
cpu family : 6
model : 30
model name : Intel(R) Xeon(R) CPU C5528 @ 2.13GHz
stepping : 4
cpu MHz : 2127.723
cache size : 8192 KB
physical id : 0
siblings : 4
core id : 3
cpu cores : 4
apicid : 6
initial apicid : 6
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
rdtscp lm constant_tsc arch_perfmon pebs bts rep_good xtopology nonstop_tsc
aperfmperf pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca
sse4_1 sse4_2 popcnt lahf_lm ida tpr_shadow vnmi flexpriority ept vpid
bogomips : 4259.94
clflush size : 64
cache_alignment : 64
address sizes : 40 bits physical, 48 bits virtual
power management:
[root@nsn /]# lsmod | grep kvm
kvm_intel 36839 0 - Live 0xffffffffa016c000
kvm 149478 1 kvm_intel, Live 0xffffffffa0137000
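For reference, the pieces virt-install's capability check depends on can be sanity-checked directly: the CPU flags, and /dev/kvm being present and usable. A small sketch (the flag parsing is the portable part; the device check only means something on the KVM host itself):

```python
import os

def hw_virt_flags(cpuinfo_text):
    """Return the hardware-virtualization flags found in
    /proc/cpuinfo content: 'vmx' for Intel VT, 'svm' for AMD-V."""
    found = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            found.update(f for f in line.split() if f in ("vmx", "svm"))
    return found

if __name__ == "__main__" and os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as fh:
        print("virt flags:", hw_virt_flags(fh.read()) or "none")
    print("/dev/kvm exists:", os.path.exists("/dev/kvm"))
```

Since the flags and modules above look fine, my guess would be to check next whether /dev/kvm exists with sane permissions and what `virsh capabilities` reports: virt-install asks libvirt for capabilities, so a libvirtd that cannot see KVM produces this same error even on capable hardware.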
Has anyone encountered similar issues while using the libvirt tools?
Am I missing something w.r.t. the installation?
Appreciate your feedback.
Thanks,
kenden
12 years, 10 months
[libvirt-users] (no subject)
by Алексей Беляев
Hello!
I just installed the qemu 1.0 package.
Trying to start my VMs with libvirt (virt-manager) resulted in the following
error message:
internal error cannot parse /usr/local/bin/qemu-system-arm version
number in 'QEMU emulator version 1.0, Copyright (c) 2003-2008 Fabrice
Bellard'
After that I installed the qemu 0.15 package and didn't receive the error,
but I have problems with the virtio-balloon-pci device:
internal error process exited while connecting to monitor:
qemu-system-arm: -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x4: Bus 'pci.0' not
found
Does libvirt not support qemu 1.0, or does it merely fail to parse the emulator's version string?
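For what it's worth, the failure is in extracting the version number from that banner. Something along these lines (an illustration, not libvirt's actual code) has to match "QEMU emulator version 1.0, ...", and a parser written when versions always had three components (e.g. 0.15.1) can choke on a plain "1.0":

```python
import re

def parse_qemu_version(banner):
    """Extract (major, minor, micro) from a QEMU version banner.
    Tolerates a missing micro component, as in 'version 1.0'."""
    m = re.search(r"version (\d+)\.(\d+)(?:\.(\d+))?", banner)
    if not m:
        raise ValueError("cannot parse version from %r" % banner)
    major, minor, micro = m.groups()
    return int(major), int(minor), int(micro or 0)
```

So this looks like a version-string parsing limitation in the older libvirt rather than a qemu problem; a libvirt release that knows about qemu 1.0 would be the thing to try.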
--
Regards, Belyaev Alex
12 years, 10 months
[libvirt-users] Libvirt API for C/C++ : Monitoring Support
by pratik.patel@snstech.com
Greetings,
I am a developer trying to build a system for monitoring servers and virtual machines using the libvirt API for C/C++. I am facing an issue: there are no functions for monitoring some of the parameters listed below. I would like to know whether there is any other way I can retrieve these parameters, since they are critical to my monitoring application. Also, I am using CentOS 5.6 as my development machine, which only supports the libvirt-devel package up to version 0.8.2; because of this I am unable to use the newer development binaries (which are only built for newer Red Hat kernels) that support certain functions unavailable in the version I am using.
List of parameters:
For Server Monitoring:
1. Memory Swap-In
2. Memory Swap-Out
3. Disk Usage
4. Disk Read
5. Disk Write
6. Disk Total Read Latency
7. Disk Total Write Latency
8. Network Usage
9. Network Packets Received
10. Network Packets Sent
11. CPU Utilization
12. CPU Usage
For Virtual Machine Monitoring:
1. Memory Utilization
2. Memory Consumption
3. Memory Swap-In
4. Memory Swap-Out
5. Disk Usage
6. Disk Read
7. Disk Write
8. Disk Total Read Latency
9. Disk Total Write Latency
10. Network Usage
11. Network Packets Received
12. Network Packets Transmitted
13. CPU Utilization
14. CPU Usage
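For the per-VM side, even libvirt 0.8.2 exposes raw counters that several of these parameters can be derived from: virDomainGetInfo (cumulative CPU time, memory), virDomainBlockStats (read/write requests and bytes per disk), and virDomainInterfaceStats (bytes and packets per interface). Rates and utilization percentages have to be computed by sampling twice. A sketch of the CPU-utilization arithmetic (the pure function is testable anywhere; the device names in the comments are assumptions for a KVM guest):

```python
def cpu_percent(cpu_ns_t0, cpu_ns_t1, interval_s, nr_vcpus):
    """CPU utilization over a sampling interval.

    cpu_ns_t0 / cpu_ns_t1 are cumulative guest CPU times in
    nanoseconds, as returned in virDomainGetInfo (dom.info()[4]
    in the Python bindings).  The result is normalized by vCPU
    count, so 100.0 means all vCPUs fully busy.
    """
    used_ns = cpu_ns_t1 - cpu_ns_t0
    return 100.0 * used_ns / (interval_s * 1e9 * nr_vcpus)

# Sampling with the bindings would look like (not run here):
#   info = dom.info()    # [state, maxMem, memory, nrVirtCpu, cpuTime]
#   rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
#   stats = dom.interfaceStats("vnet0")  # rx/tx bytes, packets, errs, drops
```

The host-side ("server") counters, including swap-in/out and disk latency, are not exposed by these calls in 0.8.2; those generally have to be read from /proc on the host itself (e.g. /proc/vmstat, /proc/diskstats), and per-disk latency may need guest-side collection.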
Regards,
Pratik.
12 years, 10 months
[libvirt-users] LXC with RHEL 6.1
by John Paul Walters
I'm trying to start a RHEL 6.1 container on a RHEL 6.1 host through libvirt 0.9.3. I've removed most of the startup services from the guest container except the network and sshd. I can get the system to boot to a login prompt and can access the console through virsh; however, as soon as I authenticate, I get the following error message:
Cannot make/remove an entry for the specified session
init: tty (/dev/tty1) main process (372) terminated with status 1
init: tty (/dev/tty1) main process ended, respawning
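For what it's worth, that exact message is commonly produced by pam_loginuid: inside a container it cannot write /proc/self/loginuid, the PAM session fails, and init respawns the tty. If that is the cause here (a guess, assuming a stock RHEL image in the container), commenting the module out of the container's PAM stack lets login and sshd sessions proceed:

```
# In the container's /etc/pam.d/login and /etc/pam.d/sshd,
# comment out the pam_loginuid line:
# session    required     pam_loginuid.so
```

This would also explain the failed ssh sessions, since sshd runs the same module.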
After this, I'm again left at the login prompt. I'm also unable to establish an ssh session, which I assume is related to the issue I'm having with the virsh console. Can anyone offer any suggestions as to what's going on here? I'm using the following XML for my domain creation:
<domain type='lxc'>
  <name>RHEL6</name>
  <memory>512000</memory>
  <os>
    <type>exe</type>
    <init>/sbin/init</init>
  </os>
  <vcpu>1</vcpu>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/libexec/libvirt_lxc</emulator>
    <filesystem type='mount'>
      <source dir='/media'/>
      <target dir='/'/>
    </filesystem>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>
Any thoughts?
best,
JP
12 years, 10 months
[libvirt-users] When IPv6 is disabled, libvirtd cannot start on boot
by Lei Yang
Hi all,
When IPv6 is disabled in modprobe.conf through:
> options ipv6 disable=1
libvirtd cannot start on boot, it produces the following debug message:
> libvirtd: 1545: error : virCommandWait:2192 : internal error Child process (/bin/sh -c IPT=/usr/sbin/ip6tables
> cmd='$IPT -n -L FORWARD'
> eval res=\$\("${cmd} 2>&1"\)
> if [ $? -ne 0 ]; then echo "Failure to execute command '${cmd}' : '${res}'."; exit 1;fi
>
> libvirtd: 851: error : virNetSocketNewListenTCP:226 : Unable to create socket: Address family not supported by protocol
However, I can successfully start libvirtd later on without IPv6 support. I think IPv6 is disabled at an early stage during system boot, so libvirtd should not assume that IPv6 exists during service start. But I could be wrong; please correct me if so. Thanks.
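If the TCP listener is the part that aborts startup, one possible workaround (a sketch, assuming the TCP/TLS listener is enabled in your setup) is to pin libvirtd to an IPv4 listen address in /etc/libvirt/libvirtd.conf, so it never tries to open an AF_INET6 socket:

```
# /etc/libvirt/libvirtd.conf
listen_addr = "0.0.0.0"
```

This only addresses the socket error; the ip6tables invocation would still fail with IPv6 compiled out, so it is a partial mitigation at best.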
My OS: Linux 3.1.7-1-ARCH
/etc/rc.conf: DAEMONS=(hwclock dbus syslog-ng network netfs crond sshd libvirtd)
--
Lei Yang
12 years, 10 months
[libvirt-users] libvirt + qemu-system-arm
by Алексей Беляев
Greetings.
Is it possible to run virtual machines for arm architecture under libvirt?
I run my machines with qemu-system-arm, and I tried to write an XML file
for them, but I receive errors.
In the manual "How To: Running Fedora-ARM under QEMU" (
http://fedoraproject.org/wiki/Architectures/ARM/HowToQemu#Using_networkin...
)
I found an XML file for a VM under ARM (http://cdot.senecac.on.ca/arm/arm1.xml),
but `virsh define arm1.xml` prints an error:
" internal error os type 'hvm' & arch 'arm' combination is not supported "
--
Regards, Belyaev Alex
12 years, 10 months
[libvirt-users] Unable to close open libvirt connections
by Jatin Kumar
Hello,
I was getting the following error in syslog:
libvirtd: 21:19:12.116: 10955: error : qemudDispatchServer:1355 : Too many
active clients (20), dropping connection from 127.0.0.1;0
I investigated a bit and tried the following in a Python console:

import libvirt

conn = libvirt.openReadOnly("qemu+ssh://HOST_IP/system")
# now check the number of connections (lsof | grep ESTABLISHED on the host);
# the count increases by 1
conn.close()
# the count decreases by 1; close() returns 0

conn = libvirt.openReadOnly("qemu+ssh://HOST_IP/system")
# now check the number of connections; the count increases by 1
dom = conn.lookupByName("sowmya")
print dom.info()
conn.close()
# the count does not change; close() returns 1
This is causing me a lot of trouble, but the reason is unknown.
Could someone please point out if I am missing something?
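One explanation consistent with those return values: in the Python bindings, a virDomain object holds a reference on its connection, and close() returns the number of references still outstanding. If that is what is happening here, dropping the domain object before closing (a sketch, not verified against your exact libvirt version) should make close() return 0 and tear down the ssh session:

```
import libvirt

conn = libvirt.openReadOnly("qemu+ssh://HOST_IP/system")
dom = conn.lookupByName("sowmya")
print dom.info()
del dom          # release the domain's reference on the connection
print conn.close()   # expected to return 0 now, really closing the socket
```

That would also explain the "Too many active clients" errors: each leaked connection stays ESTABLISHED until the domain objects are garbage-collected.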
--
Jatin
12 years, 10 months
[libvirt-users] Haskell bindings for libvirt
by Michael Litchard
I'm exploring the possibility of developing Haskell bindings for libvirt. I've
never done this kind of project before. Could someone point me to the
documentation most relevant to this goal?
12 years, 10 months
[libvirt-users] Looking for feedback
by Zorg
Hello list!
I'm currently digging into libvirt-based virtualisation
infrastructure (KVM/Qemu Hypervisors so far).
I have a little concern here:
Let's say we have two hypervisors (I and II), running 3 VMs
each - I runs a,b,c and II runs d,e,f. Virtual disks
are stored on a centralized storage solution, let's say a good old
cluster of NFS filers. Network configuration is homogenous on the
two hypervisors -- same bridge names to the same ethernet segments.
Basically, a, b and c are defined on I - the XML description is
stored on I. So far, so good.
Then let's say I crashes badly.
The XML descriptions for a, b, and c are lost; the process of starting
these VMs on II would involve re-defining them on II from a backup
of the XML descriptions.
That's not quite an automated/fluent process, and the idea of doubling
hypervisors into a HA cluster just for that matter is clearly overkill
to me.
Are you, by chance, aware of an open-source libvirt-based solution that is
able to keep track of each VM's xmldesc, starting lost guests as transient
domains on another hypervisor upon failure of the original hypervisor?
12 years, 10 months