[libvirt-users] gluster store and autostart - but fails
by lejeczek
hi all
I have a few guests which work/run perfectly fine, I believe,
except for autostart.
Configuration of system, gluster and libvirt is pretty
regular and not complex.
Errors I see:
...
failed to initialize gluster connection (src=0x7f9424266350
priv=0x7f94242922b0): Transport endpoint is
internal error: Failed to autostart VM 'rhel-work2': failed
to initialize gluster connection (src=0x7f9
failed to initialize gluster connection (src=0x7f942423fef0
priv=0x7f9424256320): Transport endpoint is
internal error: Failed to autostart VM 'rhel-work3': failed
to initialize gluster connection (src=0x7f9
failed to initialize gluster connection (src=0x7f9424261b20
priv=0x7f94242a18b0): Transport endpoint is
internal error: Failed to autostart VM 'rhel-work1': failed
to initialize gluster connection (src=0x7f9
...
I tried to make systemd's libvirtd wait for gluster:
After=glusterd.service
but if that's all that's required then, well, it still fails.
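For reference, a systemd drop-in override is one way to express such an ordering; a minimal sketch, where the ExecStartPre wait loop and the volume name "gv0" are assumptions for illustration, not taken from the actual setup:

```ini
# /etc/systemd/system/libvirtd.service.d/wait-gluster.conf
# Hypothetical drop-in: order libvirtd after glusterd, and also wait
# until the (assumed) volume "gv0" actually responds before starting,
# since glusterd being up does not mean the bricks are ready yet.
[Unit]
After=glusterd.service
Wants=glusterd.service

[Service]
ExecStartPre=/bin/sh -c 'for i in $(seq 1 30); do gluster volume status gv0 >/dev/null 2>&1 && exit 0; sleep 1; done; exit 1'
```

The point of the ExecStartPre loop is that After=glusterd.service only orders against the glusterd process starting, not against the volume becoming reachable, which may be why After= alone still fails.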
Would you have any suggestions?
Many thanks,
L.
7 years, 10 months
[libvirt-users] Controlling the name of the 'tap0' device, in a bridged networking setup
by Govert
Hi,
I'm trying to control the name of the 'tap0' device that gets created as I
start a domain that uses bridged networking. The XML specification of the
domain contains the following configuration
<interface type='bridge'>
<source bridge='br0'/>
</interface>
The libvirt documentation (
http://libvirt.org/formatdomain.html#elementsNICSBridge) and other
discussions online tell me that I just need to include the <target
dev='desired_dev_name'/> tag in the XML specification of the domain under
the <interface> tag. Unfortunately doing so appears to have no effect; the
tun device created and 'enslaved' in the bridge is still called 'tap0'.
Interestingly, I never get a tun device with a name prefixed by 'vnet' or
'vif' which, according to the documentation, is the default behaviour (?).
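For reference, the configuration being described would look like the snippet below (the name 'mytap0' is a placeholder). One documented gotcha worth checking: libvirt treats certain prefixes such as 'vnet' as reserved and ignores a requested target name that starts with one of them, so the chosen name should avoid those prefixes.

```xml
<interface type='bridge'>
  <source bridge='br0'/>
  <!-- requested tap device name; 'mytap0' is a placeholder -->
  <target dev='mytap0'/>
</interface>
```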
The host is running CentOS 7, and virsh is used to start the domain.
Best,
Govert
7 years, 10 months
[libvirt-users] io=native & io=threads
by W Kern
Googling provides lots of interesting info on the use of these in
various situations, such as SSDs, the number of VMs in the pool, etc.
What is the default in libvirt (or is the default 'neither')?
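For reference, the setting in question is the io attribute of the disk <driver> element in the domain XML; a sketch (the disk path and types are placeholders):

```xml
<disk type='file' device='disk'>
  <!-- io='native' or io='threads'; when the attribute is omitted,
       libvirt passes nothing and the hypervisor's own default applies -->
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
```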
Sincerely
W Kern
7 years, 10 months
[libvirt-users] Regarding Migration Statistics
by Anubhav Guleria
Greetings,
I am writing code using the libvirt API to migrate a VM between two physical
hosts (QEMU/KVM), say some n number of times.
1) Right now I am using virDomainPtr virDomainMigrate(.......), and to
calculate the total migration time I am using something like this:
clock_gettime(CLOCK_MONOTONIC_RAW, &begin);
migrate(domainToMigrate, nodeToMigrate);
clock_gettime(CLOCK_MONOTONIC_RAW, &end);
Total Migration Time = end.tv_sec - begin.tv_sec
Is this the correct way to calculate total migration time? And is there some
way to calculate the downtime (not how to set it)?
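One thing worth noting about the timing above: subtracting only tv_sec discards the nanosecond fields, so sub-second precision is lost. The same measurement pattern, with fractional seconds kept, sketched in Python (the migrate call is a placeholder for the real libvirt call):

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) using a monotonic clock."""
    begin = time.monotonic()            # monotonic: immune to wall-clock jumps
    result = fn(*args, **kwargs)
    elapsed = time.monotonic() - begin  # fractional seconds, not just whole tv_sec
    return result, elapsed

# usage, with a placeholder for the real migration call:
#   _, total_migration_time = timed_call(migrate, domain_to_migrate, node_to_migrate)
```

In the C version the equivalent fix is (end.tv_sec - begin.tv_sec) + (end.tv_nsec - begin.tv_nsec) / 1e9.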
2) I am interested in identifying in particular other statistics of
migration, like: number of iterations in pre-copy, memory transferred
in each iteration, etc.
I was going through the API and found the virDomainJobInfo
<http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainJobInfo> and
virDomainGetJobStats
<http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainGetJobStats>
functions, but how to use them is not very clear. Can anyone point me to
the right place to achieve this objective?
Thanks in advance.
And sorry if that was too silly to ask.
Anubhav
7 years, 10 months
[libvirt-users] changing to cache='none' on the fly?
by W Kern
As is already well documented, we find that we need cache='none' to
support migration, otherwise there is the chance of a hang and/or
failure to pivot.
However we prefer the default of cache=writethrough when operating in
production.
Our practice is to 'shutdown' the VM completely, make the change with
virsh edit, then restart. Then we have to repeat the process to revert
back once we migrate.
Is it possible to change that function on the fly and avoid the
shutdown/start process?
Note: we have had inconsistent results with virsh edit first and then a
reboot. A complete shutdown seems to be necessary, but I am hoping there
is some other procedure available.
Our images are all qcow2 (1.1)
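This doesn't avoid the shutdown, but the edit step itself can be scripted rather than done interactively; a sketch that rewrites the cache attribute in a dumped domain XML (names are illustrative, and the result would still be applied via virsh define around a full shutdown/start as described above):

```python
import xml.etree.ElementTree as ET

def set_disk_cache(domain_xml, cache_mode):
    """Return domain XML with every disk <driver> cache attribute set to cache_mode."""
    root = ET.fromstring(domain_xml)
    for driver in root.findall("./devices/disk/driver"):
        driver.set("cache", cache_mode)
    return ET.tostring(root, encoding="unicode")

# illustrative workflow around it:
#   virsh dumpxml VM > dom.xml ; rewrite ; virsh define dom.xml ; shutdown/start
```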
Sincerely,
W Kern
PixelGate Networks
7 years, 10 months
[libvirt-users] Re: [libvirt] xml config nested
by 放牛班的春天
Thank you for your mail. Now, regarding the following:
If the KVM supports nested VMX and QEMU starts with the arguments -enable-kvm and
-cpu ..., + vmx, then the LOCK bit of the guest MSR_IA32_FEATURE_CONTROL and
Enable VMX out of SMX operation bit will be set.
Well, where in the XML document should I add the -enable-kvm parameter? Exactly speaking, how should I amend the XML configuration file?
------------------ Original Message ------------------
From: "Daniel P. Berrange" <berrange(a)redhat.com>;
Sent: Thursday, January 5, 2017, 8:53 PM
To: "放牛班的春天" <446844717(a)qq.com>;
Cc: "libvirt-users" <libvirt-users(a)redhat.com>;
Subject: Re: [libvirt] xml config nested
NB, in future please don't CC all possible mailing lists at once.
Just pick the most appropriate mailing list for your question. I've
dropped libvirt-list & libvirt-announce from the CC, since this is
a question most suited for libvirt-users.
On Thu, Jan 05, 2017 at 11:44:29AM +0800, 放牛班的春天 wrote:
> How should libvirt be configured so that qemu-kvm supports nested virtualization? The operating system installed in the virtual machine is windows_7_ultimate_sp1_x64_dvd_618537.iso, and the libvirt XML file is configured as follows:
>
> <cpu mode='custom' match='exact'>
> <model fallback='allow'>core2duo</model>
> <feature policy='require' name='vmx'/>
> </cpu>
>
>
> or
>
> <cpu mode='host-model'>
> <model fallback='allow'/>
> </cpu>
>
> or
> <cpu mode='host-passthrough'>
> <topology sockets='2' cores='2' threads='2'/>
> </cpu>
Yes, that's the key guest configuration step - exposing the 'vmx' feature
to the guest.
In addition to that though, you need to make sure the kvm-intel kernel
module in your host has the "nested=1" parameter set.
eg in /etc/modprobe.d/kvm.conf you want
options kvm_intel nested=1
If loaded correctly you should see
# cat /sys/module/kvm_intel/parameters/nested
Y
if it says "N", then nested VMX will be disabled.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|
7 years, 10 months
[libvirt-users] xml config nested
by 放牛班的春天
How should libvirt be configured so that qemu-kvm supports nested virtualization? The operating system installed in the virtual machine is windows_7_ultimate_sp1_x64_dvd_618537.iso, and the libvirt XML file is configured as follows:
<cpu mode='custom' match='exact'>
<model fallback='allow'>core2duo</model>
<feature policy='require' name='vmx'/>
</cpu>
or
<cpu mode='host-model'>
<model fallback='allow'/>
</cpu>
or
<cpu mode='host-passthrough'>
<topology sockets='2' cores='2' threads='2'/>
</cpu>
We install libvirt in centos 7, qemu-kvm version is:
[root@localhost libexec]# ./qemu-kvm --version
QEMU emulator version 2.6.0 (qemu-kvm-ev-2.6.0-27.1.el7), Copyright (c) 2003-2008 Fabrice Bellard
The above is the basic environment and libvirt configuration. But in this environment, a Windows 7 virtual machine installed inside it makes the following kernel interface calls:
if (!FeatrueControlMsr.fields.enable_vmxon)
{
MyWriteFile(FileHanle, "Virtualization is not enabled in the BIOS\n", strlen("Virtualization is not enabled in the BIOS\n"), &ReturnLen);
MyCloseFile(FileHanle);
KdPrint(("Virtualization is not enabled in the BIOS"));
return FALSE;
}
In conclusion:
The result determined through this kernel interface is: the BIOS settings do not enable virtualization.
How can this problem be solved? I hope to get your help.
Thank you very much.
7 years, 10 months
[libvirt-users] Regarding Migration Code
by Anubhav Guleria
Greetings,
I was trying to understand the flow of Migration Code in libvirt and
have few doubts:
1) libvirt talks to QEMU/KVM guests via the QEMU API. So overall, in
order to manage QEMU/KVM guests I can use either libvirt (or tools
based on libvirt, like virsh) or the QEMU monitor. Is that so?
2) Since libvirt is hypervisor neutral, the actual migration
algorithm (pre-copy or post-copy) is present in the hypervisor, i.e. in
the case of QEMU it should be present in the QEMU code base. I was going
through the code starting from the libvirt-domain API and was not able to
follow after virDomainMigrateVersion3Full(.....). Kindly help.
Thanks in advance. And sorry if that was too basic.
Anubhav
7 years, 10 months
[libvirt-users] libvirtError: block copy still active: disk not ready for pivot yet
by Ala Hino
Hi guys,
When performing live merge, in few cases, we see the following exception:
Traceback (most recent call last):
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 736, in wrapper
return f(*a, **kw)
File "/usr/share/vdsm/virt/vm.py", line 5278, in run
self.tryPivot()
File "/usr/share/vdsm/virt/vm.py", line 5247, in tryPivot
ret = self.vm._dom.blockJobAbort(self.drive.name, flags)
File "/usr/share/vdsm/virt/virdomain.py", line 68, in f
ret = attr(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/libvirtconnection.py",
line 124, in wrapper
ret = f(*args, **kwargs)
File "/usr/lib/python2.7/site-packages/vdsm/utils.py", line 1313, in wrapper
return func(inst, *args, **kwargs)
File "/usr/lib64/python2.7/site-packages/libvirt.py", line 733, in
blockJobAbort
if ret == -1: raise libvirtError ('virDomainBlockJobAbort()
failed', dom=self)
libvirtError: block copy still active: disk 'vdb' not ready for pivot yet
That exception was observed in the following BZs:
https://bugzilla.redhat.com/1376580
https://bugzilla.redhat.com/1397122
I am trying to understand what this exception indicates in order to handle
it appropriately when thrown by libvirt.
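For what it's worth, one common handling pattern for "not ready for pivot yet" is to treat the error as transient and retry the abort/pivot after a short delay; a hedged sketch, where the retry policy and names are illustrative and not taken from vdsm:

```python
import time

def retry_transient(fn, is_transient, attempts=10, delay=1.0):
    """Call fn(), retrying up to `attempts` times while is_transient(exc) is true."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            # give up on the last attempt or on a non-transient error
            if attempt == attempts - 1 or not is_transient(exc):
                raise
            time.sleep(delay)

# illustrative usage against libvirt:
#   retry_transient(lambda: dom.blockJobAbort(drive, flags),
#                   lambda e: 'not ready for pivot' in str(e))
```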
Thanks,
Ala
7 years, 10 months