[libvirt-users] How to manually set the starting date/time for a KVM guest
by Matt Xtlm
Hi there,
Is there a way to set a guest to start with a specific date and time? (E.g.
when the guest boots, the starting guest time should always be
2017-02-05T12:00:00.)
I've tried something like this in my KVM domain:
<clock offset='variable' adjustment='-86400' basis='localtime'>
<timer name='rtc' tickpolicy='delay' track='guest'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
However, I can't insert a specific date and time statement (like starting
KVM with -rtc base="2017-02-05T12:00:00").
The date/time must be set before starting the VM, and it must always be the
same.
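A possible workaround, assuming the QEMU command-line passthrough namespace
is acceptable for this setup (an untested sketch, not a verified config), is
to hand -rtc base=... straight to QEMU from the domain XML:
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
...
<qemu:commandline>
<qemu:arg value='-rtc'/>
<qemu:arg value='base=2017-02-05T12:00:00'/>
</qemu:commandline>
</domain>
Note that this bypasses libvirt's own clock handling, so the <clock> element
above would probably need to be trimmed to avoid conflicting RTC options.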
Cheers,
Matt
[libvirt-users] Unsafe migration with copy-storage-all (non shared storage) and writeback cache
by Gionatan Danti
Hi list,
I would like to understand if, and why, the --unsafe flag is needed when
using --copy-storage-all to migrate guests which use writeback
cache mode.
Background: I want to live migrate guests with writeback cache from host
A to host B, and these hosts only have local storage (i.e. no shared
storage at all).
From my understanding, --unsafe should only be required when migrating
writeback-enabled guests between two hosts which share non-cluster-aware
storage (e.g. NFS), but it should not be necessary when no storage is shared.
However, when trying to migrate from a CentOS6 host to a CentOS7 machine
with the command
[root@source ~]# virsh migrate Totem --live --copy-storage-all
--persistent --verbose qemu+ssh://root@172.31.255.11/system
I have the following error:
error: Unsafe migration: Migration may lead to data corruption if disks
use cache != none
So, my questions are:
1) Is it safe to do a live migration with --copy-storage-all when guests
are using writeback cache mode?
2) If so, why does libvirt complain about it?
3) If not, what is the best method to migrate a running guest using
writeback cache mode? (See the sketch below.)
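For reference, the cache mode libvirt checks here is the one on each disk's
<driver> element, and the error text itself says the check is "cache != none";
a sketch of a disk switched to cache='none' before migration (device paths
are placeholders):
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/vg_example/lv_guest'/>
<target dev='vda' bus='virtio'/>
</disk>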
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] "virsh list" ahngs
by Yunchih Chen
`virsh list` hangs on my server that hosts a bunch of VMs.
This might be due to the Debian upgrade I did on Feb 15, which upgraded
`libvirt` from 2.4.0-1 to 3.0.0-2.
I have tried restarting libvirtd a few times, without luck.
Attached below are some relevant logs; let me know if you need more
for debugging.
Thanks for your help!!
root@vm-host:~# uname -a
Linux vm-host 4.6.0-1-amd64 #1 SMP Debian 4.6.4-1 (2016-07-18) x86_64
GNU/Linux
root@vm-host:~# apt-cache policy libvirt-daemon
libvirt-daemon:
Installed: 3.0.0-2
Candidate: 3.0.0-2
Version table:
*** 3.0.0-2 500
500 http://debian.csie.ntu.edu.tw/debian testing/main amd64
Packages
100 /var/lib/dpkg/status
root@vm-host:~# strace -o /tmp/trace -e trace=network,file,poll virsh
list # hangs forever .....
^C
root@vm-host:~# tail -10 /tmp/trace
access("/etc/libvirt/libvirt.conf", F_OK) = 0
open("/etc/libvirt/libvirt.conf", O_RDONLY) = 5
access("/proc/vz", F_OK) = -1 ENOENT (No such file or
directory)
socket(AF_UNIX, SOCK_STREAM, 0) = 5
connect(5, {sa_family=AF_UNIX,
sun_path="/var/run/libvirt/libvirt-sock"}, 110) = 0
getsockname(5, {sa_family=AF_UNIX}, [128->2]) = 0
poll([{fd=5, events=POLLOUT}, {fd=6, events=POLLIN}], 2, -1) = 1
([{fd=5, revents=POLLOUT}])
poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}], 2, -1) = ?
ERESTART_RESTARTBLOCK (Interrupted by signal)
--- SIGINT {si_signo=SIGINT, si_code=SI_KERNEL} ---
+++ killed by SIGINT +++
root@vm-host:~# lsof /var/run/libvirt/libvirt-sock # hangs too ...
^C
root@vm-host:~# LIBVIRT_DEBUG=1 virsh list
2017-02-17 15:58:36.126+0000: 18505: info : libvirt version: 3.0.0,
package: 2 (Guido Günther <agx(a)sigxcpu.org> Wed, 25 Jan 2017 07:04:08
+0100)
2017-02-17 15:58:36.126+0000: 18505: info : hostname: vm-host
2017-02-17 15:58:36.126+0000: 18505: debug : virGlobalInit:386 :
register drivers
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca2c40 name=Test
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering Test as driver 0
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca4ac0 name=OPENVZ
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering OPENVZ as driver 1
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca5260 name=VMWARE
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering VMWARE as driver 2
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:684 : driver=0x7f1e5aca3720 name=remote
2017-02-17 15:58:36.127+0000: 18505: debug :
virRegisterConnectDriver:695 : registering remote as driver 3
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventRegisterDefaultImpl:267 : registering default event implementation
2017-02-17 15:58:36.127+0000: 18505: debug : virEventPollAddHandle:115 :
Used 0 handle slots, adding at least 10 more
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:722 : Skip interrupt, 0 0
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollAddHandle:140 :
EVENT_POLL_ADD_HANDLE: watch=1 fd=3 events=1 cb=0x7f1e5a7fc140
opaque=(nil) ff=(nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virEventRegisterImpl:234 :
addHandle=0x7f1e5a7fc860 updateHandle=0x7f1e5a7fcb90
removeHandle=0x7f1e5a7fc1a0 addTimeout=0x7f1e5a7fc310
updateTimeout=0x7f1e5a7fc510 removeTimeout=0x7f1e5a7fc6e0
2017-02-17 15:58:36.127+0000: 18505: debug : virEventPollAddTimeout:230
: Used 0 timeout slots, adding at least 10 more
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:722 : Skip interrupt, 0 0
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollAddTimeout:253 :
EVENT_POLL_ADD_TIMEOUT: timer=1 frequency=-1 cb=0x563a29758360
opaque=0x7fff70941380 ff=(nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenAuth:1245 :
name=<null>, auth=0x7f1e5aca2a00, flags=0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f5f50 classname=virConnect
2017-02-17 15:58:36.127+0000: 18505: debug : virConfLoadConfig:1604 :
Loading config file '/etc/libvirt/libvirt.conf'
2017-02-17 15:58:36.127+0000: 18505: debug : virConfReadFile:778 :
filename=/etc/libvirt/libvirt.conf
2017-02-17 15:58:36.127+0000: 18506: debug : virThreadJobSet:99 : Thread
18506 is now running job vshEventLoop
2017-02-17 15:58:36.127+0000: 18506: debug : virEventRunDefaultImpl:311
: running default event implementation
2017-02-17 15:58:36.127+0000: 18505: debug : virFileClose:108 : Closed fd 5
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=0 w=1, f=3 e=1 d=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:338 : Calculate expiry of 1 timers
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:371 : No timeout is pending
2017-02-17 15:58:36.127+0000: 18506: info : virEventPollRunOnce:640 :
EVENT_POLL_RUN: nhandles=1 timeout=-1
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfAddEntry:241 : Add
entry (null) (nil)
2017-02-17 15:58:36.127+0000: 18505: debug : virConfGetValueString:932 :
Get value string (nil) 0
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1040
: no name, allowing driver auto-select
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 0 (Test) ...
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1098
: driver 0 Test returned DECLINED
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 1 (OPENVZ) ...
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1098
: driver 1 OPENVZ returned DECLINED
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 2 (VMWARE) ...
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1098
: driver 2 VMWARE returned DECLINED
2017-02-17 15:58:36.127+0000: 18505: debug : virConnectOpenInternal:1083
: trying driver 3 (remote) ...
2017-02-17 15:58:36.127+0000: 18505: debug : remoteConnectOpen:1343 :
Auto-probe remote URI
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:907 :
proceeding with name =
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:916 :
Connecting with transport 1
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:1051 :
Proceeding with sockname /var/run/libvirt/libvirt-sock
2017-02-17 15:58:36.127+0000: 18505: debug :
virNetSocketNewConnectUNIX:639 : path=/var/run/libvirt/libvirt-sock
spawnDaemon=0 binary=<null>
2017-02-17 15:58:36.127+0000: 18505: debug :
virNetSocketNewConnectUNIX:703 : connect() succeeded
2017-02-17 15:58:36.127+0000: 18505: debug : virNetSocketNew:235 :
localAddr=0x7fff70940d00 remoteAddr=0x7fff70940d90 fd=5 errfd=-1 pid=0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7980 classname=virNetSocket
2017-02-17 15:58:36.127+0000: 18505: info : virNetSocketNew:291 :
RPC_SOCKET_NEW: sock=0x563a2a7f7980 fd=5 errfd=-1 pid=0
localAddr=127.0.0.1;0, remoteAddr=127.0.0.1;0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7d80 classname=virNetClient
2017-02-17 15:58:36.127+0000: 18505: info : virNetClientNew:328 :
RPC_CLIENT_NEW: client=0x563a2a7f7d80 sock=0x563a2a7f7980
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7d80
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7980
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:726 : Interrupting
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollAddHandle:140 :
EVENT_POLL_ADD_HANDLE: watch=2 fd=5 events=1 cb=0x7f1e5a96cd10
opaque=0x563a2a7f7980 ff=0x7f1e5a96ccc0
2017-02-17 15:58:36.127+0000: 18505: debug : virKeepAliveNew:199 :
client=0x563a2a7f7d80, interval=-1, count=0
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f8080 classname=virKeepAlive
2017-02-17 15:58:36.127+0000: 18505: info : virKeepAliveNew:218 :
RPC_KEEPALIVE_NEW: ka=0x563a2a7f8080 client=0x563a2a7f7d80
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7d80
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f6740 classname=virConnectCloseCallbackData
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollRunOnce:650 :
Poll got 1 event(s)
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchTimeouts:432 : Dispatch 1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f6740
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:478 : Dispatch 1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7fa0 classname=virNetClientProgram
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7b60 classname=virNetClientProgram
2017-02-17 15:58:36.127+0000: 18505: info : virObjectNew:202 :
OBJECT_NEW: obj=0x563a2a7f7910 classname=virNetClientProgram
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:492 : i=0 w=1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7fa0
2017-02-17 15:58:36.127+0000: 18506: info :
virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=1
events=1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7b60
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18505: info : virObjectRef:296 :
OBJECT_REF: obj=0x563a2a7f7910
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18505: debug : doRemoteOpen:1170 : Trying
authentication
2017-02-17 15:58:36.127+0000: 18506: debug : virEventRunDefaultImpl:311
: running default event implementation
2017-02-17 15:58:36.127+0000: 18505: debug : virNetMessageNew:46 :
msg=0x563a2a7fa470 tracked=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=0 w=1, f=3 e=1 d=0
2017-02-17 15:58:36.127+0000: 18505: debug :
virNetMessageEncodePayload:386 : Encode length as 28
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=1 w=2, f=5 e=1 d=0
2017-02-17 15:58:36.127+0000: 18505: info :
virNetClientSendInternal:2104 : RPC_CLIENT_MSG_TX_QUEUE:
client=0x563a2a7f7d80 len=28 prog=536903814 vers=1 proc=66 type=0
status=0 serial=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:338 : Calculate expiry of 1 timers
2017-02-17 15:58:36.127+0000: 18505: debug : virNetClientCallNew:2057 :
New call 0x563a2a7f7340: msg=0x563a2a7fa470, expectReply=1, nonBlock=0
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCalculateTimeout:371 : No timeout is pending
2017-02-17 15:58:36.127+0000: 18505: debug : virNetClientIO:1866 :
Outgoing message prog=536903814 version=1 serial=0 proc=66 type=0
length=28 dispatch=(nil)
2017-02-17 15:58:36.127+0000: 18506: info : virEventPollRunOnce:640 :
EVENT_POLL_RUN: nhandles=2 timeout=-1
2017-02-17 15:58:36.127+0000: 18505: debug : virNetClientIO:1925 : We
have the buck head=0x563a2a7f7340 call=0x563a2a7f7340
2017-02-17 15:58:36.127+0000: 18505: info : virEventPollUpdateHandle:152
: EVENT_POLL_UPDATE_HANDLE: watch=2 events=0
2017-02-17 15:58:36.127+0000: 18505: debug :
virEventPollInterruptLocked:726 : Interrupting
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollRunOnce:650 :
Poll got 1 event(s)
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchTimeouts:432 : Dispatch 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:478 : Dispatch 2
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollDispatchHandles:492 : i=0 w=1
2017-02-17 15:58:36.127+0000: 18506: info :
virEventPollDispatchHandles:506 : EVENT_POLL_DISPATCH_HANDLE: watch=1
events=1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18506: debug : virEventRunDefaultImpl:311
: running default event implementation
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupTimeouts:525 : Cleanup 1
2017-02-17 15:58:36.127+0000: 18506: debug :
virEventPollCleanupHandles:574 : Cleanup 2
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=0 w=1, f=3 e=1 d=0
2017-02-17 15:58:36.127+0000: 18506: debug : virEventPollMakePollFDs:401
: Prepare n=1 w=2, f=5 e=0 d=0
2017-02-17 15:58:36.128+0000: 18506: debug :
virEventPollCalculateTimeout:338 : Calculate expiry of 1 timers
2017-02-17 15:58:36.128+0000: 18506: debug :
virEventPollCalculateTimeout:371 : No timeout is pending
2017-02-17 15:58:36.128+0000: 18506: info : virEventPollRunOnce:640 :
EVENT_POLL_RUN: nhandles=1 timeout=-1
^C
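For reference, when libvirtd stops responding like this, a common next step
(assuming gdb and the libvirt debug symbols are installed on the host) is to
dump the daemon's thread backtraces to see where it is stuck:
# run on the host; debug symbol package names vary by distribution
gdb -p $(pidof libvirtd) -batch -ex 'thread apply all bt' > /tmp/libvirtd-bt.txt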
--
Yun-Chih Chen 陳耘志
Network/Workstation Assistant
Dept. of Computer Science and Information Engineering
National Taiwan University
Tel: +886-2-33664888 ext. 217/204
Email: ta217(a)csie.ntu.edu.tw
Website: http://wslab.csie.ntu.edu.tw/
[libvirt-users] Libvirt behavior when mixing io=native and cache=writeback
by Gionatan Danti
Hi all,
I am writing about libvirt's inconsistent behavior when mixing io=native and
cache=writeback. This post can be regarded as an extension, or
clarification request, of BZ 1086704
(https://bugzilla.redhat.com/show_bug.cgi?id=1086704).
On a fully upgraded CentOS6 x86-64 machine, starting a guest with
io=native and cache=writeback is permitted: no errors are raised and the
VM (qemu, really) silently uses "io=threads" instead. A warning should be
raised, but I find it reasonable that the VM is permitted to run.
On a fully upgraded CentOS7 x86-64 machine, starting a guest with
io=native and cache=writeback is *not* permitted: an error is raised and
the guest is not started. While it is absolutely OK to alert the user
about the configuration error, I feel the VM should be started as on
CentOS6. Anyway, this is not a big problem when dealing with a single host.
A more concerning behavior is that when trying to migrate a guest from
the very first CentOS6 machine, which starts and runs such a guest with
io=native and cache=writeback without problems, to a CentOS7 host, live
migration aborts with an error:
[root@source] virsh migrate Totem --live --copy-storage-all --persistent
--verbose --unsafe qemu+ssh://root@172.31.255.11/system
error: unsupported configuration: native I/O needs either no disk cache
or directsync cache mode, QEMU will fallback to aio=threads
This error persists even if the VM config file is changed to use
"io=threads". Of course, the running instance continues to show "io=native"
in its parameter list, but it does not really honor the "io=native"
directive, so I find it strange that live migration cannot proceed. I
think it is very reasonable to raise an error, but such an error should not
block the live migration process.
In short: is there some method to live migrate a running VM configured
with "io=native" (but which is really using "io=threads" internally) and
"cache=writeback"?
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] Why Guest does not retain Host CPU flags
by akhilesh rawat
Hi,
I am creating a guest with the libvirt tool "virt-install", with the CPU
model set to host.
After guest creation I see that the guest is missing quite a number of flags
which the host has.
What could be the reason for this? I expect all flags of the host to be
present in the guest as well when choosing the CPU model as host.
Guest:
processor : 19
vendor_id : GenuineIntel
cpu family : 6
model : 42
model name : Intel Xeon E312xx (Sandy Bridge)
stepping : 1
microcode : 0x1
cpu MHz : 1995.266
cache size : 4096 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm
constant_tsc rep_good nopl eagerfpu pni pclmulqdq vmx ssse3 cx16 pcid
sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx hypervisor
lahf_lm xsaveopt tpr_shadow vnmi flexpriority ept vpid
bogomips : 4013.27
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
Host:
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
stepping : 7
microcode : 0x710
cpu MHz : 1400.000
cache size : 15360 KB
physical id : 1
siblings : 12
core id : 5
cpu cores : 6
apicid : 43
initial apicid : 43
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid xsaveopt
bogomips : 4004.01
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
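For comparison, the reduced flag set is what host-model gives: libvirt maps
the host CPU to the closest named model (here SandyBridge) plus a feature
list, and flags it cannot or will not virtualize are dropped; host-passthrough
hands the host CPUID through almost unchanged, at the cost of migratability.
The corresponding domain XML fragments (pick one) are:
<cpu mode='host-model'/>
<cpu mode='host-passthrough'/>
With virt-install, --cpu host-passthrough should request the latter, though
the exact option spelling depends on the virt-install version.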
br aki
[libvirt-users] VM running slowly on powerful host
by Lentes, Bernd
Hi,
I have a VM with poor performance.
E.g. top needs seconds to refresh its output on the console; same with netstat.
The guest hosts a MySQL DB with a web frontend, and its response is poor too.
I'm looking for the culprit.
Following top in the guest I get these hints:
Memory is free enough, the system is not swapping.
The system has 8 GB RAM and two CPUs.
CPU 0 is struggling with a lot of software interrupts, between 50% and 80%.
CPU 1 is often waiting for IO (wa), between 0% and 20%.
No application is consuming much CPU time.
Here is an example:
top - 11:19:18 up 18:19, 11 users, load average: 1.44, 0.94, 0.66
Tasks: 95 total, 1 running, 94 sleeping, 0 stopped, 0 zombie
Cpu0 : 0.0%us, 0.0%sy, 0.0%ni, 20.0%id, 0.0%wa, 0.0%hi, 80.0%si, 0.0%st
Cpu1 : 1.9%us, 13.8%sy, 0.0%ni, 73.8%id, 10.5%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 7995216k total, 6385176k used, 1610040k free, 177772k buffers
Swap: 2104472k total, 0k used, 2104472k free, 5940884k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6470 root 16 0 12844 1464 804 S 12 0.0 2:17.13 screen
6022 root 15 0 41032 3052 2340 S 3 0.0 1:10.99 sshd
8322 root 0 -20 10460 4976 2268 S 3 0.1 19:20.38 atop
10806 root 16 0 5540 1216 880 R 0 0.0 0:00.51 top
126 root 15 0 0 0 0 S 0 0.0 0:23.33 pdflush
3531 postgres 15 0 68616 1600 792 S 0 0.0 0:41.24 postmaster
The host in which the guest runs has 96GB RAM and 8 cores.
It does not seem to do much:
top - 11:21:19 up 15 days, 15:53, 14 users, load average: 1.40, 1.39, 1.40
Tasks: 221 total, 2 running, 219 sleeping, 0 stopped, 0 zombie
Cpu0 : 15.9%us, 2.7%sy, 0.0%ni, 81.4%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu1 : 5.0%us, 3.0%sy, 0.0%ni, 92.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu2 : 2.0%us, 0.3%sy, 0.0%ni, 97.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu3 : 0.3%us, 1.0%sy, 0.0%ni, 98.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu4 : 1.3%us, 0.3%sy, 0.0%ni, 98.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu5 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu6 : 0.3%us, 0.0%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Cpu7 : 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 96738M total, 13466M used, 83272M free, 3M buffers
Swap: 2046M total, 0M used, 2046M free, 3887M cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
21765 root 20 0 105m 15m 4244 S 5 0.0 0:00.15 crm
3180 root 20 0 8572m 8.0g 8392 S 3 8.4 62:25.73 qemu-kvm
8529 hacluste 10 -10 90820 14m 9400 S 0 0.0 29:52.48 cib
21329 root 20 0 9040 1364 940 R 0 0.0 0:00.16 top
28439 root 20 0 0 0 0 S 0 0.0 0:04.51 kworker/4:2
1 root 20 0 10560 828 692 S 0 0.0 0:07.67 init
2 root 20 0 0 0 0 S 0 0.0 0:00.28 kthreadd
3 root 20 0 0 0 0 S 0 0.0 3:03.23 ksoftirqd/0
6 root RT 0 0 0 0 S 0 0.0 0:05.02 migration/0
7 root RT 0 0 0 0 S 0 0.0 0:02.82 watchdog/0
8 root RT 0 0 0 0 S 0 0.0 0:05.18 migration/1
I think the host is not the problem.
The VM resides on a SAN which is attached via FC. The whole system is a two-node cluster.
The VM resides in a raw partition without a FS, which I read should be good for performance.
It runs slowly on the other node too. Inside the VM I have logical volumes
(it was a physical system I migrated to a VM). The partitions are formatted with reiserfs
(the system is already some years old; at that time reiserfs was popular ...).
I use iostat in the guest; this is a typical snapshot:
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
vda 0.00 3.05 0.00 2.05 0.00 20.40 19.90 0.09 44.59 31.22 6.40
dm-0 0.00 0.00 0.00 4.55 0.00 18.20 8.00 0.24 52.31 7.74 3.52
dm-1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-2 0.00 0.00 0.00 0.10 0.00 0.40 8.00 0.01 92.00 56.00 0.56
dm-3 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-4 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-5 0.00 0.00 0.00 0.35 0.00 1.40 8.00 0.03 90.29 65.71 2.30
dm-6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
vda has several partitions, one for /, one for swap, and two physical volumes for LVM.
Following "man iostat", the columns await and svctm seem to be important. Man says:
await
The average time (in milliseconds) for I/O requests issued to the device to be served. This includes the time spent by the requests in queue and the time spent servicing them.
svctm
The average service time (in milliseconds) for I/O requests that were issued to the device.
It seems the system is waiting a long time for IO, although the amount of transferred data is small.
I have some suspicions:
- the LVM setup in the guest
- some hardware
- the cache mode for the disk is "none" (otherwise I can't do a live migration)
What do you think? How can I find out where the high si comes from?
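As a side note (a generic suggestion, not specific to this setup), one way to
narrow down the softirq load, if the guest kernel provides /proc/softirqs, is
to watch which softirq class is growing:
# run inside the guest; the counter that climbs fastest (e.g. NET_RX, BLOCK,
# TIMER) points at the subsystem generating the softirqs
watch -n1 -d cat /proc/softirqs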
Network and disk are virtio devices (which should be fast):
vm58820-4:~ # lsmod|grep -i virt
virtio_balloon 22788 0
virtio_net 30464 0
virtio_pci 27264 0
virtio_ring 21376 1 virtio_pci
virtio_blk 25224 5
virtio 22916 4 virtio_balloon,virtio_net,virtio_pci,virtio_blk
That's the config of the guest:
<domain type='kvm'>
<name>mausdb_vm</name>
<uuid>f08c2f32-fe35-137a-0e9d-fa7485d57974</uuid>
<memory unit='KiB'>8198144</memory>
<currentMemory unit='KiB'>8197376</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
<boot dev='cdrom'/>
<bootmenu enable='yes'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/vg_cluster_01/lv_cluster_01'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</disk>
<controller type='usb' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'/>
<controller type='ide' index='0'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:37:92:01'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='-1' autoport='yes' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
<model type='cirrus' vram='9216' heads='1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</memballoon>
</devices>
<seclabel type='none'/>
</domain>
The host OS is SLES 11 SP4, the guest OS is SLES 10 SP4. Both 64-bit.
Thanks for any hint.
Bernd
--
Bernd Lentes
Systemadministration
institute of developmental genetics
Gebäude 35.34 - Raum 208
HelmholtzZentrum München
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 (0)89 3187 1241
fax: +49 (0)89 3187 2294
Erst wenn man sich auf etwas festlegt kann man Unrecht haben
Scott Adams
Helmholtz Zentrum Muenchen
Deutsches Forschungszentrum fuer Gesundheit und Umwelt (GmbH)
Ingolstaedter Landstr. 1
85764 Neuherberg
www.helmholtz-muenchen.de
Aufsichtsratsvorsitzende: MinDir'in Baerbel Brumme-Bothe
Geschaeftsfuehrer: Prof. Dr. Guenther Wess, Heinrich Bassler, Dr. Alfons Enhsen
Registergericht: Amtsgericht Muenchen HRB 6466
USt-IdNr: DE 129521671
[libvirt-users] provisioning with vagrant-libvirt leaves .img file only readable by root
by Hans-Christoph Steiner
I'm using libvirt on Debian/stretch (testing) with vagrant and the
vagrant-libvirt plugin. When I import a vagrant box (jessie64.box), the
resulting file permissions let anyone in the "kvm" group read the img.
But when I build upon that box, only root can read it:
-rw------- 1 root root 20198785024 Sep 19 18:19
buildserver_default.img
-rwxr--r-- 1 libvirt-qemu kvm 2148663296 Sep 5 22:55
jessie64_vagrant_box_image_0.img
How can I control those file permissions as a regular user in the
libvirtd group? I need to read that image in order to use qemu-img to
rebase and create a new vagrant box. The current `vagrant package`
command only works with VirtualBox VMs, but it's easy to make a .box if
you have read access to the libvirt QEMU .img file.
/etc/libvirt/storage/default.xml says:
<target>
<path>/var/lib/libvirt/images</path>
<permissions>
<mode>0755</mode>
</permissions>
</target>
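One possible workaround (my assumption, not something from the pool
definition above): grant the user read access explicitly, or stop libvirt
from chowning images back to root via qemu.conf. For example:
# grant a single user read access to the existing image (run as root;
# 'youruser' is a placeholder)
setfacl -m u:youruser:r /var/lib/libvirt/images/buildserver_default.img
# or, in /etc/libvirt/qemu.conf, disable dynamic ownership changes
# (affects every guest on the host, so use with care)
dynamic_ownership = 0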
You can find the context for this work here:
https://gitlab.com/fdroid/fdroidserver/issues/238
.hc
--
PGP fingerprint: EE66 20C7 136B 0D2C 456C 0A4D E9E2 8DEA 00AA 5556
https://pgp.mit.edu/pks/lookup?op=vindex&search=0xE9E28DEA00AA5556
[libvirt-users] high memory guest issues - virsh start and QEMU_JOB_WAIT_TIME
by Blair Bethwaite
Hi all,
In IRC last night Dan helpfully confirmed my analysis of an issue we are
seeing when attempting to launch high-memory KVM guests backed by hugepages...
In this case the guests have 240 GB of memory allocated from two host NUMA
nodes to two guest NUMA nodes. The trouble is that allocating the
hugepage-backed qemu process seems to take longer than the 30s
QEMU_JOB_WAIT_TIME, and so libvirt then most unhelpfully kills the barely
spawned guest. Dan said there was currently no workaround available, so I'm
now looking at building a custom libvirt which sets QEMU_JOB_WAIT_TIME=60s.
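For illustration, the change being considered is a one-line bump of that
constant; in libvirt sources of this vintage it should live in
src/qemu/qemu_domain.h (file location and exact definition are my
assumption), roughly:
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
 /* how long (in ms) to wait for another job on the domain before giving up */
-#define QEMU_JOB_WAIT_TIME (1000ull * 30)
+#define QEMU_JOB_WAIT_TIME (1000ull * 60)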
I have two related questions:
1) Will this change have any untoward side-effects?
2) If not, is there any reason not to change it in master until a
better solution comes along (or, possibly better, alter
qemuDomainObjBeginJobInternal to give a domain start job a little longer
compared to other jobs)?
--
Cheers,
~Blairo
[libvirt-users] Error in libvirt-GUI
by abhishek jain
Hi,
I am using libvirt 0.10.2 and virt-manager 0.9.0 (RHEL 6).
Very rarely I get the following error message dialogue box (an image of the
dialogue box is attached).
The error says: Error polling connection 'qemu+ssh......': internal error: client socket is closed
Traceback (most recent calls):
engine.py: 440 conn.tick
connection.py: 1433 self.hostinfo() = self.vmm.getinfo()
Is it a problem with libvirt or with my environment?
Thanks in advance.
Regards,
Abhishek
[libvirt-users] Hyper-V support in Windows client
by masaeedu@gmail.com
Hi there,
I’m trying to connect to Hyper-V on Windows using libvirt. It seems neither the libvirt binaries bundled with Virt Viewer nor the libvirt binary in the mingw64-libvirt Rawhide RPM has support for Hyper-V. Is it likely that just downloading the source RPM and trying to build --with-hyperv would just work?
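For what it's worth, a rough sketch of what the rebuild involves (my
assumption, not verified on mingw): the Hyper-V driver needs the openwsman
(libwsman) client library at configure time, so it is only built when that
dependency is found, e.g.:
./configure --with-hyperv
make
Once a Hyper-V-capable client is built, the connection uses the hyperv://
URI scheme (the host name below is a placeholder):
virsh -c hyperv://administrator@hyperv-host.example.com/ list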
Thanks,
Asad