[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
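This is the sort of invocation I expected to work, based on the storage
docs (a sketch; the volume name is just an example, and the ellipsis
stands for the usual install options):

# virsh vol-create-as myrbdpool kvm01-storage 20G --format raw
# virt-install --name ceph-test.powercraft.nl ... --disk vol=myrbdpool/kvm01-storage,bus=virtio

but I had no luck so far; maybe it needs newer virsh/virt-install
versions than the ones I have?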
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests got duplicate MAC
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each other
2) All the host machines had fairly similar libvirtd PIDs (within ~100 PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
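For reference, this is roughly how I checked how clustered the effective
seeds would be across hosts (a sketch; the host names are placeholders,
and ^ is XOR in shell arithmetic, mirroring libvirt's time(NULL) ^ getpid()):

# for h in host01 host02 host03; do ssh "$h" 'echo $(( $(date +%s) ^ $(pgrep -o libvirtd) ))'; done

After a simultaneous reboot the printed values differ only in their low
bits, which shows how easily two hosts can end up with the same seed.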
[libvirt-users] virt-p2v - Windows 10 guest hangs at boot after successful P2V
by JT Edwards
Hi all,
I successfully virt-p2v'ed a Windows 10 laptop to my CentOS 7.3 instance
running KVM. However, on boot, the guest hangs. Is there a registry fix
that is needed after the P2V is done? Here is what is in the guest's
logfile:
2017-01-08 03:20:37.508+0000: starting up libvirt version: 2.0.0, package:
10.el7_3.2 (CentOS BuildSystem <http://bugs.centos.org>,
2016-12-06-19:53:38, c1bm.rdu2.centos.org), qemu version: 1.5.3
(qemu-kvm-1.5.3-126.el7), hostname: torden40.me.org
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name win10 -S -machine
pc-i440fx-rhel7.0.0,accel=kvm,usb=off -cpu
Conroe,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff -m 4096 -realtime
mlock=off -smp 2,sockets=2,cores=1,threads=1 -uuid
f72e7ad5-98d4-44ab-aa85-347fe232b4e5 -no-user-config -nodefaults -chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-9-win10/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=localtime,driftfix=slew -global kvm-pit.lost_tick_policy=discard
-no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global
PIIX4_PM.disable_s4=1 -boot strict=on -device
ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7 -device
ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/home/tstrike39/Virtuals/tordenmobile.img,format=qcow2,if=none,id=drive-ide0-0-0
-device
ide-hd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-netdev tap,fd=26,id=hostnet0 -device
rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:d6:c8:67,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
spicevmc,id=charchannel0,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
-device usb-tablet,id=input0,bus=usb.0,port=1 -spice
port=5900,addr=127.0.0.1,disable-ticketing,image-compression=off,seamless-migration=on
-vga qxl -global qxl-vga.ram_size=67108864 -global
qxl-vga.vram_size=67108864 -global qxl-vga.vgamem_mb=16 -device
intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -chardev
spicevmc,id=charredir0,name=usbredir -device
usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 -chardev
spicevmc,id=charredir1,name=usbredir -device
usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
char device redirected to /dev/pts/2 (label charserial0)
main_channel_link: add main channel client
main_channel_handle_parsed: net test: latency 0.140000 ms, bitrate
14222222222 bps (13563.368055 Mbps)
((null):19947): Spice-Warning **:
red_channel.c:542:red_channel_client_send_ping: getsockopt failed,
Operation not supported
red_dispatcher_set_cursor_peer:
inputs_connect: inputs channel client create
((null):19947): Spice-Warning **:
red_channel.c:542:red_channel_client_send_ping: getsockopt failed,
Operation not supported
((null):19947): Spice-Warning **:
red_channel.c:542:red_channel_client_send_ping: getsockopt failed,
Operation not supported
Below is the XML of my migrated instance:
<?xml version='1.0' encoding='utf-8'?>
<domain type='kvm'>
  <!-- generated by virt-v2v 1.32.7rhel=7,release=3.el7.centos,libvirt -->
  <name>localhost</name>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <vcpu>2</vcpu>
  <os>
    <type arch='x86_64'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
  </features>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source file='/home/tstrike39/Virtuals/localhost-sda'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <mac address='f0:de:f1:08:9c:c4'/>
    </interface>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
    </video>
    <graphics type='vnc' autoport='yes' port='-1'/>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <console type='pty'/>
  </devices>
</domain>
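One thing I am wondering about, in case this is the usual
missing-virtio-driver hang: the generated XML puts the disk on the
virtio bus, so would temporarily switching it to IDE, until the virtio
drivers are installed in the guest, get it to boot? For example (an
untested sketch of the changed disk element):

<disk type='file' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source file='/home/tstrike39/Virtuals/localhost-sda'/>
  <target dev='hda' bus='ide'/>
</disk>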
Any help would be appreciated!
[libvirt-users] CfP 12th Virtualization in High-Performance Cloud Computing Workshop (VHPC '17)
by VHPC 17
====================================================================
CALL FOR PAPERS
12th Workshop on Virtualization in High-Performance Cloud Computing (VHPC '17)
held in conjunction with the International Supercomputing Conference - High Performance, June 18-22, 2017, Frankfurt, Germany.
(Springer LNCS Proceedings)
====================================================================
Date: June 22, 2017
Workshop URL: http://vhpc.org
Abstract Submission Deadline: February 28, 2017
Paper Submission Deadline: April 25, 2017 (Springer LNCS)
Abstract/Paper Submission Link: https://edas.info/newPaper.php?c=23179
Call for Papers
Virtualization technologies constitute a key enabling factor for flexible resource management in modern data centers, and particularly in cloud environments. Cloud providers need to manage complex infrastructures in a seamless fashion to support the highly dynamic and heterogeneous workloads and hosted applications customers deploy. Similarly, HPC environments have been increasingly adopting techniques that enable flexible management of vast computing and networking resources, close to marginal provisioning cost, which is unprecedented in the history of scientific and commercial computing.

Various virtualization technologies contribute to the overall picture in different ways: machine virtualization, with its capability to enable consolidation of multiple underutilized servers with heterogeneous software and operating systems (OSes), and its capability to live-migrate a fully operating virtual machine (VM) with a very short downtime, enables novel and dynamic ways to manage physical servers; OS-level virtualization (i.e., containerization), with its capability to isolate multiple user-space environments and to allow for their coexistence within the same OS kernel, promises to provide many of the advantages of machine virtualization with high levels of responsiveness and performance; I/O virtualization allows physical NICs/HBAs to take traffic from multiple VMs or containers; network virtualization, with its capability to create logical network overlays that are independent of the underlying physical topology and IP addressing, provides the fundamental ground on top of which evolved network services can be realized with an unprecedented level of dynamicity and flexibility; the increasingly adopted paradigm of Software-Defined Networking (SDN) promises to extend this flexibility to the control and data planes of network paths.
Publication
Accepted papers will be published in a Springer LNCS proceedings volume.
Topics of Interest
The VHPC program committee solicits original, high-quality submissions related to virtualization across the entire software stack, with a special focus on the intersection of HPC and the cloud.
Major Topics
- Virtualization in supercomputing environments, HPC clusters, HPC in the cloud and grids
- OS-level virtualization and containers (Docker, rkt, Singularity, Shifter, i.a.)
- Lightweight/specialized operating systems, unikernels
- Optimizations of virtual machine monitor platforms and hypervisors
- Hypervisor support for heterogeneous resources (GPUs, co-processors, FPGAs, etc.)
- Virtualization support for emerging memory technologies
- Virtualization in enterprise HPC and microvisors
- Software-defined networks and network virtualization
- Management and deployment of virtualized environments and orchestration (Kubernetes i.a.)
- Workflow-pipeline container-based composability
- Performance measurement, modelling and monitoring of virtualized/cloud workloads
- Virtualization in data-intensive computing and Big Data processing - HPC convergence
- Adaptation of HPC technologies in the cloud (high performance networks, RDMA, etc.)
- ARM-based hypervisors, ARM virtualization extensions
- I/O virtualization and cloud-based storage systems
- GPU, FPGA and many-core accelerator virtualization
- Job scheduling/control/policy and container placement in virtualized environments
- Cloud reliability, fault-tolerance and high-availability
- QoS and SLA in virtualized environments
- IaaS platforms, cloud frameworks and APIs
- Large-scale virtualization in domains such as finance and government
- Energy-efficient and power-aware virtualization
- Container security
- Configuration management tools for containers (including CFEngine, Puppet, i.a.)
- Emerging topics including multi-kernel approaches and NUMA in hypervisors
The Workshop on Virtualization in High-Performance Cloud Computing (VHPC) aims to bring together researchers and industrial practitioners facing the challenges posed by virtualization, in order to foster discussion, collaboration, and mutual exchange of knowledge and experience, enabling research to ultimately provide novel solutions for the virtualized computing systems of tomorrow.

The workshop will be one day in length, composed of 20-minute paper presentations, each followed by a 10-minute discussion section, plus lightning talks that are strictly limited to 5 minutes. Presentations may be accompanied by interactive demonstrations.
Important Dates
February 28, 2017 - Abstract Submission Deadline
April 25, 2017 - Paper submission deadline
May 30, 2017 - Acceptance notification
June 22, 2017 - Workshop Day
June 25, 2017 - Camera-ready version due
Chair
Michael Alexander (chair), scaledinfra technologies, Austria
Anastassios Nanos (co-chair), NTUA, Greece
Balazs Gerofi (co-chair), RIKEN Advanced Institute for Computational Science, Japan
Program committee
Stergios Anastasiadis, University of Ioannina, Greece
Jakob Blomer, CERN, Europe
Ron Brightwell, Sandia National Laboratories, USA
Eduardo César, Universidad Autonoma de Barcelona, Spain
Julian Chesterfield, OnApp, UK
Stephen Crago, USC ISI, USA
Christoffer Dall, Columbia University, USA
Patrick Dreher, MIT, USA
Robert Futrick, Cycle Computing, USA
Maria Girone, CERN, Europe
Kyle Hale, Northwestern University, USA
Romeo Kinzler, IBM, Switzerland
Brian Kocoloski, University of Pittsburgh, USA
Nectarios Koziris, National Technical University of Athens, Greece
John Lange, University of Pittsburgh, USA
Che-Rung Lee, National Tsing Hua University, Taiwan
Giuseppe Lettieri, University of Pisa, Italy
Qing Liu, Oak Ridge National Laboratory, USA
Nikos Parlavantzas, IRISA, France
Kevin Pedretti, Sandia National Laboratories, USA
Amer Qouneh, University of Florida, USA
Carlos Reaño, Technical University of Valencia, Spain
Thomas Ryd, CFEngine, Norway
Josh Simons, VMWare, USA
Borja Sotomayor, University of Chicago, USA
Craig Stewart, Indiana University, USA
Anata Tiwari, San Diego Supercomputer Center, USA
Kurt Tutschku, Blekinge Institute of Technology, Sweden
Yasuhiro Watashiba, Osaka University, Japan
Nicholas Wright, Lawrence Berkeley National Laboratory, USA
Chao-Tung Yang, Tunghai University, Taiwan
Paper Submission and Publication
Papers submitted to the workshop will be reviewed by at least two members of the program committee and external reviewers. Submissions should include an abstract, keywords, and the e-mail address of the corresponding author, and must not exceed 10 pages, including tables and figures, at a main font size no smaller than 11 point. Submission of a paper should be regarded as a commitment that, should the paper be accepted, at least one of the authors will register and attend the conference to present the work. Accepted papers will be published in a Springer LNCS volume.
The format must follow the Springer LNCS style. Initial submissions are in PDF; authors of accepted papers will be requested to provide source files.
Format Guidelines:
ftp://ftp.springer.de/pub/tex/latex/llncs/latex2e/llncs2e.zip
Abstract, Paper Submission Link:
https://edas.info/newPaper.php?c=23179
Lightning Talks
Lightning talks are a non-paper track, synoptical in nature, and strictly limited to 5 minutes. They can be used to gain early feedback on ongoing research, for demonstrations, to present research results, early research ideas, or perspectives and positions of interest to the community. Submit abstracts via the main submission link.
General Information
The workshop is one day in length and will be held in conjunction with the International Supercomputing Conference - High Performance (ISC) 2017, June 18-22, Frankfurt, Germany.
[libvirt-users] Libvirt 3.0 machine start fails on LVM
by Jeroen Hoekx
Hello,
Since updating to libvirt 3.0 I am no longer able to create a virtual
machine that is backed by LVM storage on my two Arch machines.
This is the error in virt-manager:
Unable to complete install: 'An error occurred, but the cause is unknown'
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 88, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/create.py", line 2288, in _do_async_install
    guest.start_install(meter=meter)
  File "/usr/share/virt-manager/virtinst/guest.py", line 461, in start_install
    doboot, transient)
  File "/usr/share/virt-manager/virtinst/guest.py", line 396, in _create_guest
    self.domain = self.conn.createXML(install_xml or final_xml, 0)
  File "/usr/lib/python2.7/site-packages/libvirt.py", line 3773, in createXML
    if ret is None:raise libvirtError('virDomainCreateXML() failed', conn=self)
libvirtError: An error occurred, but the cause is unknown
I originally encountered the problem when running my Ansible modules
for libvirt in https://github.com/ansible-provisioning/ansible-provisioning,
so the issue is not virt-manager specific.
Steps to reproduce in virt-manager (1.4.0-3):
- create a new VM
- choose import existing disk image
- create a new logical volume when browsing to find the existing image path, or select an LV that was manually created before
- click forward
- click forward
- click finish
When using qcow2 storage, both virt-manager and my modules work fine.
This was also working on previous versions of libvirt. I do not see
any changes in compilation flags in the Arch packages.
Is there anything I can do to help diagnose this issue?
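I can enable debug logging if that would help; if I read the docs
correctly, something like this in /etc/libvirt/libvirtd.conf (followed
by a libvirtd restart) should capture the failing createXML call
(untested sketch):

log_level = 1
log_outputs = "1:file:/var/log/libvirt/libvirtd.log"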
Greetings,
Jeroen
[libvirt-users] libvirt does not show same CPU Model as /proc/cpuinfo for CPU Model info.
by akhilesh rawat
Hi,
I created a new thread.
Environment:
Bare-metal server + CentOS with qemu/KVM + libvirt for virtualization.
Guest instantiated with virt-install with a forced CPU model, like below:
virt-install --virt-type kvm --name compute-0 --cpu
Haswell,+fma,+movbe,+fsgsbase,+bmi1,+hle,+avx2,+smep,+bmi2,+erms,+invpcid,+rtm
--ram=61440 --vcpus=20 --os-type=linux --os-variant=generic
After guest installation, /proc/cpuinfo shows the model name as Haswell.
However, virsh capabilities shows the CPU model as "SandyBridge".
1. Could I get an explanation of why there is an inconsistency?
2. Is this expected behaviour?
3. Why does libvirt not show the same CPU model as /proc/cpuinfo?
My aim: nested virtualization, where the nested guest strictly needs the
Haswell CPU model.
4. Is there an alternative way to achieve my aim?
Thanks a lot for the support!
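For reference, this is how I am comparing the two views (I run both
commands inside the guest):

# grep -m1 'model name' /proc/cpuinfo
# virsh capabilities | grep '<model>'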
Guest CPU config:
processor : 19
vendor_id : GenuineIntel
cpu family : 6
model : 60
model name : Intel Core Processor (Haswell)
stepping : 1
microcode : 0x1
cpu MHz : 1995.144
cache size : 4096 KB
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc
rep_good nopl eagerfpu pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic
popcnt tsc_deadline_timer aes xsave avx hypervisor lahf_lm xsaveopt
bogomips : 4105.33
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
<capabilities>
  <host>
    <uuid>95c4e625-3383-4ec1-9ee4-c47a87d93b97</uuid>
    <cpu>
      <arch>x86_64</arch>
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <topology sockets='20' cores='1' threads='1'/>
      <feature name='hypervisor'/>
      <feature name='osxsave'/>
      <feature name='pcid'/>
      <pages unit='KiB' size='4'/>
      <pages unit='KiB' size='2048'/>
Host CPU config:
processor : 23
vendor_id : GenuineIntel
cpu family : 6
model : 45
model name : Intel(R) Xeon(R) CPU E5-2620 0 @ 2.00GHz
stepping : 7
microcode : 0x710
cpu MHz : 1200.000
cache size : 15360 KB
physical id : 1
siblings : 12
core id : 5
cpu cores : 6
apicid : 43
initial apicid : 43
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl
xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor
ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic
popcnt tsc_deadline_timer aes xsave avx lahf_lm ida arat epb pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid xsaveopt
bogomips : 4004.01
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
Br Aki
[libvirt-users] LibVirt query CPU Model support and restore operation
by akhilesh rawat
Hello,
Things were working well with the KVM management tools that use libvirt
(virsh/virt-manager).
But then I got annoyed when the management tool did not allow me to
change the CPU model while creating a new virtual machine.
Error:
root@kvm-server qemu]# virt-install --virt-type kvm --name compute-2 --cpu
Haswell-noTSX --ram=61440 --vcpus=20 --os-type=linux --os-variant=generic
--disk
compute2-disk0.qcow2,device=disk,bus=ide,size=300,sparse=true,format=qcow2
--pxe --network bridge=virbr0,model=e1000 --network
bridge=virbr0,model=e1000 --network bridge=virbr0,model=virtio --network
bridge=virbr0,model=virtio --graphics vnc,port=5906 --noautoconsole
Starting install...
ERROR unsupported configuration: guest and host CPU are not compatible:
Host CPU does not provide required features: invpcid, erms, bmi2, smep,
avx2, bmi1, fsgsbase, movbe, fma
Domain installation does not appear to have been successful.
I could not resolve this error.
Finally I came across a post saying that libvirt cross-verifies the
flags listed under a CPU model in /usr/share/libvirt/cpu_map.xml against
/proc/cpuinfo.
And yes, the flags complained about were not present in the host's
/proc/cpuinfo.
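For example, a quick check for any of the complained-about flags on this
host comes back empty, matching the error (a sketch):

# egrep -o 'invpcid|erms|bmi2|smep|avx2|bmi1|fsgsbase|movbe|fma' /proc/cpuinfo | sort -u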
Question 1: Is it really necessary for libvirt to do this check? KVM
itself seems to allow a Haswell model when invoked natively, so why does
libvirt enforce it?
As I was not sure where the problem lay, and I was using libvirt 2.0.0,
I upgraded to 2.5.0 by compiling from source. But then I faced issues
using virsh/virt-manager, as they seem to be incompatible with the
upgraded libvirt. Now my system is a little messed up around libvirt; I
tried reinstalling and rebooting, but quite a few things are still not
working.
Question 2: How can I restore the original libvirt 2.0.0?
br aki
[libvirt-users] Unrestricted guest trunk network interface
by Dennis Jacobfeuerborn
Hi,
is there a way to create a network interface for a guest that just
forwards packets tagged with a VLAN ID?
Currently I have a guest with 10+ interfaces configured, and for each
one I add, I have to create a new VLAN interface and a bridge on the
host, which makes things rather cumbersome.
What I'd like to have is just one interface eth0 in the guest, then be
able to create eth0.2, eth0.3 in the guest, and have the host just pass
them through without interference.
I found some guides on the internet, but these always come with some
limitation, such as: if you have a VLAN interface on the host, e.g.
eth0.8, then the guest no longer receives packets in VLAN 8, as they
apparently get consumed by the host.
Is there any solution for this?
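From the libvirt network XML documentation, the <vlan trunk='yes'>
element looks relevant, but it appears to require an Open vSwitch
connection rather than a plain Linux bridge. An untested sketch (ovsbr0
being a hypothetical OVS bridge):

<interface type='bridge'>
  <source bridge='ovsbr0'/>
  <virtualport type='openvswitch'/>
  <vlan trunk='yes'>
    <tag id='2'/>
    <tag id='3'/>
  </vlan>
  <model type='virtio'/>
</interface>

Would that be the way to go, or is there something for regular bridges?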
Regards,
Dennis
[libvirt-users] KVM live migration issues for Windows guests
by Fabrizio Soppelsa
Hi,
We're repeatedly facing a live migration issue with Windows guests; it
would be great if someone could send their thoughts/suggestions/experience
on how to further troubleshoot this.
When we live-migrate a Windows instance, it gets migrated (the guest is
running on the destination host), but eventually it hangs internally.
From the Windows logs, the operating system tried to shut down during
the live migration.
There is no info at all, even at debug level, in the libvirt logs.
I'm thinking of going with perf kvm or maybe a kernel upgrade, but could
I be missing something obvious in Windows for this scenario?
QEMU 2.0.0
Libvirtd 1.2.2
Host kernel version : 3.13.0-40-generic
Windows version: Windows 2012 R2
Has RBD volumes attached (driver name='qemu' type='raw'
cache='writeback'/), but the issue also reproduces without them.
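On the next attempt I can capture the migration statistics from the
source host while it runs, in case the stall shows up at the QEMU level
(a sketch; the domain name is a placeholder):

# watch -n1 virsh domjobinfo win2012r2
# virsh qemu-monitor-command --hmp win2012r2 'info migrate'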
Thanks!