[libvirt-users] What may cause a VM "in shutdown" state
by ahongloumeng@163.com
Hello,
We have run into a problem with a VM's state.
System environment:
linux version: centos 6.8
libvirt version: 3.4.0
qemu version: 2.4.1
When we shut down one of the VMs, it gets stuck in the "in shutdown" state even though the qemu process is gone.
The VM log /var/log/libvirt/qemu/vm.log shows:
"qemu: terminating on signal 15 from pid 54601
2017-07-19 04:41:09.960+0000: shutting down, reason=crashed"
What could cause a VM to remain in the "in shutdown" state?
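For reference, this is how we check the stuck domain (the domain name is a placeholder):
  virsh list --all               # the domain is still listed as "in shutdown"
  virsh domstate vm1 --reason    # shows the state and libvirt's reason for it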
Thanks!
ahongloumeng(a)163.com
7 years, 5 months
[libvirt-users] Way to detect virtual machine cpu features
by Lei Zhang
Hello everyone
I would like to know how I can use libvirt to detect which CPU features a virtual
machine will see.
I guess I could do it in the following way:
1. if the cpu mode is 'custom', use 'virsh cpu-baseline --features' on the cpu
model to get the model's features.
2. if the cpu mode is 'host-passthrough' or 'host-model', run 'virsh
capabilities' to list the CPU features of the physical host; they are identical to
the features of the virtual machine.
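In commands, a rough sketch of the two cases (the XML file name is a placeholder, and xmllint is assumed to be available):
  # case 1: compute the feature list for a custom CPU model
  virsh cpu-baseline --features cpu-model.xml
  # case 2: inspect the host CPU that the guest would inherit
  virsh capabilities | xmllint --xpath '//host/cpu' -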
Is this the right way to do things? I look forward to your valuable comments.
Best regards,
Lei
7 years, 5 months
Re: [libvirt-users] [Qemu-devel] [PATCH v2] hmp: allow cpu index for "info lapic"
by Igor Mammedov
On Wed, 19 Jul 2017 16:48:23 +0800 (CST)
<wang.yi59(a)zte.com.cn> wrote:
> > * wang.yi59(a)zte.com.cn (wang.yi59(a)zte.com.cn) wrote:
> >> Hi Eduardo,
> >>
> >> Thank you for your reply!
> >>
> >> > On Mon, Jul 17, 2017 at 09:49:37PM -0400, Yi Wang wrote:
> >> >> Add [vcpu] index support for hmp command "info lapic", which is
> >> >> useful when debugging ipi and so on. Current behavior is not
> >> >> changed when the parameter isn't specified.
> >> >>
> >> >> Signed-off-by: Yi Wang <wang.yi59(a)zte.com.cn>
> >> >> Signed-off-by: Yun Liu <liu.yunh(a)zte.com.cn>
> >> >
> >> > We have 8 monitor commands (see below) that use the CPU set by
> >> > the "cpu" command (mon_get_cpu()) as input. Why is "info lapic"
> >> > special?
> >>
> >> When we were debugging an IPI problem, we wanted to verify the lapic
> >> info on each vCPU, but we found that we could only get vCPU 0's lapic
> >> through "info lapic", so we supposed this patch could help those
> >> who have the same problem as us.
> >
> > I think Eduardo's point is that you can already do:
> >   cpu 0
> >   info lapic
> >   cpu 1
> >   info lapic
>
> Yes, I get it, thank you.
>
> The reason for the problem we hit is that we use "virsh qemu-monitor-command",
> so the 'cpu' command didn't work.
You can try using the QMP interface directly, which supports specifying a CPU for monitor commands.
qemu supports:
-- Command: human-monitor-command
   Execute a command on the human monitor and return the output.
   Arguments:
   'command-line: string'
       the command to execute in the human monitor
   'cpu-index: int' (optional)
       The CPU to use for commands that require an implicit CPU
maybe "virsh qemu-monitor-command" can also do it, CCing libvirt list
>
> ---
> Best wishes
> Yi Wang
7 years, 5 months
[libvirt-users] Hot-migration of OVS Vlan configuration
by Antoine Millet
Hi list,
I'm using OVS as the backend for guest networking on my hypervisors. The VLAN
configuration for each interface is specified in the guest XML
definition.
When I need to change the VLAN configuration of a running guest, I first
edit the inactive XML to keep the changes for future boots, then I use
ovs-vsctl to apply the changes to the existing OVS port, roughly as sketched below.
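(The domain, port name and VLAN tag here are placeholders:)
  virsh edit guest01                 # persist the new VLAN tag in the inactive XML
  ovs-vsctl set port vnet0 tag=42    # apply it to the live OVS port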
The problem happens when the guest is migrated to another hypervisor.
The "active" XML is used to instantiate the VM on the destination, and
this XML doesn't incorporate the changes made to the inactive one. All
the changes made on the source are lost on the running guest at the
destination.
I'm looking for a solution to this problem:
- AFAIK, I cannot edit the "active" XML myself other than through the
provided libvirt APIs (like attach-device), and no such API exists for
VLAN modification on an OVS interface
- I cannot use a migration hook to pass the changes made to the inactive
XML into the active one, because the inactive XML is not reachable from
the hook
- I found a patch that allows transporting some information attached to an
OVS port during migration[1]. Unfortunately, this patch doesn't
transport the VLAN configuration
- Modifying the active XML during the migration itself is not desirable,
because it would complicate the migration process by requiring an
external tool
Did I miss something that would let me migrate the VLAN configuration? How
could this be implemented properly in libvirt?
Thanks!
Antoine
[1] https://www.redhat.com/archives/libvir-list/2012-September/msg01092.html
7 years, 5 months
Re: [libvirt-users] [Fwd: UEFI NVRAM variables]
by Laszlo Ersek
Hi,
thank you, Andrea, for the forward.
On 07/13/17 10:19, Andrea Bolognani wrote:
> -------- Forwarded Message --------
> From: Thomas Meyer <thomas(a)m3y3r.de>
> To: libvirt-users(a)redhat.com
> Subject: [libvirt-users] UEFI NVRAM variables
> Date: Wed, 12 Jul 2017 07:49:43 +0200
>
>> Hi,
>>
>> how do I set the BootOrder variable in an NVRAM file for UEFI boot?
This is one of the most frequently asked questions about OVMF.
Short version:
- alternative 1: use neither <boot dev="..."/> nor <boot index="..."/>
in your domain XML, and manage the BootOrder and Boot#### variables
entirely within the guest (using the OVMF Setup TUI and/or the
efibootmgr utility; see the sketch after this list).
- alternative 2: use <boot index="..."/> in your domain XML, and it will
mostly do what you want.
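For alternative 1, a minimal in-guest sketch using efibootmgr (the entry numbers are purely illustrative):
  efibootmgr                  # list the current Boot#### entries and BootOrder
  efibootmgr -o 0003,0001     # put Boot0003 first in BootOrder, then Boot0001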
Mid-size version:
* Never use "-boot order=x,y,z" with OVMF, on the QEMU command line,
because the guest-visible effects of this QEMU option are fully useless
to UEFI guest firmware. For more details, please refer to
<https://bugzilla.redhat.com/show_bug.cgi?id=1323085#c10>.
Translating the above to libvirt, never use <boot dev='...'/> elements
in your domain XML with UEFI guest firmware. (They are also discouraged
by the libvirt documentation, for their non-unique meaning.)
* The "-device xxx,bootindex=N" properties on the QEMU command line are
usable and useful to UEFI guest firmware. (This is what libvirt's <boot
order='N'/> per-device elements map to, which are also what the libvirt
docs recommend.)
However, the expressive power of these properties is still lesser than
the expressive power of the UEFI boot options, therefore the best that
OVMF can do here is a kind of auto-generation of all possible boot
options, and filtering and reordering them based on translation and
prefix-matching.
For that, the long version: please refer to the OVMF whitepaper from a
few years back, where I described this algorithm in excruciating detail:
http://www.linux-kvm.org/downloads/lersek/ovmf-whitepaper-c770f8c.txt
Search it for "Platform-specific boot policy".
>> Is there a way to configure a domain with the bios qemu option instead of pflash?
If you want to boot a guest with OVMF as firmware, but map the firmware
as ROM, not as pflash, you can do it. For that, you have to use the
unified OVMF image (called "OVMF.fd" most commonly), not the split image
(called "OVMF_CODE.fd" and other variants). The <loader> element should
look like:
<loader>/usr/share/OVMF/OVMF.fd</loader>
and the <nvram> element should not be present.
This will map to "-bios OVMF.fd" on the QEMU command line.
Be *strongly* warned though that by this, you are throwing away
persistent non-volatile variables, and that the fake variable support
that OVMF falls back to does not -- cannot -- conform to the UEFI spec.
You will encounter obscure mis-behavior with variables. (One of the
obvious glitches will be that Secure Boot settings will not survive a
reboot.)
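For contrast, the usual split-image pflash setup, which does keep the variables persistent, looks roughly like this in the domain XML (paths vary by distribution):
  <os>
    <loader readonly='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.fd</loader>
    <nvram template='/usr/share/OVMF/OVMF_VARS.fd'>/var/lib/libvirt/qemu/nvram/guest_VARS.fd</nvram>
  </os>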
>>
>> Is there a tool available to manipulate the UEFI variables from the outside?
No, there isn't, and as far as it's up to me, there won't be. (This is
another of the most frequently asked questions.)
The reason for this is that the representation of the variable store on
the host filesystem (in /var/lib/libvirt/qemu/nvram/<guest>_VARS.fd) is
the functional composition of three layers in the edk2 driver stack:
- platform-specific pflash driver / layout
- generic fault tolerant write driver (implementing a kind of journaled
system, where you can "lose power" in the middle of a write operation,
and your varstore won't be corrupted)
- generic variable driver
None of these are standardized in UEFI, and even restricting ourselves
to edk2 (the reference implementation of PI/UEFI), only the last two
layers are documented in Intel whitepapers. Regarding examples for the
first layer, OVMF itself can be built with two distinct (and
incompatible) variable store layouts, and the ArmVirtQemu varstore
layout is again different.
In other words, a host-side utility to parse and modify these files
would have to reimplement the functionality of the above three layers,
deal with, and clean up, interrupted writes (see the fault tolerant
writer level), and generally keep chasing any pertinent changes in the
edk2 codebase. (Upstream edk2 does not guarantee backwards compatibility
with existent variable stores, and it does not provide conversion tools
either.) If someone feels up to writing and continuously maintaining
such a host-side utility, I won't try to prevent them. But, I won't be
that person.
Another approach would be to replace the above-mentioned variable driver
stack in OVMF, with a custom "paravirt" variable driver implementation.
This would be a large project, and even if I were sympathetic to it
(which I'm not), I'm pretty sure it wouldn't be welcome to upstream edk2
(because competing, parallel implementations for the same thing are a
big maintenance burden). I vaguely suspect that a giga-company whose
name starts with G and refers to a very large number has done this,
targeting their own (unreleased / proprietary) VMM, but to my knowledge
their variable driver implementation is similarly unreleased / proprietary.
Either way, massaging guest-produced data from the host side, be the
data "UEFI variable content" or "disk image content", is also a security
question. So the only really feasible approach here would be a
libguestfs-like tool that
- booted a guest on top of the variable store,
- implemented a kind of "firmware guest agent" that manipulated the
variables "from the inside",
- and used custom commands over virtio-serial, between host and guest.
As I said, a large project.
Thanks,
Laszlo
7 years, 5 months
[libvirt-users] VM freezes while insmod virtio_net.ko
by jason
Hi all,
I have run into a tricky problem. I am using OVS+DPDK on the host side
to build the virtual network, so on the VM side the virtio_net.ko
module is used to drive the NIC. But the VM always freezes while
insmod'ing virtio_net.ko. (I found that virtio_net.ko is loaded from
initramfs.img by default, so to confirm this I had to delete all virtio
modules from initramfs.img and insmod them one by one after the system
booted; see the sketch below.)
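Roughly, the isolation steps inside the guest were (module names are from the 3.10 guest kernel; the exact path may differ):
  modprobe virtio
  modprobe virtio_ring
  modprobe virtio_pci
  insmod /lib/modules/$(uname -r)/kernel/drivers/net/virtio_net.ko   # guest freezes here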
After such a freeze, virsh reset cannot recover the VM, failing with
"cannot acquire state change lock (held by remoteDispatchDomainReset)",
so I have to use virsh destroy instead.
On the ovs-vswitchd side, the last log lines I can see are:
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:0 file:59
VHOST_CONFIG: virtio is not ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:0 file:60
VHOST_CONFIG: read message VHOST_USER_SET_VRING_NUM
VHOST_CONFIG: read message VHOST_USER_SET_VRING_BASE
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ADDR
VHOST_CONFIG: read message VHOST_USER_SET_VRING_KICK
VHOST_CONFIG: vring kick idx:1 file:46
VHOST_CONFIG: virtio is not ready for processing.
VHOST_CONFIG: read message VHOST_USER_SET_VRING_CALL
VHOST_CONFIG: vring call idx:1 file:61
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 0
VHOST_CONFIG: read message VHOST_USER_SET_VRING_ENABLE
VHOST_CONFIG: set queue enable: 1 to qp idx: 1
VHOST_CONFIG: read message VHOST_USER_SET_FEATURES
The software versions I am using are:
guest kernel: 3.10.0-514.26.2.el7.x86_64
host kernel: 3.10.0-514.26.2.el7.x86_64
host libvirt: 2.0.0-10.el7_3.9.x86_64
host qemu: 2.6.0-28.el7.10.1
host openvswitch: git 2.7.0
host DPDK: git 17.02.0
Please have a look at this issue.
--
B.R.,
Jason
7 years, 5 months
Re: [libvirt-users] Is there still no easier way to shrink a VM image?
by Leroy Tennison
Thanks for letting me know I'm not making myself clear; let me try again with a few more specifics. These are Windows VMs with three disk images for C:, D: and L:. To simplify, I'll focus on the image for C:, which had grown to 77G physical size on the hypervisor's SSD. (I just double-checked that with 'ls -al', because qemu-img below says 70G; this image had two snapshots at one time, which may be the reason for the discrepancy.) qemu-img info reports:
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 70G
cluster_size: 65536
Format specific information:
compat: 0.10
I used Windows Server 2012r2 "Optimize" (defrag) and then reduced the C: partition to about 67G in Disk Administrator, leaving the remaining 33G unallocated. Afterward, following a technique I found on the web, I used Sysinternals SDelete to zero the free space and then ran 'qemu-img convert -O qcow2 <original.qcow2> <new.qcow2>' to produce a physical image size of 29G. qemu-img info reports on "new.qcow2":
file format: qcow2
virtual size: 100G (107374182400 bytes)
disk size: 29G
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
The issue is that the virtual size is still 100G. I don't have the physical disk space to allow approximately 12 images (all configured for a virtual size of 100G) to grow to that size (6 VMs with C: and D: on a 1TB SSD; L: is on an HDD, which isn't an issue). I need to change that virtual size to 67 or 68G for this image. On the D: images I can drop to approximately 40G, for an aggregate total of about 650G for all six VMs - well within the 875G physical size limit that the SSD provides after overhead.
That's a long background to why I need to change the virtual disk size. Again, any alternatives would be much appreciated.
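For what it's worth, the virt-resize route I'm considering would look roughly like this (untested; /dev/sda2 is a guess at the C: partition):
  qemu-img create -f qcow2 new.qcow2 68G                   # smaller target image
  virt-resize --shrink /dev/sda2 original.qcow2 new.qcow2  # copy, shrinking C: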
-----Original Message-----
From: Martin Kletzander <mkletzan(a)redhat.com>
To: Leroy Tennison <leroy.tennison(a)verizon.net>
Cc: libvirt-users <libvirt-users(a)redhat.com>
Sent: Tue, Jul 11, 2017 2:51 am
Subject: Re: [libvirt-users] Is there still no easier way to shrink a VM image?
On Tue, Jul 11, 2017 at 12:34:31AM -0500, Leroy Tennison wrote:
>I have numerous qcow2 images which need to be reduced in size and have
>their maximum size (virtual size) reduced. Physical disk space became
>so low that VMs "auto-paused" themselves, I moved enough images to solve
>the immediate problem but need to rectify the underlying issue. It
>seems that qcow[2] files are grown in size such that the data inside of
>them takes about 50-60% of the space (does anyone know the actual
>algorithm or how to control it?). Given the total physical disk space
>on the hypervisors, I need something more restrictive.
>
I don't get it. You have virtual size greater than the free space on
the physical storage and instead of the VM finding out you want the
guest OS to see it has no space at all?
>Our hypervisors are a mix of Ubuntu 14 or 16 LTS (qemu-img 2.2 or 2.5).
>After doing all the preparation (defragment, reduce OS partition size)
>"qemu-img resize" reports that shrinking isn't supported yet. My web
>research indicates that, to accomplish this, I have to:
>
> convert to raw
>
> shrink the image
>
> convert back to qcow[2]
>
> increase the image size to provide for some growth
>
>I'm hoping I've missed something in my research and that someone knows
>an easier way. I don't feel constrained to qemu-img but this is a
>production environment precluding consideration of experimental
>software. Virt-resize, guestfish or any other reasonable option is fine
>with me. Solutions or ideas? Thanks.
>
>_______________________________________________
>libvirt-users mailing list
>libvirt-users(a)redhat.com
>https://www.redhat.com/mailman/listinfo/libvirt-users
7 years, 5 months
[libvirt-users] Is there still no easier way to shrink a VM image?
by Leroy Tennison
I have numerous qcow2 images which need to be reduced in size and have
their maximum (virtual) size reduced. Physical disk space became
so low that VMs "auto-paused" themselves; I moved enough images to solve
the immediate problem, but I need to rectify the underlying issue. It
seems that qcow2 files are grown in such a way that the data inside
them takes only about 50-60% of the space (does anyone know the actual
algorithm, or how to control it?). Given the total physical disk space
on the hypervisors, I need something more restrictive.
Our hypervisors are a mix of Ubuntu 14 or 16 LTS (qemu-img 2.2 or 2.5).
After doing all the preparation (defragment, reduce the OS partition size),
"qemu-img resize" reports that shrinking isn't supported yet. My web
research indicates that, to accomplish this, I have to:
  - convert to raw
  - shrink the image
  - convert back to qcow[2]
  - increase the image size to provide for some growth
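In commands, that round trip would presumably look something like this (filenames and the 68G target are placeholders; newer qemu-img versions may also require an explicit --shrink flag):
  qemu-img convert -O raw original.qcow2 temp.raw
  qemu-img resize temp.raw 68G                 # shrink the raw image
  qemu-img convert -O qcow2 temp.raw new.qcow2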
I'm hoping I've missed something in my research and that someone knows
an easier way. I don't feel constrained to qemu-img, but this is a
production environment, which precludes consideration of experimental
software. Virt-resize, guestfish or any other reasonable option is fine
with me. Solutions or ideas? Thanks.
7 years, 5 months
[libvirt-users] How to make gic_version=3 the default for qemu on arm64
by Vishnu Pajjuri
Hi,
I'm running OpenStack installed using devstack, but it is not launching
VMs.
From the command line, the VM runs fine with the gic_version=3 option, but
OpenStack Glance doesn't offer a way to specify the GIC version.
On my ARM64 board GICv2 is not supported, so I want to make GICv3 the
default that gets passed to qemu.
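For reference, my understanding is that the per-domain setting looks like this in the guest XML (this is what I would like to become the default):
  <features>
    <gic version='3'/>
  </features>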
Kindly suggest a specific version of libvirt, or a patch, so that libvirt
passes GICv3 as the default.
Thanks in Advance
-Vishnu.
7 years, 5 months