all domains paused, maybe logging might be helpful
by Lennart Fricke
Hello,
I just hit a situation where all domains on a host were paused due to
missing space. It took me some time to figure out that there was no
space left for the images on the host. I learned that 'virsh domstate
--reason $GUEST' and 'virsh domblkerror $GUEST' could have helped me,
but the logs are silent about the problem.
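For anyone who hits the same thing, the checks look roughly like this
(the guest name is a placeholder and the exact output wording may vary):
# virsh domstate --reason $GUEST
paused (I/O error)
# virsh domblkerror $GUEST
vda: no space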
Would it be possible to report these problems in the logs, or is there
documentation beyond the reference manual on how to troubleshoot such
issues?
Thank you
Lennart
7 months, 1 week
Info regarding AMX support and libvirt implications
by Gianluca Cecchi
Hello,
I'm trying to use AMX in my virtual machines.
More info on AMX:
https://www.intel.com/content/www/us/en/products/docs/accelerator-engines...
My test system currently runs SLES 15 SP5.
I'm also verifying in parallel with SUSE (especially regarding the
features backported into their 5.14-based kernel), but in the meantime I
would like to understand the implications, if any, of libvirt in the
certification loop I have to analyse.
From what I see, upstream has:
. support in the KVM kernel module since 5.17
. support for the SapphireRapids CPU model, the first one offering AMX
as an ISA extension, in QEMU since 7.0
Is there any dependency to check on the libvirt side too?
When I run
virsh cpu-models x86_64
Is the libvirt software stack querying QEMU directly? Or the KVM kernel
module? Or some internal "database" file?
From the man page it is not clear to me what "known" means:
"
cpu-models
Syntax:
cpu-models arch
Print the list of CPU models known by libvirt for the specified
architecture. Whether a specific hypervisor is able to create a domain
which uses any of the printed CPU models is a separate question which can
be answered by looking at the domain capabilities XML returned by
domcapabilities command. Moreover, for some architectures libvirt does not
know any CPU models and the usable CPU models are only limited by the
hypervisor. This command will print that all CPU models are accepted for
these architectures and the actual list of supported CPU models can be
checked in the domain capabilities XML.
"
In SLES 15 SP5 with:
qemu-7.1.0-150500.49.9.2.x86_64
kernel-default-5.14.21-150500.55.49.1.x86_64
libvirtd-*-9.0.0-150500.6.11.1.x86_64
I get
# virsh cpu-models x86_64
...
Cascadelake-Server
Cascadelake-Server-noTSX
Icelake-Client
Icelake-Client-noTSX
Icelake-Server
Icelake-Server-noTSX
Cooperlake
Snowridge
athlon
phenom
Opteron_G1
Opteron_G2
...
# virsh domcapabilities | grep -i sapphirerapid
#
In Fedora 39 with
qemu-8.1.3-4.fc39.x86_64
kernel-6.7.5-200.fc39.x86_64
libvirt-*-9.7.0-2.fc39.x86_64
I get
# virsh cpu-models x86_64
...
Cascadelake-Server
Cascadelake-Server-noTSX
Icelake-Client
Icelake-Client-noTSX
Icelake-Server
Icelake-Server-noTSX
Cooperlake
Snowridge
SapphireRapids
athlon
phenom
Opteron_G1
Opteron_G2
...
# virsh domcapabilities | grep -i sapphirerapids
<model usable='no' vendor='Intel'>SapphireRapids</model>
#
because I'm running on a client system without AMX support
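(For completeness, this is roughly how I check whether a host CPU
exposes AMX at all; on an AMX-capable host it should print the upstream
flag names:)
# grep -o 'amx[^ ]*' /proc/cpuinfo | sort -u
amx_bf16
amx_int8
amx_tile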
Thanks in advance,
Gianluca
7 months, 3 weeks
non-root bridge set-up on Fedora 39 aarch64
by Chuck Lever
Hello-
I'm somewhat new to the libvirt world, and I've encountered a problem
that needs better troubleshooting skills than I have. I've searched
Google/Ecosia and Stack Overflow without finding a solution.
I set up libvirt on an x86_64 system without a problem, but on my
new aarch64 / Fedora 39 system, virsh doesn't seem to want to start
virbr0 when run from my own user account:
cel@boudin:~/kdevops$ virsh net-start default
error: Failed to start network default
error: error creating bridge interface virbr0: Operation not permitted
cel@boudin:~/kdevops$ cat /etc/qemu/bridge.conf
allow virbr0
cel@boudin:~/kdevops$
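One data point that may matter: as far as I understand,
/etc/qemu/bridge.conf is only consulted by qemu-bridge-helper, while
'net-start' goes through the libvirt daemon itself, and an unprivileged
qemu:///session connection cannot create host bridges. Checking which
daemon virsh talks to, and retrying against the system one, would look
like:
cel@boudin:~/kdevops$ virsh uri
cel@boudin:~/kdevops$ virsh -c qemu:///system net-start default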
Where can I look next?
--
Chuck Lever
8 months
Re: restarting libvirtd with sr-iov
by Paul B. Henson
To reply to myself, I see that the sr-iov pool is initialized by
networkCreateInterfacePool in network/bridge_driver.c, and it looks like
ports are allocated by networkAllocatePort. The latter looks for a
device with 0 connections as defined by
netdef->forward.ifs[i].connections, and later bumps that count if the
device is successfully allocated.
So it seems the answer to my question is that libvirt maintains this
state in memory only and does not try to re-create it when restarted. As
such, I don't think there's currently any way to recover from my
situation short of shutting everything down :(.
networkCreateInterfacePool iterates over all the VFs while configuring
the pool. Would there be any way for it to check whether a VF is already
in use while doing so, and initialize connections to 1 so the VF won't
be handed out until the running VM releases it, or at least log a
warning that the VF is in use and *not* add it to the pool?
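As a manual stopgap in the meantime, I suppose one could compare the VF
MACs reported on the PF against the MACs in the running guests' XML (a
sketch; 'ens1f0' and 'guest1' are placeholders):
# ip link show ens1f0 | grep 'vf '
# virsh dumpxml guest1 | grep '<mac '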
Thanks...
8 months
restarting libvirtd with sr-iov
by Paul B. Henson
We're running libvirt under Debian 12, package version 9.0.0-4. Earlier
today I made a configuration change and restarted libvirtd. I've done
this for years and never had a problem; after restarting, it has always
shown all the active storage pools, networks, and virtual machines and
worked fine.
However, I guess this is the first time I've done it on a system with an
SR-IOV network pool. After restarting, I was unable to start any new
virtual machines, as libvirt would try to reallocate VFs that were
already in use by existing running machines, resulting in an error from
QEMU.
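For reference, the pool is defined as a hostdev-mode network along these
lines (a sketch from memory; the PF name is made up):
<network>
  <name>sriov-pool</name>
  <forward mode='hostdev' managed='yes'>
    <pf dev='ens1f0'/>
  </forward>
</network>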
I spent a fair amount of time trying to recover from this, ideally with
some way to make libvirt scan the existing VMs and update the in-use
status of the SR-IOV pool, or even some manual way to tell it which VFs
were in use. Unfortunately, I couldn't find anything and ended up having
to shut down all the VMs and then restart them to fix it.
Where is the SR-IOV pool state stored? Is it just an in-memory data
structure that goes away when libvirtd restarts? If so, doesn't libvirt
inventory the existing VMs at startup and figure out which VFs are in
use?
Is there any way to recover from this situation short of the nuclear
"shut down and restart everything" option?
Thanks much...
8 months
Set up networking so the VM Guest uses LAN DHCP?
by Jeffrey Walton
Hi Everyone,
I'm having trouble understanding what I need to do so that my VM guests
use the DHCP server on my LAN. I've read
<https://wiki.libvirt.org/VirtualNetworking.html#virtual-networking>,
but I don't see this use case covered. There is a section on dns-dhcp,
but it looks like some sort of libvirt-internal setup where guests get
their networking parameters from libvirt rather than from my DHCP
server.
(I am looking for something similar to VirtualBox and Bridged
networking. VBox does what I want when I select a bridged adapter).
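From what I've pieced together so far, I'd guess the guest needs a
bridge-type interface along these lines, assuming a host bridge (here
called br0) that already enslaves the physical LAN NIC, but I'm not sure
that's the intended approach:
<interface type='bridge'>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>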
How do I set up networking so the VM Guest uses LAN DHCP?
Thanks in advance.
8 months
Windows VM shutting down reason=crashed
by Jürgen Echter
Hello,
I have a few Windows Server VMs running, and one of them randomly shuts
itself down. The log file contains the following line:
2024-02-26 06:53:13.286+0000: shutting down, reason=crashed
What would be a good approach to figuring out the reason?
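Would watching the per-domain QEMU log and the libvirt lifecycle events
be the right direction? Something like this, where 'winsrv1' stands in
for the domain name:
# tail -f /var/log/libvirt/qemu/winsrv1.log
# virsh event winsrv1 --event lifecycle --loop --timestamp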
libvirt version 9.8.0
Gentoo Linux with kernel 6.6.13
Thanks for some hints
Juergen
8 months
add nvdimm and set it as a dax device
by Pierre Clouzet
Hello,
I'm trying to add an NVDIMM device to my VM and configure it in devdax
mode. Running QEMU directly, I used:
sudo ndctl disable-namespace namespace0.0
sudo ndctl create-namespace -m devdax
sudo daxctl reconfigure-device -m system-ram all --force
However, when running with virt-manager, I get the following error message when I run
sudo ndctl create-namespace -m devdax
Error: create namespace: region0 align setting is 0x1000000 size 0x1dde0000 is misaligned.
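(If I'm decoding the error right, the region alignment 0x1000000 is 16
MiB, and 0x1dde0000 = 0x1d * 0x1000000 + 0xde0000, so the region size
really isn't a multiple of the alignment.)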
Here is the XML for the NVDIMM device:
<memory model="nvdimm" access="shared">
  <source>
    <path>/mnt/scratch/pclouzet/libvirt/dax0.0</path>
    <alignsize unit="KiB">2048</alignsize>
    <pmem/>
  </source>
  <target>
    <size unit="KiB">488282</size>
    <node>0</node>
    <label>
      <size unit="KiB">128</size>
    </label>
  </target>
  <address type="dimm" slot="0"/>
</memory>
Is there an additional command I missed to set it up as a DAX device?
Thanks,
Pierre
8 months, 2 weeks
qemu arguments
by Pierre Clouzet
Hello,
I've recently started using the libvirt API to manage virtual machines.
The point of my work is to modify the topology of a virtual machine and
test performance (add NUMA nodes, change the number of CPUs per NUMA
node, NVM devices, CXL, etc.).
As far as I understand, when using virt-manager, the QEMU arguments
stored in the XML files can only be the ones described on this page:
https://wiki.libvirt.org/QEMUSwitchToLibvirt.html
But maybe I misunderstood?
For example, when I run QEMU directly, I can add arguments like this:
qemu [...] -object memory-backend-ram,size=4G,id=ram0 \
           -object memory-backend-ram,size=2G,id=ram1 \
           -numa node,nodeid=0,memdev=ram0,cpus=0-3 \
           -numa node,nodeid=1,memdev=ram1,cpus=4-7
to get two NUMA nodes with 4 CPUs each.
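From skimming the domain XML format documentation, I suspect this
particular example maps onto guest NUMA cells roughly like this, though
I'm not sure it generalizes to every -object/-device use:
<vcpu>8</vcpu>
<cpu>
  <numa>
    <cell id='0' cpus='0-3' memory='4' unit='GiB'/>
    <cell id='1' cpus='4-7' memory='2' unit='GiB'/>
  </numa>
</cpu>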
Is there an equivalent of the -object, -numa, and -device QEMU arguments
when using libvirt? Or are the arguments restricted to the ones
described on the wiki page?
Thanks for your help,
Have a good day,
Pierre Clouzet
8 months, 3 weeks