ERROR Couldn't find hvm kernel for Ubuntu tree.
by Kaushal Shriyan
Hi,
I am running the command below to spawn an Ubuntu 18.04-based virtual
machine using KVM.
#virt-install --version
1.5.0
#
virt-install --name=snipeitassetmanagement
> --file=/linuxkvmguestosdisk/snipeitassetmanagement.img --file-size=40
> --nonsparse --vcpus=2 --ram=8096 --network=bridge:br0 --os-type=linux
> --os-variant=ubuntu18.04 --graphics none
> --location=/linuxkvmguestosdisk/var/lib/libvirt/isos/ubuntu-18.04.5-live-server-amd64.iso
> --extra-args="console=ttyS0"
>
> Starting install...
> Retrieving file .treeinfo...    |    0 B  00:00:00
> Retrieving file content...      |    0 B  00:00:00
> Retrieving file info...         |   70 B  00:00:00
> ERROR Couldn't find hvm kernel for Ubuntu tree.
> Domain installation does not appear to have been successful.
> If it was, you can restart your domain by running:
> virsh --connect qemu:///system start snipeitassetmanagement
> otherwise, please restart your installation.
Any clues? I look forward to hearing from you. Thanks in advance.
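For what it is worth, one workaround I am going to try, assuming the cause
is that the 18.04.5 live-server ISO does not contain the installer
kernel/initrd tree layout that --location expects, is to point --location
at the bionic netboot tree on an Ubuntu mirror instead, keeping the serial
console arguments:

virt-install --name=snipeitassetmanagement \
  --file=/linuxkvmguestosdisk/snipeitassetmanagement.img --file-size=40 \
  --nonsparse --vcpus=2 --ram=8096 --network=bridge:br0 --os-type=linux \
  --os-variant=ubuntu18.04 --graphics none \
  --location=http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/ \
  --extra-args="console=ttyS0"

The other option would be booting the ISO directly with --cdrom instead of
--location, but then --extra-args cannot be used, since kernel arguments
can only be injected when virt-install itself fetches the kernel.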
Best Regards,
Usage of virsh commands on a guest LXC container is failing
by Yuva Raj
Hi Team,
I am new to QEMU/KVM, libvirt, and related technologies.
The hypervisor is running on a Linux kernel with libvirtd version 1.3.2.
I spawned an Ubuntu 21.04 LXC container using virsh -c lxc:// commands
on the hypervisor, and the container is running now.
On the Ubuntu 21.04 container, I have installed the Debian "libvirt-clients"
package to use the virsh utility.
I updated /etc/libvirt/libvirt.conf with the hypervisor's QEMU URI.
When I run virsh commands without a URI, I observe the error below:
root@host:~# virsh list
error: failed to connect to the hypervisor
error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory
root@host:~# virsh -c <Qemu URI> list
 Id   Name   State
--------------------------
 2    VM     running
I would like virsh commands executed without the "-c" option on the Ubuntu
21.04 container to list the running VMs, the same as they do on the
hypervisor. Could you please help me get this working on the LXC Ubuntu
container?
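For reference, this is what I have in the container's
/etc/libvirt/libvirt.conf, assuming uri_default is the right knob (the
hypervisor address below is a placeholder for the real one):

# /etc/libvirt/libvirt.conf inside the container
# "hypervisor.example.com" stands in for the real hypervisor address
uri_default = "qemu+ssh://root@hypervisor.example.com/system"

Setting the LIBVIRT_DEFAULT_URI environment variable to the same value
should have the same effect for a single shell session.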
Libvirt XML used to spawn the LXC Ubuntu 21.04 container:
<domain type='lxc'>
  <name>host</name>
  <uuid>096f46bf-80bb-4441-a512-043a9c7a64d4</uuid>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static' cpuset='7'>1</vcpu>
  <os>
    <type arch='x86_64'>exe</type>
    <init>/sbin/init</init>
  </os>
  <features>
    <capabilities policy='allow'>
      <audit_control state='on'/>
      <audit_write state='on'/>
      <block_suspend state='on'/>
      <chown state='on'/>
      <ipc_lock state='on'/>
      <ipc_owner state='on'/>
      <kill state='on'/>
      <mac_admin state='on'/>
      <mac_override state='on'/>
      <mknod state='on'/>
      <net_admin state='on'/>
      <net_bind_service state='on'/>
      <net_broadcast state='on'/>
      <net_raw state='on'/>
      <sys_admin state='on'/>
      <sys_boot state='on'/>
      <sys_ptrace state='on'/>
      <sys_rawio state='on'/>
      <sys_resource state='on'/>
      <sys_time state='on'/>
      <syslog state='on'/>
    </capabilities>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>destroy</on_reboot>
  <on_crash>destroy</on_crash>
  <devices>
    <emulator>/usr/lib64/libvirt/libvirt_lxc</emulator>
    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/junos/lxc/jdm/jdm1/rootfs'/>
      <target dir='/'/>
    </filesystem>
  </devices>
</domain>
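Alternatively, assuming it is acceptable to expose the host's libvirt
socket inside the container, I guess an extra filesystem passthrough in
the <devices> section above would let virsh in the container reach the
host daemon through the default local socket, so no URI would be needed
at all (untested sketch):

  <filesystem type='mount' accessmode='passthrough'>
    <source dir='/var/run/libvirt'/>
    <target dir='/var/run/libvirt'/>
  </filesystem>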
Thanks,
Yuvaraj.
Attach a PCIe root port as hostdev
by Jiatong Shen
Hello community,
I am working on passing a Xilinx FPGA card through to a guest as a
hostdev. The card exposes two PCI functions, each with a different role.
For some reason I would like to attach both of them together rather than
separately, so I tried to attach the PCI bridge that the FPGA functions
sit behind, but got the following error:
error: Failed to attach device from bridge.xml
error: internal error: Non-endpoint PCI devices cannot be assigned to guests
So my question is: is it possible to attach a PCI bridge as a hostdev?
Thank you.
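The fallback I have in mind, assuming the two functions can be assigned
individually, is to attach them as two separate hostdevs that share one
guest slot via multifunction='on', so the guest still sees them as
functions of a single device (the host bus/slot values below are
placeholders for my card's real address):

<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
</hostdev>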
--
Best Regards,
Jiatong Shen
Sharing dhcp leases between multiple host systems
by Michael Ablassmeier
hi,
assume I have multiple host systems which spin up virtual machines using
the vagrant/vagrant-libvirt provider. Each host system has a defined
network (which has the same name on all hosts) to which the first network
interface of each virtual machine is attached.
During boot of the virtual machine, the first network device is
configured via DHCP, and vagrant uses the MAC address table or the libvirt
DHCP leases table to find out which IP address was assigned to the
virtual machine. From that point on, I can reach the virtual machine
locally on the host system.
This works nicely if the network is a libvirt NAT network, as the IP
addresses are unique on both host systems.
Now I want to change the setup and provide routed addresses, so I want
to make sure that an IP assigned to a virtual machine on host A is not
re-used on host B, to avoid IP address conflicts.
What I'm searching for is the "libvirt" way to have a central lease file
shared between multiple hosts for the same network (without adding another
layer like OVS/OVN).
What I guess would work is:
1) share /var/lib/libvirt/dnsmasq between both host systems, which of
   course means the virtual bridge for the network has to have the same
   name on both systems.
2) replace /usr/libexec/libvirt_leaseshelper with my own version that
   stores the leases in a central place; see the sketch below.
3) a way that exists and I don't know about?
Option 2) sounds best to me, but I currently don't see a way to specify
the dhcp-script used for a network on the libvirt side. Any opinions on
this?
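For option 2), this is the kind of wrapper I have in mind, a minimal
sketch assuming the stock helper has been moved aside to
libvirt_leaseshelper.orig and that /shared/leases is a directory shared
between the hosts (e.g. an NFS mount):

#!/bin/sh
# Replacement /usr/libexec/libvirt_leaseshelper (sketch, untested).
# dnsmasq invokes its dhcp-script as: <action> <mac> <ip> [<hostname>],
# with further details passed in DNSMASQ_* environment variables.
# First let the original helper maintain the local status file ...
/usr/libexec/libvirt_leaseshelper.orig "$@"
# ... then append the event to the central, shared lease log.
echo "$(date +%s) $*" >> /shared/leases/central.leases

This still leaves the second half of the problem open: something on each
host would have to feed the central log back into dnsmasq's view, so that
a lease taken on host A is never handed out on host B.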
Using libvirt 7.x and the like from the CentOS 8 advanced virtualization
stream.
thanks,
- michael
Disk extend during migration
by Vojtech Juranek
Hi,
as a follow-up to BZ #1883399 [1], we are reviewing the vdsm VM migration
flows and solving a few follow-up bugs, e.g. BZ #1981079 [2]. I have a
couple of questions related to libvirt:
* if we run a disk extend during migration, it can happen that the
migration finishes sooner than the disk extend. In such a case we will try
to set the disk threshold on an already stopped VM (we handle the libvirt
event that the VM was stopped, but due to the Python GIL there can be a
delay between obtaining the corresponding signal from libvirt and handling
it), and libvirt raises VIR_ERR_OPERATION_INVALID when we set the disk
threshold. Is it safe to catch this exception and ignore it (see the
sketch below), or is it thrown for various reasons, so that the root cause
can be something other than a stopped VM?
* after a disk extend, we resume the VM if it is stopped (usually due to
running out of disk space). Is it safe to do so also when we do the disk
extend during migration and the VM can be stopped because it was already
migrated? I.e. can we assume that libvirt will handle such a situation and
won't resume the VM in that case? We do some checks before resuming and
try to avoid resuming a migrated VM, but there can be corner cases, and it
would be useful to know whether we can rely on libvirt to prevent resuming
the VM in unwanted cases like the one where the VM is stopped after
migration.
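For context, this is roughly what we do on the vdsm side, reduced to a
minimal sketch using the libvirt Python bindings; whether ignoring
VIR_ERR_OPERATION_INVALID and gating the resume on the paused-for-I/O-error
reason is sufficient is exactly what the two questions above ask:

import libvirt

def set_block_threshold(dom, drive, threshold):
    # Set the write threshold on a drive, tolerating the race where the
    # VM already stopped (e.g. the migration finished) before this call.
    try:
        dom.setBlockThreshold(drive, threshold)
    except libvirt.libvirtError as e:
        if e.get_error_code() == libvirt.VIR_ERR_OPERATION_INVALID:
            # Presumably the domain is no longer running -- but can this
            # error code also mean something else?
            return False
        raise
    return True

def resume_after_extend(dom):
    # Resume only when the VM is paused because it ran out of disk space,
    # to avoid resuming a VM that is stopped for another reason (e.g. it
    # was already migrated away).
    state, reason = dom.state()
    if (state == libvirt.VIR_DOMAIN_PAUSED
            and reason == libvirt.VIR_DOMAIN_PAUSED_IOERROR):
        dom.resume()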
Thanks
Vojta
[1] https://bugzilla.redhat.com/1883399
[2] https://bugzilla.redhat.com/1981079