[libvirt-users] Network problem when rebooting Fedora qemu/kvm guest on Gentoo host
by Dan Johansson
Hello,
I have a problem with some of my qemu/kvm guests running Fedora on a
Gentoo host, while my Gentoo guests work without problems.
The problem is that when I reboot a Fedora guest (shutdown -r now), it
"loses" its network connection (ifconfig no longer shows an IP address).
The Gentoo guests reboot without problems.
All the guests have the same HW configuration; only the name, disk image
and MAC address differ.
Here are some technical details:
Host:
Distribution Gentoo
Kernel 4.9.95-gentoo
Qemu app-emulation/qemu-2.11.1-r2
USE="aio bzip2 caps curl fdt filecaps gnutls jpeg lzo ncurses nls
pin-upstream-blobs png sdl seccomp vhost-net vnc xattr"
QEMU_SOFTMMU_TARGETS="x86_64"
QEMU_USER_TARGETS="x86_64"
libvirt app-emulation/libvirt-4.3.0
USE="caps dbus libvirtd nls qemu udev"
Guest:
Distribution Fedora 28
Kernel 4.16.15-300.fc28.x86_64
Networkmanager NetworkManager.x86_64 1:1.10.8-1.fc28
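In case it helps the diagnosis, these are the generic checks I plan to run
on a Fedora guest right after it comes back up (nothing here is specific to
my setup):

nmcli device status                            # is the NIC detected and managed?
nmcli connection show                          # which profiles exist and are active?
journalctl -b -u NetworkManager | tail -n 50   # DHCP or activation errors this boot
ip -br addr show                               # addresses, independent of ifconfig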
Any suggestions?
--
Dan Johansson
***************************************************
This message is printed on 100% recycled electrons!
***************************************************
[libvirt-users] Feedback for a new application NIMO using the libvirt API
by Jay Mehta
Hello folks,
A while back I developed a VM instance monitoring tool called "NIMO"
(Nova Instance MOnitor, for OpenStack's Nova project). The need for NIMO
arose from a requirement to monitor a VM's boot stages.
A VM boot goes through multiple stages, such as creating the block device,
mounting the root filesystem, mounting local filesystems, acquiring a DHCP
lease, running post-install activities, etc., and it can fail or get stuck
at any of these stages. It is useful to have a monitor in place to identify
which stage a booting VM is in and how much time it spends in each stage.
Such a tool is especially needed in the cloud space, where end users do
not have access to log on to the hypervisor.
It also helps people who build custom VM images, add a lot of tasks to the
post-install script, and would like to know the performance of each step.
Below is the GitHub link; the repository includes an architecture diagram.
https://github.com/jay-g-mehta/nimo
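As a rough illustration of the kind of signals such a monitor can build on
(this is not how NIMO itself is implemented, just what libvirt already
exposes from the shell; GUEST_NAME is a placeholder):

# Stream domain events (lifecycle, reboot, ...) for all domains as they happen
virsh event --all --loop

# Attach to the guest console to watch boot-stage output scroll by
virsh console GUEST_NAME --force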
I would love to hear your feedback, and please feel free to use it and
extend it.
Thanks,
Jay
[libvirt-users] "virsh" list
by Bo YU
Hello,
I installed libvirt from the package manager (Debian apt) and want to
modify a feature in it, so I cloned the git repository from
https://libvirt.org/git/?p=libvirt.git;a=summary
But now I have trouble starting virsh.
I have followed this earlier email thread:
https://www.redhat.com/archives/libvirt-users/2017-February/msg00074.html
Here is the output of the command below:
strace -o libvirt.log -f -s 1000 /usr/sbin/libvirtd
https://github.com/yuzibo/linux-programming/blob/master/tmp/libvirt.log
And here is the gdb session and backtrace:
sudo gdb -p $(pgrep libvirtd)
(gdb) t a a bt
[New LWP 14781]
[New LWP 14782]
[New LWP 14783]
[New LWP 14784]
[New LWP 14785]
[New LWP 14786]
[New LWP 14787]
[New LWP 14788]
[New LWP 14789]
[New LWP 14790]
[New LWP 14791]
[New LWP 14792]
[New LWP 14793]
[New LWP 14794]
[New LWP 14795]
[New LWP 14796]
[New LWP 14799]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
0x00007fcf4c75f67d in poll () at ../sysdeps/unix/syscall-template.S:84
84      ../sysdeps/unix/syscall-template.S: No such file or directory.
Thread 18 (Thread 0x7fcf3a4e8700 (LWP 14799)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x7fcf3400cc90,
m=m@entry=0x7fcf3400cc50) at util/virthread.c:154
#2 0x00007fcf405498a0 in udevEventHandleThread (opaque=<optimized out>)
at node_device/node_device_udev.c:1606
#3 0x00007fcf4e17ec02 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3a4e8700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 17 (Thread 0x7fcf3b315700 (LWP 14796)):
#0 0x00007fcf4c75f67d in poll () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007fcf4c77917e in __poll_chk (fds=<optimized out>,
nfds=<optimized out>, timeout=<optimized out>, fdslen=<optimized out>)
at poll_chk.c:27
#2 0x00007fcf4e10f281 in poll (__timeout=-1, __nfds=<optimized out>,
__fds=0x7fcf3b3143b0) at /usr/include/x86_64-linux-gnu/bits/poll2.h:41
#3 virCommandProcessIO (cmd=cmd@entry=0x7fcf34015d30)
at util/vircommand.c:2085
#4 0x00007fcf4e1134ea in virCommandRun (cmd=0x7fcf34015d30,
exitstatus=exitstatus@entry=0x7fcf3b314924) at util/vircommand.c:2316
#5 0x00007fcf3e239fca in virQEMUCapsInitQMPCommandRun (
cmd=cmd@entry=0x7fcf34015b10, forceTCG=forceTCG@entry=false)
at qemu/qemu_capabilities.c:4262
#6 0x00007fcf3e23f4b9 in virQEMUCapsInitQMP (qmperr=0x7fcf3b314970,
runGid=<optimized out>, runUid=<optimized out>, libDir=<optimized out>,
qemuCaps=0x7fcf340159d0) at qemu/qemu_capabilities.c:4318
#7 virQEMUCapsNewForBinaryInternal (hostArch=VIR_ARCH_X86_64,
binary=0x7fcf3403e160 "/usr/local/bin/qemu-system-x86_64",
libDir=<optimized out>, runUid=<optimized out>, runGid=<optimized out>,
microcodeVersion=166,
kernelVersion=0x7fcf3404e120 "4.14.0+ #1 SMP Sun Nov 26 15:17:48 HKT 2017")
at qemu/qemu_capabilities.c:4405
#8 0x00007fcf3e23f793 in virQEMUCapsNewData (binary=<optimized out>,
privData=<optimized out>) at qemu/qemu_capabilities.c:4447
#9 0x00007fcf4e18ca1e in virFileCacheNewData (
name=0x7fcf3403e160 "/usr/local/bin/qemu-system-x86_64",
cache=0x7fcf3404ddf0) at util/virfilecache.c:219
#10 virFileCacheValidate (cache=cache@entry=0x7fcf3404ddf0,
name=name@entry=0x7fcf3403e160 "/usr/local/bin/qemu-system-x86_64",
data=data@entry=0x7fcf3b314ae0) at util/virfilecache.c:290
#11 0x00007fcf4e18cd77 in virFileCacheLookup (
cache=cache@entry=0x7fcf3404ddf0,
name=name@entry=0x7fcf3403e160 "/usr/local/bin/qemu-system-x86_64")
at util/virfilecache.c:323
#12 0x00007fcf3e23f9ee in virQEMUCapsCacheLookup (
cache=cache@entry=0x7fcf3404ddf0,
binary=0x7fcf3403e160 "/usr/local/bin/qemu-system-x86_64")
at qemu/qemu_capabilities.c:4583
#13 0x00007fcf3e23fc94 in virQEMUCapsInitGuest (guestarch=VIR_ARCH_X86_64,
hostarch=VIR_ARCH_X86_64, cache=0x7fcf3404ddf0, caps=0x7fcf3404ded0)
at qemu/qemu_capabilities.c:736
#14 virQEMUCapsInit (cache=0x7fcf3404ddf0) at qemu/qemu_capabilities.c:971
#15 0x00007fcf3e28d3a0 in virQEMUDriverCreateCapabilities (
driver=driver@entry=0x7fcf3401cbd0) at qemu/qemu_conf.c:1101
#16 0x00007fcf3e2d0be2 in qemuStateInitialize (privileged=true,
callback=<optimized out>, opaque=<optimized out>) at qemu/qemu_driver.c:851
#17 0x00007fcf4e2b0bff in virStateInitialize (privileged=<optimized out>,
callback=0x564b91ad18a0 <daemonInhibitCallback>, opaque=0x564b933e4540)
at libvirt.c:662
#18 0x0000564b91ad18fb in daemonRunStateInit (opaque=0x564b933e4540)
at remote/remote_daemon.c:802
#19 0x00007fcf4e17ec02 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#20 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3b315700)
at pthread_create.c:333
#21 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 16 (Thread 0x7fcf3bb16700 (LWP 14795)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933f1bc8,
m=m@entry=0x564b933f1ba0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d6d80) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3bb16700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 15 (Thread 0x7fcf3c317700 (LWP 14794)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933f1bc8,
m=m@entry=0x564b933f1ba0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d6cc0) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3c317700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 14 (Thread 0x7fcf3cb18700 (LWP 14793)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933f1bc8,
m=m@entry=0x564b933f1ba0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d6c00) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3cb18700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 13 (Thread 0x7fcf3d319700 (LWP 14792)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933f1bc8,
m=m@entry=0x564b933f1ba0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d6b20) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3d319700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 12 (Thread 0x7fcf3db1a700 (LWP 14791)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933f1bc8,
m=m@entry=0x564b933f1ba0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d6a60) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf3db1a700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 11 (Thread 0x7fcf41df5700 (LWP 14790)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e49b8,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f944 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5ca0) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf41df5700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 10 (Thread 0x7fcf425f6700 (LWP 14789)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e49b8,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f944 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5b00) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf425f6700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 9 (Thread 0x7fcf42df7700 (LWP 14788)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e49b8,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f944 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5bc0) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf42df7700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 8 (Thread 0x7fcf435f8700 (LWP 14787)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e49b8,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f944 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5b00) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf435f8700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 7 (Thread 0x7fcf43df9700 (LWP 14786)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e49b8,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f944 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5d60) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf43df9700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 6 (Thread 0x7fcf445fa700 (LWP 14785)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e4918,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5b00) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf445fa700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 5 (Thread 0x7fcf44dfb700 (LWP 14784)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e4918,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5bc0) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf44dfb700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 4 (Thread 0x7fcf455fc700 (LWP 14783)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e4918,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5ca0) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf455fc700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 3 (Thread 0x7fcf45dfd700 (LWP 14782)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e4918,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5d60) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf45dfd700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 2 (Thread 0x7fcf465fe700 (LWP 14781)):
#0 pthread_cond_wait@@GLIBC_2.3.2 ()
at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007fcf4e17ee36 in virCondWait (c=c@entry=0x564b933e4918,
m=m@entry=0x564b933e48f0) at util/virthread.c:154
#2 0x00007fcf4e17f983 in virThreadPoolWorker (
opaque=opaque@entry=0x564b933d5e20) at util/virthreadpool.c:124
#3 0x00007fcf4e17ebd8 in virThreadHelper (data=<optimized out>)
at util/virthread.c:206
#4 0x00007fcf4ca26494 in start_thread (arg=0x7fcf465fe700)
at pthread_create.c:333
#5 0x00007fcf4c768acf in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:97
Thread 1 (Thread 0x7fcf4eb4dd40 (LWP 14780)):
#0 0x00007fcf4c75f67d in poll () at ../sysdeps/unix/syscall-template.S:84
#1 0x00007fcf4e122684 in poll (__timeout=-1, __nfds=6, __fds=<optimized out>)
at /usr/include/x86_64-linux-gnu/bits/poll2.h:46
#2 virEventPollRunOnce () at util/vireventpoll.c:641
#3 0x00007fcf4e121371 in virEventRunDefaultImpl () at util/virevent.c:327
#4 0x00007fcf4e243e4d in virNetDaemonRun (dmn=0x564b933e4540)
at rpc/virnetdaemon.c:850
#5 0x0000564b91ad0d5e in main (argc=<optimized out>, argv=<optimized out>)
at remote/remote_daemon.c:1460
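For what it's worth, the backtrace suggests the daemon is busy probing
/usr/local/bin/qemu-system-x86_64 for capabilities, so the next things I
plan to check by hand are (just planned commands, no output yet):

# Does the QEMU binary that libvirtd is probing respond on its own?
/usr/local/bin/qemu-system-x86_64 -version

# Run the daemon in the foreground with verbose logging
sudo /usr/sbin/libvirtd --verbose

# Then, from another terminal, connect explicitly
virsh -c qemu:///system list --all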
Thank you in advance!
[libvirt-users] Amazon s3 as libvirt storage pool
by Shashwat shagun
Namaste,
I want to use Minio (an open-source, Amazon S3-compatible object store) as
a libvirt storage pool. Is such a thing possible? I apologize if this is a
stupid question. If it is not possible, could I write a driver for it?
--
Regards,
Shashwat Shagun
[libvirt-users] Is more than one overlay possible without receiving a "Permission denied"?
by Xen Mann
I've created a backing chain like this, where FirstFloor.ovl is an overlay on top of Base.img, and so on:
Base.img (ok - Win 10 Installation starting)
FirstFloor.ovl (ok - Win 10 Installation starting)
SecondFloor.ovl (permission denied -> Base.img) => assumed Bug
Roof.ovl (permission denied -> Base.img) => assumed Bug
Using virsh / virt-manager, I can only start Base and FirstFloor; starting
SecondFloor and Roof fails.
What's wrong?
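For reference, this is roughly how a chain like the one above is created
and inspected with qemu-img (a sketch assuming qcow2 throughout, not
necessarily the exact commands I originally ran):

# Each overlay names the previous image as its backing file
qemu-img create -f qcow2 -o backing_file=Base.img,backing_fmt=qcow2 FirstFloor.ovl
qemu-img create -f qcow2 -o backing_file=FirstFloor.ovl,backing_fmt=qcow2 SecondFloor.ovl
qemu-img create -f qcow2 -o backing_file=SecondFloor.ovl,backing_fmt=qcow2 Roof.ovl

# Show the whole backing chain as QEMU/libvirt will resolve it
qemu-img info --backing-chain Roof.ovl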
Details are described in:
https://bugzilla.redhat.com/show_bug.cgi?id=1588373
System info:
Compiled against library: libvirt 4.0.0
Using library: libvirt 4.0.0
Using API: QEMU 4.0.0
Running hypervisor: QEMU 2.11.1
Ubuntu 18.04 LTS (Dell Precision Tower 3620)
[libvirt-users] List-Archives ...
by thg
Hi everybody,
I actually wanted to search the list archive before asking, but
unfortunately I can't get it to work:
$ gunzip 2018-May.txt.gz
gunzip: 2018-May.txt.gz: not in gzip format
It seems that every archive file contains one message in plain text
followed by a big binary block.
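For what it's worth, this is how I am checking what the downloaded file
actually contains (output omitted here):

file 2018-May.txt.gz                       # what does libmagic think it is?
head -c 4 2018-May.txt.gz | od -A n -t x1  # real gzip data starts with 1f 8b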
Any hint?
Thanks a lot,
--
kind regards,
thg
[libvirt-users] Vlan on vhostuser interfaces
by Pradeep K.S
When I try to configure a vlan on a vhostuser interface in the libvirt XML,
it throws an error.
Is this configuration supported? I can install a higher version if required.
required.
Error:
unsupported configuration: an interface of type 'vhostuser' is requesting a
vlan tag, but that is not supported for this type of connection
Libvirt version:
[redhathost@qemu]# libvirtd --version
libvirtd (libvirt) 3.2.0
Qemu version:
QEMU emulator version 2.9.0(qemu-kvm-ev-2.9.0-16.el7_4.11.1)
Domain XML:
<interface type='vhostuser'>
  <mac address='02:3a:c1:4b:a1:0a'/>
  <source type='unix' path='/tmp/vhost-test' mode='server'/>
  <vlan trunk='yes'>
    <tag id='3' nativeMode='tagged'/>
    <tag id='4'/>
  </vlan>
  <model type='virtio'/>
  <driver rx_queue_size='1024'>
    <host mrg_rxbuf='on'/>
  </driver>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
--
Thanks and Regards,
Pradeep.K.S.
[libvirt-users] Make discard='unmap' the default?
by Ian Pilcher
Is it possible to make discard='unmap' the default for virtio-scsi
disks? (Related, is it possible to make virtio-scsi the default disk
type, rather than virtio-blk?)
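For context, the per-disk setting I mean looks roughly like this (a trimmed,
illustrative example; the file path and target name are placeholders):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' discard='unmap'/>
  <source file='/var/lib/libvirt/images/example.qcow2'/>
  <target dev='sda' bus='scsi'/>
</disk>
<controller type='scsi' model='virtio-scsi'/>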
Thanks!
--
========================================================================
Ian Pilcher arequipeno@gmail.com
-------- "I grew up before Mark Zuckerberg invented friendship" --------
========================================================================
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate MAC
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
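To make the collision risk concrete, here is a rough simulation (plain
bash, not libvirt code; the 2-second restart window and 100-wide PID range
are assumptions matching what I observed):

# Simulate 30 hosts restarting together: start time within a 2-second
# window, libvirtd PID within a 100-wide range, seed = time ^ pid.
# Every line printed is a seed shared by at least two simulated hosts.
for i in $(seq 30); do
  t=$(( 1528204800 + RANDOM % 2 ))   # epoch seconds, 2-second spread
  p=$(( 14700 + RANDOM % 100 ))      # pid, ~100-wide spread
  echo $(( t ^ p ))
done | sort -n | uniq -d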
[libvirt-users] Two Node Cluster
by Cobin Bluth
Hello Libvirt Users,
I would like to set up a two-node bare-metal cluster and need some guidance
on the network configuration. I have attached a small diagram; the same
diagram can be seen here: https://i.imgur.com/SOk6a6G.png
I would like to configure the following details:
- Each node has a DHCP-enabled guest network where VMs will run (e.g.,
192.168.1.0/24 for Host1 and 192.168.2.0/24 for Host2).
- Any guest in Host1 should be able to ping guests in Host2, and vice versa.
- All guests have routes to reach the open internet (so that 'yum update'
will work "out-of-the-box")
- Each node will be able to operate fully if the other physical node fails.
(no central DHCP server, etc)
- I would like to add more physical nodes later when I need the resources.
This is what I have done so far:
- Installed the latest Ubuntu 18.04, with the latest version of libvirt and
supporting software from Ubuntu's apt repo.
- Each node can reach the other via its own eth0.
- Each node has a working vxlan0, which can ping the other via its vxlan0,
so it looks like the vxlan config is working. (I used ip link add vxlan0
type vxlan...; see the sketch after this list.)
- Configured route on Host1 like so: ip route add 192.168.2.0/24 via 172.20.0.1
- Configured route on Host2 also: ip route add 192.168.1.0/24 via 172.20.0.2
- All guests on Host1 (and Host2) can ping eth0 and vxlan0 on Host2, and
vice versa, yay.
- Guests on Host1 cannot ping guests on Host2, I suspect because of the
default NAT config of the libvirt network.
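For clarity, this is roughly the shape of the vxlan and routing setup on
Host1 sketched above (the VNI, UDP port, and Host2 underlay address are
illustrative placeholders, not my exact values; Host2 mirrors this with the
addresses swapped):

# Point-to-point VXLAN over the hosts' eth0 underlay
ip link add vxlan0 type vxlan id 42 dev eth0 remote HOST2_ETH0_IP dstport 4789
ip addr add 172.20.0.2/24 dev vxlan0
ip link set vxlan0 up

# Send traffic for Host2's guest network across the tunnel
ip route add 192.168.2.0/24 via 172.20.0.1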
So, at this point I started to search for tutorials and more
information/documentation, but I am a little overwhelmed by the sheer
amount of information, as well as a lot of "stale" information on blogs etc.
I have learned that I can virsh net-edit default and then change it to
an "open" network: <forward mode='open'/>
After doing this, the guests cannot reach outside their own network, nor
reach the internet, so I assume I would need to add some routes, or
something else, to get the network functioning the way I want it. There is
also <forward mode='route'/>, but I don't fully understand the scenarios
where one would need an "open" or a "route" forward mode. I have also
shied away from using openvswitch, and have opted for ifupdown2.
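For reference, the network definition I keep editing looks roughly like
this on Host1 (the bridge name, gateway address, and DHCP range are
illustrative assumptions; the forward mode is the part I am actually asking
about):

<network>
  <name>default</name>
  <forward mode='open'/>
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.1.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.1.100' end='192.168.1.200'/>
    </dhcp>
  </ip>
</network>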
(I have taken most of my inspiration from this blog post:
https://joejulian.name/post/how-to-configure-linux-vxlans-with-multiple-u...
)
Some questions I have for the mailing list; any help would be greatly
appreciated:
- Is my target configuration of a KVM cluster uncommon? Do you see
drawbacks of this setup, or does it go against "typical convention"?
- Would my scenario be better suited for an "open" network or a "route"
network?
- What would be the approach to complete this setup?