[libvirt-users] VM boot fails with systemd/dbus error messages
by PRIYANKA M A
We have one server machine (Ubuntu 16.04) running 10 VMs.
When the server is rebooted, some of the VMs fail to boot with systemd/dbus
error messages.
Rebooting the affected VMs resolves the issue.
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to NameOwnerChanged
signal for 'org.freedesktop.DisplayManager': Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to NameOwnerChanged
signal for 'org.freedesktop.NetworkManager': Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to NameOwnerChanged
signal for 'org.freedesktop.login1': Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to NameOwnerChanged
signal for 'org.freedesktop.Accounts': Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to NameOwnerChanged
signal for 'org.freedesktop.Avahi': Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to NameOwnerChanged
signal for 'org.freedesktop.ModemManager1': Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to subscribe to activation signal:
Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to register name: Connection timed out
Jan 30 20:40:02 VM systemd[1]: Failed to set up API bus: Connection timed
out
Jan 30 20:40:32 VM dbus[1009]: [system] Failed to activate service
'org.freedesktop.systemd1': timed out
Jan 30 20:40:32 VM systemd-logind[998]: Failed to enable subscription:
Failed to activate service 'org.freedesktop.systemd1': timed out
Jan 30 20:40:32 VM systemd-logind[998]: Failed to fully start up daemon:
Connection timed out
Host machine specifications:
OS : Ubuntu 16.04
Kernel : 4.15.0-47-generic
CPUs : 48 (2 sockets × 12 cores × 2 threads)
RAM : 64G
VM specifications:
OS : Ubuntu 16.04
Kernel : 4.13.16
CPUs : 8
RAM : 4G
The full log file is attached.
Kindly help us resolve this issue with a permanent fix.
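One workaround we are considering, on the assumption (only a guess on our
side) that the guests starve each other by all booting in parallel after a
host reboot, is to disable autostart and stagger the starts from the host.
A sketch, with placeholder guest names:

# disable parallel autostart for all guests that currently have it
for vm in $(virsh list --all --autostart --name); do
    virsh autostart "$vm" --disable
done
# then start the guests one by one from a boot-time script instead
for vm in vm01 vm02 vm03; do    # replace with the real guest names
    virsh start "$vm"
    sleep 30                    # let each guest settle before the next
done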
With Best Regards,
M A PRIYANKA
VVDN Technologies Pvt Ltd
Cell: +91 8489574081 | Skype: priyankamathavan27
[libvirt-users] GVT-g - suboptimal user experience
by Alex Ivanov
Hi.
In its current state, GVT-g offers a suboptimal user experience.
So my question is: what are the ETAs for the following features?
1. Accelerated virt-manager console using a GVT-g device
2. Custom resolutions or dynamic resolution
3. UEFI VM support (Windows guests)
Thanks.
[libvirt-users] virtio-user interface
by Avi Cohen
Hello all,
I have multiple workload containers connected to my DPDK app.
I have another, special container, to which all packets are mirrored.
For zero-copy reasons, is it possible for the virtio interfaces' shared
memory between my DPDK app and the workload containers to be accessed by
this special container as well?
That means that for every workload container X there is a shared memory
between the:
Re: [libvirt-users] [libvirt] surprising <backingStore type='file'> setting in domain.xml
by Thomas Stein
(switched to libvirt-users as it seems to be more appropriate)
On 2019-05-16 23:02, Eric Blake wrote:
> On 5/16/19 10:20 AM, Thomas Stein wrote:
>> Hello all.
>>
>> My currently used versions: libvirt-5.2.0 and qemu-4.0.0.
>>
>> Here is my problem. I've been struggling for a few weeks with strange
>> behaviour by either qemu or libvirt. After a reboot of
>> the hardware node, the $domain.xml suddenly contains a backingStore
>> setting which was not there before the reboot.
>> Something like this:
>>
>> <devices>
>> <emulator>/usr/bin/qemu-system-x86_64</emulator>
>> <disk type='file' device='disk'>
>> <driver name='qemu' type='qcow2'/>
>> <source
>> file='/var/lib/libvirt/shinymail/shinymail_weekly.qcow2-2019-05-15'/>
>> <backingStore type='file'>
>> <format type='qcow2'/>
>> <source file='/var/lib/libvirt/images/shinymail.qcow2'/>
>
> Yes, this matches:
>
>> </backingStore>
>> <target dev='vda' bus='virtio'/>
>> <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
>> function='0x0'/>
>> </disk>
>> ...
>>
>> This obviously happens after a backup has been run. The backup
>> script looks like this:
>>
>> <snip>
>> virsh snapshot-create-as --domain shinymail weekly --diskspec
>> vda,file=/var/lib/libvirt/shinymail/shinymail_weekly.qcow2-$(date
>> +%Y-%m-%d) --disk-only --atomic --no-metadata
>
> the effects of this command.
>
> Ultimately, I'm TRYING to get my new 'virsh domain-backup' command
> integrated into the next libvirt release, which has the advantage of
> performing a backup WITHOUT having to modify the <domain> XML. But
> until
> that happens, any time you use 'virsh snapshot-create-as' as part of a
> sequence for performing backups, you ARE modifying the <domain> XML,
> and
> if you want to revert to the external backup, or if...
Cool. Will it be in 5.4.0 already?
>>
>> cp ...
>>
>> virsh blockcommit shinymail vda --active --verbose --pivot
>> <snip>
>
> ...blockcommit fails for whatever reason to undo the effects of
> 'snapshot-create-as' in creating a temporary overlay, then yes, you do
> have to worry about the temporary overlay being in the way, where
> you'll
> have to manually edit the <domain> definition to match the actual disk
> layouts you really want.
So you're saying blockcommit is failing me somehow? What I'm asking myself
is why this problem suddenly appeared. It worked this way for literally
years.
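For what it's worth, the chain can be inspected before editing any XML by
hand (a sketch using the paths from this thread; output omitted):

virsh domblklist shinymail    # which file is currently the active layer for vda?
qemu-img info --backing-chain \
  /var/lib/libvirt/shinymail/shinymail_weekly.qcow2-2019-05-15   # overlay -> base chain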
Thanks for your answer, Eric.
cheers
t.
>>
>> So after that, "domblklist shinymail" does show the right source file,
>> but after a reboot the domain tries to use the weekly snapshot
>> again, which leads to filesystem errors.
>>
>> Does anyone have an idea what could cause such behaviour?
>>
[libvirt-users] surprising <backingStore type='file'> setting in domain.xml
by Thomas Stein
Hello all.
My currently used versions: libvirt-5.2.0 and qemu-4.0.0.
Here is my problem. I've been struggling for a few weeks with strange
behaviour by either qemu or libvirt. After a reboot of
the hardware node, the $domain.xml suddenly contains a backingStore
setting which was not there before the reboot.
Something like this:
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source
file='/var/lib/libvirt/shinymail/shinymail_weekly.qcow2-2019-05-15'/>
<backingStore type='file'>
<format type='qcow2'/>
<source file='/var/lib/libvirt/images/shinymail.qcow2'/>
</backingStore>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
</disk>
...
This obviously happens after a backup has been run. The backup
script looks like this:
<snip>
virsh snapshot-create-as --domain shinymail weekly --diskspec
vda,file=/var/lib/libvirt/shinymail/shinymail_weekly.qcow2-$(date
+%Y-%m-%d) --disk-only --atomic --no-metadata
cp ...
virsh blockcommit shinymail vda --active --verbose --pivot
<snip>
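One thing I might add for robustness is a guard around the pivot (a sketch;
the guard is my own idea, not something the commands above require):

<snip>
if virsh blockcommit shinymail vda --active --verbose --pivot; then
    virsh domblklist shinymail    # vda should point at the original image again
else
    echo "pivot failed - the weekly overlay is still the active layer" >&2
    exit 1
fi
<snip>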
So after that, "domblklist shinymail" does show the right source file, but
after a reboot the domain tries to use the weekly snapshot
again, which leads to filesystem errors.
Does anyone have an idea what could cause such behaviour?
cheers
t.
[libvirt-users] domain still running although snapshot-file is deleted !?!
by Lentes, Bernd
Hi,
I have a strange situation:
a domain is still running although domblklist points to a snapshot file, and dumpxml also says the current drive is that snapshot file.
But the file was deleted hours ago, and the domain is still running. I can log in via ssh, the database and the webserver are still running,
and the domain is performant.
How can that be?
lsof also shows that the file is deleted:
qemu-syst 27007 qemu 15ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
qemu-syst 27007 qemu 16ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
qemu-syst 27007 27288 qemu 15ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
qemu-syst 27007 27288 qemu 16ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
CPU\x200/ 27007 27308 qemu 15ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
CPU\x200/ 27007 27308 qemu 16ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
CPU\x201/ 27007 27309 qemu 15ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
CPU\x201/ 27007 27309 qemu 16ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
vnc_worke 27007 27321 qemu 15ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
vnc_worke 27007 27321 qemu 16ur REG 254,14 335609856 183599 /mnt/snap/sim.sn (deleted)
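For context: on Linux, deleting a file only removes its directory entry; the
inode and its data blocks survive for as long as any process keeps the file
open, which would explain why qemu and the guest keep running. While qemu
still holds the descriptors, the overlay's contents can even be copied back
out through /proc. A sketch, with the PID and fd numbers taken from the lsof
output above:

cp /proc/27007/fd/15 /mnt/snap/sim.sn.recovered   # copy the still-open, deleted overlay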
Bernd
--
Bernd Lentes
Systems Administration
Institut für Entwicklungsgenetik
Building 35.34 - Room 208
Helmholtz Zentrum München
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
phone: +49 89 3187 3827
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
those who make mistakes can learn something;
those who do nothing can learn nothing
[libvirt-users] domains paused without any obvious reason
by Lentes, Bernd
Hi,
I have a two-node HA cluster with several domains as resources.
Currently it's running in test mode.
Some domains (all on the same host) stopped running; virsh list shows them as "paused".
They all stopped at the same time (11th of May, 7:00 am), and my monitoring system began to yell.
I don't have any clue why this happened.
virsh domblkerror reports "no space" for all five domains. In the days before, the domains were running fine, and I know that all disks inside the domains should have enough space.
The host is not running out of space either.
The logs don't say anything useful. Unfortunately I didn't have a log for the libvirtd daemon; I have only just configured that now.
Each day at 10:30 pm, cron stops the domains for a short moment, takes a snapshot, starts the domains again, and copies the backing file to a CIFS server; once that has finished, the snapshot is blockcommitted into the backing file.
That has been working fine for several days already. The cron job writes its own log, and it looks fine.
The domains reside on bare logical volumes, and the respective volume group has enough space.
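For the next occurrence I plan to collect at least the following (a sketch;
the domain name is a placeholder):

virsh domblkerror vm01    # which disk reported the I/O error ("no space")
virsh domblklist vm01     # where that disk image actually lives
lvs && vgs                # LV sizes and free extents in the volume group
virsh resume vm01         # a guest paused on ENOSPC can be resumed once space is freed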
Bernd
[libvirt-users] VM display is blank when open (but Gnome Boxes thumbnails ok)
by Michel Rozpendowski
Hi,
I also posted this question on the #virt IRC channel but had not received
any reaction at the time of posting this message to the mailing list.
Since I last turned off my laptop (Dell XPS-15-9570 running Ubuntu 19.04)
on Friday 10 May (I had to do it with a long power-button press, by the
way), I am no longer able to get the display of my VMs: I get a black
screen, even though I can see the VM running in its thumbnail in Gnome Boxes.
You can see the described behaviour on the following video capture:
https://youtu.be/Lv67g0foyzc.
Troubleshooting Log from Gnome Boxes is available here:
https://pastebin.com/9eWGYsJV.
Everything was working properly on Friday before I turned off my laptop.
Any clue what could be wrong and how I could solve this issue? (I dug
around the Internet for 2 hours without success.)
Thanks in advance for your help!
Kind regards,
Michel
[libvirt-users] Cannot get interface MTU - qemu guest fails to start on Open vSwitch
by lejeczek
hi guys
I have a qemu guest and an Open vSwitch bridge, and the guest fails to start:
$ virsh start work8
error: Failed to start domain work8-vm-win2016
error: Cannot get interface MTU on 'ovsbr0': No such device
An LXC guest which uses the same source network starts just fine.
I'm on CentOS 7 with openvswitch-2.9.0-3.el7.x86_64 from the
centos-openstack-pike repo and libvirt-4.5.0-10.el7_6.7.x86_64.
Would anybody care to suggest a possible reason (or reasons) for the failure?
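In case it helps, these are the checks I know of so far (a sketch; <network>
stands for the source network name):

ip link show ovsbr0                        # does the bridge device exist on the host?
ovs-vsctl br-exists ovsbr0 && echo exists  # is it known to Open vSwitch?
virsh net-dumpxml <network>                # a qemu guest needs <virtualport type='openvswitch'/> here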
many thanks, L.
[libvirt-users] Running libvirtd out of the source directory: connection-reset error
by Peter P.
Hi all,
I'm getting started with hacking on libvirt and am trying to
familiarize myself with launching and running an instance of libvirtd
that I built from source on CentOS 7.6.
Following the instructions from https://libvirt.org/compiling.html to
launch my built versions of libvirtd and virsh, I get the following
error with no other context when trying to start a domain using "virsh
start mydomain":
error: Cannot recv data: Connection reset by peer
Despite this error, I am able to run commands like "virsh list".
Are there additional parameters needed to launch libvirtd, or
additional services I need to start up alongside it?
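For reference, this is roughly how I am launching everything, following the
./run wrapper described on the compiling page (the connection URI is my
assumption):

# as root, from the top of the built source tree
./run ./src/libvirtd &                                  # daemon with in-tree paths
./run ./tools/virsh -c qemu:///system list --all        # this works
./run ./tools/virsh -c qemu:///system start mydomain    # this fails as above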
Thanks,
Peter