[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has
finished booting), it always fails: virsh reports success, but the interface
is still present in the domain XML. I'm not sure if there is an existing bug
for this. I have confirmed with someone that disks show similar behavior;
is this also acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
When I detach after the VM has finished booting (expanding the sleep time to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
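In case it helps anyone hitting the same race: rather than guessing at a
sleep value, the detach can be retried until the interface actually
disappears from the live XML. A rough sketch (domain name and MAC are the
ones from the example above; whether retrying during early boot is safe is
exactly the open question here):

# retry the detach until the interface is really gone, up to ~30 seconds
for i in $(seq 1 30); do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
    virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || break
    sleep 1
done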
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
how to use external snapshots with memory state
by Riccardo Ravaioli
Hi all,
Best wishes for 2021! :)
So I've been reading and playing around with live snapshots and still
haven't figured out how to use an external memory snapshot. My goal is to
take a disk+memory snapshot of a running VM and, if possible, save it in
external files.
As far as I understand, I can run:
$ virsh snapshot-create $VM
... and that'll take an *internal* live snapshot of a given VM, consisting
of its disks and memory state, which will be stored in the qcow2 disk(s) of
the VM. In particular, the memory state will be stored in the first disk of
the VM. I can then use the full range of snapshot commands available:
revert, list, current, delete.
Now, an external snapshot can be taken with:
$ virsh snapshot-create-as --domain $VM mysnapshot --diskspec
vda,file=/home/riccardo/disk_mysnapshot.qcow2,snapshot=external --memspec
file=/home/riccardo/mem_mysnapshot.qcow2,snapshot=external
... with as many "--diskspec" as there are disks in the VM.
I've read the virsh manual and the libvirt API documentation, but it's not
clear to me what exactly I can do then with an external snapshot, in
particular with the file containing the memory state. In articles from 7-8
years ago people state that external memory snapshots cannot be reverted...
is it still the case today? If so, what's a typical usage for such files?
If not with libvirt, is it possible to revert to an external memory + disk
state in other ways, for instance through qemu commands?
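One approach I've seen hinted at, which I haven't verified (so treat it as a
sketch): the file produced by --memspec is said to use the same format as the
output of "virsh save", so once the disks are back in their pre-snapshot
state, a manual revert might look like:

$ virsh destroy $VM
# revert the disks to the state captured at snapshot time, e.g. by
# discarding the external overlays and pointing the domain back at the
# original images
$ virsh restore /home/riccardo/mem_mysnapshot.qcow2

Presumably this is only safe if the disk state matches the memory image
exactly.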
Thanks!
Riccardo
Re: [ovirt-devel] Issue: Device path changed after adding disks to guest VM
by Nir Soffer
On Wed, Dec 2, 2020 at 4:57 PM Joy Li <joooy.li(a)gmail.com> wrote:
>
> Thanks a lot Nir! Good to know that oVirt cannot guarantee the disk names, so I don't need to spend more time trying to enable such a feature.
>
> I can always reproduce the problem via my application; basically, the procedure is as follows:
>
> 1. create a VM
Which guest OS? Can you share the guest disk image or the ISO image used
to install it?
> 2. add disks to the VM (e.g.: disk names: disk1, disk3)
> 3. check the disk mappings via `virsh domblklist `
Please share the libvirt domain XML (virsh dumpxml vm-name).
> 4. add another disk (let's say, disk2, give a name alphabetically before some existing disks)
Did you add the disk while the VM was running (hotplug)?
> 5. shutdown the VM via hypervisor and start it again (reboot won't work)
What do you mean by "reboot won't work"?
> 6. `virsh domblklist` again, then you might see the problem I mentioned before
Is the mapping different compared with the state before the reboot?
> There are no virtio devices inside /dev/disk/by-id/xxx of my guest VM.
Maybe you don't have systemd-udev installed?
The links in /dev/disk/... are created by udev during startup, and
when detecting a new disk.
> And I just noticed that the disk mapping information given by the hypervisor (from the VM configuration or the virsh command) is different from the reality inside the VM. The disk names inside the VM were actually not changed.
>
> So now my issue is: given a disk name (/dev/vdb) of a VM, how can I get its wwid? Before, I got it from the hypervisor, but now the hypervisor's information is not reliable, and since the disk is unformatted, I cannot use a UUID.
You can use:
# udevadm info /dev/vda
P: /devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio4/block/vda
N: vda
L: 0
S: disk/by-path/pci-0000:06:00.0
S: disk/by-id/virtio-b97e68b2-87ea-45ca-9
S: disk/by-path/virtio-pci-0000:06:00.0
E: DEVPATH=/devices/pci0000:00/0000:00:02.5/0000:06:00.0/virtio4/block/vda
E: DEVNAME=/dev/vda
E: DEVTYPE=disk
E: MAJOR=252
E: MINOR=0
E: SUBSYSTEM=block
E: USEC_INITIALIZED=10518442
E: ID_SERIAL=b97e68b2-87ea-45ca-9
E: ID_PATH=pci-0000:06:00.0
E: ID_PATH_TAG=pci-0000_06_00_0
E: DEVLINKS=/dev/disk/by-path/pci-0000:06:00.0
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9
/dev/disk/by-path/virtio-pci-0000:06:00.0
E: TAGS=:systemd:
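If you only need the serial for a given node, querying the properties
directly should also work (a sketch):

# print just the ID_SERIAL property for /dev/vdb
# udevadm info --query=property --name=/dev/vdb | grep '^ID_SERIAL='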
I tried to reproduce your issue on a 4.4.5 development build:
Starting a VM with 2 direct LUN disks:
# virsh -r dumpxml disk-mapping
...
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop' io='native' discard='unmap'/>
<source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5'
index='3'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='5'>
<format type='qcow2'/>
<source
dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
<boot order='1'/>
<alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='2'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
<alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
<alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</disk>
...
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
In the guest:
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
NOTE: "d9a29187-f492-4a0d-a" are the first characters of the disk id:
"d9a29187-f492-4a0d-aea2-7d5216c957d7"
seen in oVirt:
https://my-engine/ovirt-engine/webadmin/?locale=en_US#disks-general;id=d9...
Adding another disk that sorts into the middle (while the VM is running):
# virsh -r dumpxml disk-mapping
...
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop' io='native' discard='unmap'/>
<source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5'
index='3'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='5'>
<format type='qcow2'/>
<source
dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
<boot order='1'/>
<alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='2'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
<alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
<alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/36001405b4d0c0b7544d47438b21296ef' index='7'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdc' bus='virtio'/>
<serial>e801c2e4-dc2e-4c53-b17b-bf6de99f16ed</serial>
<alias name='ua-e801c2e4-dc2e-4c53-b17b-bf6de99f16ed'/>
<address type='pci' domain='0x0000' bus='0x09' slot='0x00'
function='0x0'/>
</disk>
...
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc /dev/mapper/36001405b4d0c0b7544d47438b21296ef
In the guest:
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan 6 09:51
/dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
Shutting the VM down and starting it again:
# virsh -r dumpxml disk-mapping
...
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='qcow2' cache='none'
error_policy='stop' io='native' discard='unmap'/>
<source dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5'
index='4'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore type='block' index='6'>
<format type='qcow2'/>
<source
dev='/rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/ad891316-b80a-4ab5-895b-400108bd0ca1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
</backingStore>
<target dev='sda' bus='scsi'/>
<serial>40018b33-2b11-4d10-82e4-604a5b135fb2</serial>
<boot order='1'/>
<alias name='ua-40018b33-2b11-4d10-82e4-604a5b135fb2'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/3600140594af345ed76d42058f2b1a454' index='3'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vda' bus='virtio'/>
<serial>b97e68b2-87ea-45ca-94fb-277d5b30baa2</serial>
<alias name='ua-b97e68b2-87ea-45ca-94fb-277d5b30baa2'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/36001405b4d0c0b7544d47438b21296ef' index='2'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdb' bus='virtio'/>
<serial>e801c2e4-dc2e-4c53-b17b-bf6de99f16ed</serial>
<alias name='ua-e801c2e4-dc2e-4c53-b17b-bf6de99f16ed'/>
<address type='pci' domain='0x0000' bus='0x09' slot='0x00'
function='0x0'/>
</disk>
<disk type='block' device='disk' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop'
io='native' iothread='1'/>
<source dev='/dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5' index='1'>
<seclabel model='dac' relabel='no'/>
</source>
<backingStore/>
<target dev='vdc' bus='virtio'/>
<serial>d9a29187-f492-4a0d-aea2-7d5216c957d7</serial>
<alias name='ua-d9a29187-f492-4a0d-aea2-7d5216c957d7'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</disk>
...
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/36001405b4d0c0b7544d47438b21296ef
vdc /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
In the guest:
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:55
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:55
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan 6 09:55
/dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
Comparing to the state before the reboot:
# virsh -r domblklist disk-mapping
Target Source
---------------------------------------------------------------------------------------------------------------------------------------------------------------
sdc -
sda /rhev/data-center/mnt/blockSD/84dc4e3c-00fd-4263-84e8-fc246eeee6e9/images/40018b33-2b11-4d10-82e4-604a5b135fb2/40f455c4-8c92-4f8f-91c2-991b0ddfc2f5
vda /dev/mapper/3600140594af345ed76d42058f2b1a454
vdb /dev/mapper/360014050058f2f8a0474dc7a8a7cc6a5
vdc /dev/mapper/36001405b4d0c0b7544d47438b21296ef
# ls -lh /dev/disk/by-id/virtio-*
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9 -> ../../vda
lrwxrwxrwx. 1 root root 9 Jan 6 09:42
/dev/disk/by-id/virtio-d9a29187-f492-4a0d-a -> ../../vdb
lrwxrwxrwx. 1 root root 9 Jan 6 09:51
/dev/disk/by-id/virtio-e801c2e4-dc2e-4c53-b -> ../../vdc
In the guest, disks are mapped to the same device names.
It looks like libvirt domblklist is not correct - vdb and vdc are switched.
Peter, is this expected?
Nir
>
> Joy
>
> On Wed, Dec 2, 2020 at 1:28 PM Nir Soffer <nsoffer(a)redhat.com> wrote:
>>
>> On Wed, Dec 2, 2020 at 10:27 AM Joy Li <joooy.li(a)gmail.com> wrote:
>> >
>> > Hi All,
>> >
>> > I'm facing the problem that after adding disks to a guest VM, the device target paths change (my oVirt version is 4.3). For example:
>> >
>> > Before adding a disk:
>> >
>> > virsh # domblklist <vmname>
>> > Target Source
>> > ---------------------------------------------------------
>> > hdc -
>> > vda /dev/mapper/3600a09803830386546244a546d494f53
>> > vdb /dev/mapper/3600a09803830386546244a546d494f54
>> > vdc /dev/mapper/3600a09803830386546244a546d494f55
>> > vdd /dev/mapper/3600a09803830386546244a546d494f56
>> > vde /dev/mapper/3600a09803830386546244a546d494f57
>> > vdf /dev/mapper/3600a09803830386546244a546d494f58
>> >
>> > After adding a disk, and then shutdown and start the VM:
>> >
>> > virsh # domblklist <vmname>
>> > Target Source
>> > ---------------------------------------------------------
>> > hdc -
>> > vda /dev/mapper/3600a09803830386546244a546d494f53
>> > vdb /dev/mapper/3600a09803830386546244a546d494f54
>> > vdc /dev/mapper/3600a09803830386546244a546d494f6c
>> > vdd /dev/mapper/3600a09803830386546244a546d494f55
>> > vde /dev/mapper/3600a09803830386546244a546d494f56
>> > vdf /dev/mapper/3600a09803830386546244a546d494f57
>> > vdg /dev/mapper/3600a09803830386546244a546d494f58
>> >
>> > The devices' multipath doesn't map to the same target path as before, so in my VM the /dev/vdc doesn't point to the old /dev/mapper/3600a09803830386546244a546d494f55 anymore.
>> >
>> > Does anybody know how I can keep the device path mapping fixed, so that it does not change after adding or removing disks?
>>
>> Device nodes are not stable, and oVirt cannot guarantee that you will
>> get the same node in the guest in all runs.
>>
>> You should use /dev/disk/by-id/xxx links to locate devices, and blkid to
>> create fstab mounts that do not depend on node names.
>>
>> Regardless, oVirt tries to keep devices as stable as possible. Do you
>> know how to reproduce this issue reliably?
>>
>> Nir
>>
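For example, a stable mount in the guest's /etc/fstab could use the by-id
link (this reuses the serial of the first disk from the example above; the
mount point and filesystem type are made up):

/dev/disk/by-id/virtio-b97e68b2-87ea-45ca-9  /data  xfs  defaults  0 0

The by-id name follows the disk's serial, so it survives any reordering of
the /dev/vdX nodes.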
Fwd: virsh backup-begin problem
by Andrey Fokin
Ok. Sorry.
On Thu, Jan 21, 2021 at 3:38 PM Peter Krempa <pkrempa(a)redhat.com> wrote:
> On Thu, Jan 21, 2021 at 15:16:44 +0300, Andrey Fokin wrote:
> > Yes, there is an error message.
> > sudo virsh backup-begin lubuntu2 ./backup.xml
> > error: XML document failed to validate against schema: Unable to validate
> > doc against /usr/share/libvirt/schemas/domainbackup.rng
> > Element domainbackup has extra content: disk
>
> As I've noted in the exact mail you are replying to, I will not offer
> any further assistance unless you post the question also on
> libvirt-users(a)redhat.com list as you did with your original question.
> This is to archive any discussions for future reference.
>
> > On Thu, Jan 21, 2021 at 2:40 PM Peter Krempa <pkrempa(a)redhat.com> wrote:
> >
> > > On Thu, Jan 21, 2021 at 12:34:43 +0100, Peter Krempa wrote:
> > > > On Thu, Jan 21, 2021 at 14:31:18 +0300, Andrey Fokin wrote:
> > > > > Peter, thanks. I understood your message about additional
> > > > > configuration in domain settings. It's really interesting, and I'm
> > > > > going to test it as well. But could you please advise how to change
> > > > > the backup-begin command's XML parameters file to start a full
> > > > > backup process? What is wrong in my case, and how do I fix it? My
> > > > > config below.
> > > > > Thanks a lot!
> > > > > <domainbackup>
> > > > > <disk name='/var/lib/libvirt/images/lubuntu2.qcow2'
> type='file'>
> > > > > <target file='$PWD/scratch1.img'/>
> > > > > <driver type='raw'/>
> > > > > </disk>
> > > > > <disk name='/var/lib/libvirt/images/lubuntu2-1.qcow2'
> type='file'>
> > > > > <target file='$PWD/scratch2.img'/>
> > > > > <driver type='raw'/>
> > > > > </disk>
> > > > > </domainbackup>
> > > >
> > > > Could you please describe what the problem is? Are you getting an
> > > > error? If so, why didn't you paste it?
> > > >
> > > > I'm not going to set up a backup just to try your configuration.
> > > > Please provide all the information you have and describe the problem
> > > > as well as possible.
>
> Here:
>
> > >
> > > Oh, and don't drop libvirt-users from the CC-list! Otherwise my advice
> > > would not be recorded in the mailing list archives and browsable by
> > > others with possibly the same problem.
> > >
> > > Please re-post your query to the list AND include any information you
> > > have to get a reply from me.
> > >
> > >
> >
> > --
> > BRG,
> > Andrey
>
>
--
BRG,
Andrey
virsh backup-begin problem
by Andrey Fokin
Colleagues, could you please advise what the problem could be in the XML
file in the following case:
virsh backup-begin lubuntu2 ./backup.xml
where backup.xml is:
<domainbackup>
<disk name='/var/lib/libvirt/images/lubuntu2.qcow2' type='file'>
<target file='$PWD/scratch1.img'/>
<driver type='raw'/>
</disk>
<disk name='/var/lib/libvirt/images/lubuntu2-1.qcow2' type='file'>
<target file='$PWD/scratch2.img'/>
<driver type='raw'/>
</disk>
</domainbackup>
and the domain config is:
<domain type="kvm">
<name>lubuntu2</name>
<uuid>c466086e-4570-4fb5-8cb6-bfe104791381</uuid>
<memory unit="KiB">1048576</memory>
<currentMemory unit="KiB">1048576</currentMemory>
<vcpu placement="static">1</vcpu>
<os>
<type arch="x86_64" machine="pc-i440fx-focal">hvm</type>
<boot dev="hd"/>
</os>
<features>
<acpi/>
<apic/>
<vmport state="off"/>
</features>
<cpu mode="host-model" check="partial"/>
<clock offset="utc">
<timer name="rtc" tickpolicy="catchup"/>
<timer name="pit" tickpolicy="delay"/>
<timer name="hpet" present="no"/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled="no"/>
<suspend-to-disk enabled="no"/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2"/>
<source file="/var/lib/libvirt/images/lubuntu2.qcow2"/>
<target dev="hda" bus="ide"/>
<address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>
<disk type="file" device="cdrom">
<driver name="qemu" type="raw"/>
<target dev="hdb" bus="ide"/>
<readonly/>
<address type="drive" controller="0" bus="0" target="0" unit="1"/>
</disk>
<disk type="file" device="disk">
<driver name="qemu" type="qcow2"/>
<source file="/var/lib/libvirt/images/lubuntu2-1.qcow2"/>
<target dev="hdc" bus="ide"/>
<address type="drive" controller="0" bus="1" target="0" unit="0"/>
</disk>
<controller type="usb" index="0" model="ich9-ehci1">
<address type="pci" domain="0x0000" bus="0x00" slot="0x05"
function="0x7"/>
</controller>
<controller type="usb" index="0" model="ich9-uhci1">
<master startport="0"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x05"
function="0x0" multifunction="on"/>
</controller>
<controller type="usb" index="0" model="ich9-uhci2">
<master startport="2"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x05"
function="0x1"/>
</controller>
<controller type="usb" index="0" model="ich9-uhci3">
<master startport="4"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x05"
function="0x2"/>
</controller>
<controller type="pci" index="0" model="pci-root"/>
<controller type="ide" index="0">
<address type="pci" domain="0x0000" bus="0x00" slot="0x01"
function="0x1"/>
</controller>
<controller type="virtio-serial" index="0">
<address type="pci" domain="0x0000" bus="0x00" slot="0x06"
function="0x0"/>
</controller>
<interface type="network">
<mac address="52:54:00:90:d7:29"/>
<source network="default"/>
<model type="e1000"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x03"
function="0x0"/>
</interface>
<serial type="pty">
<target type="isa-serial" port="0">
<model name="isa-serial"/>
</target>
</serial>
<console type="pty">
<target type="serial" port="0"/>
</console>
<channel type="spicevmc">
<target type="virtio" name="com.redhat.spice.0"/>
<address type="virtio-serial" controller="0" bus="0" port="1"/>
</channel>
<input type="tablet" bus="usb">
<address type="usb" bus="0" port="1"/>
</input>
<input type="mouse" bus="ps2"/>
<input type="keyboard" bus="ps2"/>
<graphics type="spice" autoport="yes">
<listen type="address"/>
<image compression="off"/>
</graphics>
<sound model="ich6">
<address type="pci" domain="0x0000" bus="0x00" slot="0x04"
function="0x0"/>
</sound>
<video>
<model type="qxl" ram="65536" vram="65536" vgamem="16384" heads="1"
primary="yes"/>
<address type="pci" domain="0x0000" bus="0x00" slot="0x02"
function="0x0"/>
</video>
<redirdev bus="usb" type="spicevmc">
<address type="usb" bus="0" port="2"/>
</redirdev>
<redirdev bus="usb" type="spicevmc">
<address type="usb" bus="0" port="3"/>
</redirdev>
<memballoon model="virtio">
<address type="pci" domain="0x0000" bus="0x00" slot="0x07"
function="0x0"/>
</memballoon>
</devices>
</domain>
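For comparison, the domainbackup format documented at
https://libvirt.org/formatbackup.html wraps the disk elements in a <disks>
container and names disks by their target dev rather than by source path;
libvirt also does not expand shell variables such as $PWD, so target paths
should be absolute. A sketch against the domain above (the backup paths here
are made up):

<domainbackup mode='push'>
  <disks>
    <disk name='hda' backup='yes' type='file'>
      <target file='/var/backup/lubuntu2-hda.img'/>
      <driver type='raw'/>
    </disk>
    <disk name='hdc' backup='yes' type='file'>
      <target file='/var/backup/lubuntu2-hdc.img'/>
      <driver type='raw'/>
    </disk>
    <disk name='hdb' backup='no'/>
  </disks>
</domainbackup>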
--
BRG,
Andrey
How to enable libvirtd auth just like oVirt ?
by tommy
Hi everyone:
In my libvirtd test env, there's no auth when using virsh:
root@ubts2:~# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
Id Name State
--------------------------
26 3.ohost1 running
27 3.ohost2 running
28 3.ohost3 running
virsh #
But, in oVirt env, there's auth when using virsh:
[root@ohost1 ~]# virsh
Welcome to virsh, the virtualization interactive terminal.
Type: 'help' for help with commands
'quit' to quit
virsh # list
Please enter your authentication name:
How can I enable auth in my libvirtd env just like oVirt does?
Thanks.
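For reference, oVirt gets that prompt by configuring libvirt to use SASL. A
minimal sketch of the same setup (the username is an example; depending on
the distribution you may also need to set a password-based mech_list in
/etc/sasl2/libvirt.conf):

# in /etc/libvirt/libvirtd.conf
auth_unix_ro = "sasl"
auth_unix_rw = "sasl"

# add a user to libvirt's SASL database, then restart libvirtd
saslpasswd2 -a libvirt admin
systemctl restart libvirtd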
Get Host Capabilities failed: Internal JSON-RPC error: {'reason': 'internal error: Duplicate key'}
by tommy
Hi, everyone:
I got this error in my ovirt env:
VDSM ooengh1.tltd.com command Get Host Capabilities failed: Internal
JSON-RPC error: {'reason': 'internal error: Duplicate key'}
The systemctl message is:
Dec 23 20:48:48 ooengh1.tltd.com vdsm[2431]: ERROR Internal server error
Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/yajsonrpc/__init__.py", line 345, in _handle_request
    res = method(**params)
  File "/usr/lib/python2.7/site-packages/vdsm/rpc/Bridge.py", line 198, in _dynamicMethod
    result = fn(*methodArgs)
  File "<string>", line 2, in getCapabilities
  File "/usr/lib/python2.7/site-packages/vdsm/common/api.py", line 50, in method
    ret = func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/API.py", line 1371, in getCapabilities
    c = caps.get()
  File "/usr/lib/python2.7/site-packages/vdsm/host/caps.py", line 93, in get
    machinetype.compatible_cpu_models())
  File "/usr/lib/python2.7/site-packages/vdsm/common/cache.py", line 43, in __call__
    value = self.func(*args)
  File "/usr/lib/python2.7/site-packages/vdsm/machinetype.py", line 142, in compatible_cpu_models
    all_models = domain_cpu_models(c, arch, cpu_mode)
  File "/usr/lib/python2.7/site-packages/vdsm/machinetype.py", line 97, in domain_cpu_models
    domcaps = conn.getDomainCapabilities(None, arch, None, virt_type, 0)
  File "/usr/lib/python2.7/site-packages/vdsm/common/libvirtconnection.py", line 131, in wrapper
    ret = f(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/vdsm/common/function.py", line 94, in wrapper
    return func(inst, *args, **kwargs)
  File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3844, in getDomainCapabilities
    if ret is None: raise libvirtError ('virConnectGetDomainCapabilities() failed', conn=self)
libvirtError: internal error: Duplicate key
Can anyone help me?
Thanks!
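Since the failing call in the traceback is libvirt's getDomainCapabilities(),
it may help to reproduce it outside VDSM (a sketch; adjust the arch and virt
type to what the traceback passes on your host):

# virsh domcapabilities --virttype kvm --arch x86_64

That would at least show whether the "Duplicate key" error comes from libvirt
itself rather than from VDSM.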
[resent] virt-manager connection fails with 'qemu unexpectedly closed the monitor'
by John Paul Adrian Glaubitz
Hi!
I recently ran into a problem when connecting to libvirtd 6.9.0 on Debian unstable
and trying to import an existing image with Windows 7.
Upon finishing the wizard and starting the instance, the import process fails
with the following error message:
Unable to complete install: 'internal error: qemu unexpectedly closed the monitor'
Traceback (most recent call last):
  File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in cb_wrapper
    callback(asyncjob, *args, **kwargs)
  File "/usr/share/virt-manager/virtManager/createvm.py", line 2081, in _do_async_install
    installer.start_install(guest, meter=meter)
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 731, in start_install
    domain = self._create_guest(
  File "/usr/share/virt-manager/virtinst/install/installer.py", line 679, in _create_guest
    domain = self.conn.createXML(install_xml or final_xml, 0)
  File "/usr/lib64/python3.8/site-packages/libvirt.py", line 4366, in createXML
    raise libvirtError('virDomainCreateXML() failed')
libvirt.libvirtError: internal error: qemu unexpectedly closed the monitor
Since this error message is rather generic, I don't know where to start debugging.
Does anyone know how to increase verbosity here to get an error message that might be
more helpful?
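For anyone hitting the same message: the usual starting points seem to be the
per-domain QEMU log and libvirtd's own log filters (standard locations,
assuming a default setup):

# QEMU's stderr is captured in the per-domain log
tail -n 50 /var/log/libvirt/qemu/<domain>.log

# more libvirt-side detail can be enabled in /etc/libvirt/libvirtd.conf
log_filters="1:qemu 1:libvirt"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"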
Thanks,
Adrian
--
.''`. John Paul Adrian Glaubitz
: :' : Debian Developer - glaubitz(a)debian.org
`. `' Freie Universitaet Berlin - glaubitz(a)physik.fu-berlin.de
`- GPG: 62FF 8A75 84E0 2956 9546 0006 7426 3B37 F5B5 F913
Problem with xen hvm DomU
by Christoph
Hi
can someone tell me what's wrong in this config:
<domain type='xen'>
<name>fenrir.chao5.int</name>
<uuid>7aedcd03-54e8-4055-8d1b-37dd34194859</uuid>
<memory unit='KiB'>2097152</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
<vcpu placement='static'>2</vcpu>
<os>
<type arch='x86_64' machine='xenfv'>hvm</type>
<loader type='rom'>/usr/lib/xen/boot/hvmloader</loader>
<kernel>/usr/lib64/xen/boot/hvmloader</kernel>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
<clock offset='variable' adjustment='0' basis='localtime'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/dev/vg_astarte/lv_fenrir_root'/>
<target dev='hda' bus='ide'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/dev/vg_astarte/lv_fenrir_swap'/>
<target dev='hdb' bus='ide'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='xenbus' index='0'/>
<controller type='ide' index='0'/>
<interface type='bridge'>
<mac address='00:16:3e:05:30:04'/>
<source bridge='xenbr5'/>
<script path='vif-bridge'/>
<model type='netfront'/>
</interface>
<interface type='bridge'>
<mac address='00:16:3e:04:30:04'/>
<source bridge='xenbr4'/>
<script path='vif-bridge'/>
<model type='netfront'/>
</interface>
<interface type='bridge'>
<mac address='00:16:3e:15:30:04'/>
<source bridge='xenbr15'/>
<script path='vif-bridge'/>
<model type='netfront'/>
</interface>
<interface type='bridge'>
<mac address='00:16:3e:14:30:04'/>
<source bridge='xenbr14'/>
<script path='vif-bridge'/>
<model type='netfront'/>
</interface>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
<graphics type='vnc' port='6000' autoport='no' listen='0.0.0.0'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='cirrus' vram='8192' heads='1' primary='yes'/>
</video>
<memballoon model='xen'/>
</devices>
</domain>
If I try to start it with virsh create... then I see only:
error: Failed to create domain from
/etc/libvirt/libxl/fenrir.chao5.int.xml
error: internal error: libxenlight failed to create new domain
'fenrir.chao5.int'
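For anyone debugging the same failure, libxl usually logs the underlying
reason separately from libvirt's generic error (standard locations, as an
assumption about your setup):

tail -n 50 /var/log/libvirt/libxl/libxl-driver.log
xl dmesg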
------
Greetz