RBD volume not made available to Xen virtual guest on openSUSE 15.2 (with libvirt 6.0.0)
by Marcel Juffermans
Hi there,
Since upgrading to openSUSE 15.2 (which includes libvirt 6.0.0), the
virtual guests no longer get their RBD disks made available to them. On
openSUSE 15.1 (which includes libvirt 5.1.0) this worked fine. The XML
is as follows:
<domain type='xen' id='7'>
  <name>mytwotel-a</name>
  <uuid>a56daa5d-c095-49d5-ae1b-00b38353614e</uuid>
  <description>mytwotel-a</description>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>1048576</currentMemory>
  <vcpu placement='static'>1</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8-23'/>
  </cputune>
  <os>
    <type arch='x86_64' machine='xenpv'>linux</type>
    <kernel>/usr/lib/grub2/x86_64-xen/grub.xen</kernel>
  </os>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='writethrough'/>
      <source protocol='rbd' name='guests/mytwotel-a'>
        <auth username='libvirt'>
          <secret type='ceph' uuid='3f88b59a-d85b-4b47-946d-a4c4cce3fec0'/>
        </auth>
      </source>
      <backingStore/>
      <target dev='xvda' bus='xen'/>
    </disk>
    <controller type='xenbus' index='0'/>
    <interface type='bridge'>
      <mac address='00:16:3e:a3:ba:9f'/>
      <source bridge='br0'/>
      <target dev='vif7.0'/>
    </interface>
    <console type='pty' tty='/dev/pts/0'>
      <source path='/dev/pts/0'/>
      <target type='xen' port='0'/>
    </console>
    <input type='mouse' bus='xen'/>
    <input type='keyboard' bus='xen'/>
    <memballoon model='xen'/>
  </devices>
</domain>
The virtual guest starts, but then sits at the GRUB 2 boot prompt
because the disk is not available. The qemu log shows:
qemu-system-i386: failed to create 'qdisk' device '51712': failed to create drive: Could not open 'rbd:guests/mytwotel-a:id=libvirt:key=AQCAUpBbrcaiFxAA1sztXPbkdW1L54i99oUpyA==:auth_supported=cephx\;none': No such file or directory
(the same error is then repeated several more times)
...
I tried to strace libvirtd. The results are as follows:
On openSUSE 15.2 with libvirt 6.0.0 (not working), we see this:
1682 openat(AT_FDCWD, "rbd:guests/mytwotel-a:id=libvirt:key=AQCAUpBbrcaiFxAA1sztXPbkdW1L54i99oUpyA==:auth_supported=cephx\\;none", O_RDWR|O_CLOEXEC) = -1 ENOENT (No such file or directory)
1682 rt_sigprocmask(SIG_BLOCK, NULL, [BUS USR1 ALRM IO], 8) = 0
1682 mmap(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f538aefd000
1682 mprotect(0x7f538aefd000, 4096, PROT_NONE <unfinished ...>
1682 <... mprotect resumed>) = 0
1682 rt_sigprocmask(SIG_SETMASK, [BUS USR1 ALRM IO], [BUS USR1 ALRM IO], 8) = 0
1682 rt_sigprocmask(SIG_BLOCK, NULL, [BUS USR1 ALRM IO], 8) = 0
1682 mmap(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0 <unfinished ...>
1682 <... mmap resumed>) = 0x7f538adfc000
1682 mprotect(0x7f538adfc000, 4096, PROT_NONE <unfinished ...>
1682 <... mprotect resumed>) = 0
1682 rt_sigprocmask(SIG_SETMASK, [BUS USR1 ALRM IO], <unfinished ...>
1682 <... rt_sigprocmask resumed>[BUS USR1 ALRM IO], 8) = 0
1682 write(2, "qemu-system-i386: failed to crea"..., 232 <unfinished ...>
...
On the other hand, on openSUSE 15.1 with libvirt 5.1.0 (working), we see this:
16267 openat(AT_FDCWD, "rbd:guests/mytwotel-a:id=libvirt:key=AQCAUpBbrcaiFxAA1sztXPbkdW1L54i99oUpyA==:auth_supported=cephx\\;none", O_RDONLY|O_NONBLOCK|O_CLOEXEC) = -1 ENOENT (No such file or directory)
16267 stat("rbd:guests/mytwotel-a:id=libvirt:key=AQCAUpBbrcaiFxAA1sztXPbkdW1L54i99oUpyA==:auth_supported=cephx\\;none", 0x7fff83e2e2b0) = -1 ENOENT (No such file or directory)
16267 access("/usr/lib64/qemu/block-rbd.so", F_OK) = 0
16267 stat("/usr/lib64/qemu/block-rbd.so", {st_mode=S_IFREG|0644, st_size=27448, ...}) = 0
16267 openat(AT_FDCWD, "/usr/lib64/qemu/block-rbd.so", O_RDONLY|O_CLOEXEC) = 60
16267 read(60, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0 &\0\0\0\0\0\0"..., 832) = 832
16267 fstat(60, {st_mode=S_IFREG|0644, st_size=27448, ...}) = 0
16267 mmap(NULL, 2122672, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 60, 0) = 0x7f8e6030f000
16267 mprotect(0x7f8e60315000, 2093056, PROT_NONE) = 0
16267 mmap(0x7f8e60514000, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 60, 0x5000) = 0x7f8e60514000
16267 close(60) = 0
...
Note that the latter opens "/usr/lib64/qemu/block-rbd.so". That library
*does* exist on openSUSE 15.2, but it doesn't seem to be used.
I tried to update libvirt to a newer version using the Open Build
Service repos, but ran into so many conflicting versions that I
gave up.
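One way to separate qemu from libvirt here might be to ask qemu's rbd
driver to open the image directly, outside libvirt (a sketch; it
assumes the ceph config is at the default path):

qemu-img info 'rbd:guests/mytwotel-a:id=libvirt:conf=/etc/ceph/ceph.conf'

qemu-img should load the same block-rbd.so module, so if module loading
is what broke, this would likely fail in a similar way.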
At this point I'm stuck. Does anyone have an idea I can try?
Many thanks,
Marcel
Understanding 'State change failed: (-2) Need access to UpToDate data'
by Paul O'Rorke
Hi list,
I had to relocate the third node of a classic DRBD 8.4 three-node setup
to a new host, and I am having difficulty making the stacked resource
primary. I am following this guide:
https://www.linbit.com/drbd-user-guide/users-guide-drbd-8-4/#s-three-nodes
Specifically this:
>
>
> 5.18.3. Enabling stacked resources
>
> To enable a stacked resource, you first enable its lower-level
> resource and promote it:
>
> drbdadm up r0
> drbdadm primary r0
>
> As with unstacked resources, you must create DRBD meta data on the
> stacked resources. This is done using the following command:
>
> # drbdadm create-md --stacked r0-U
>
> Then, you may enable the stacked resource:
>
> # drbdadm up --stacked r0-U
> # drbdadm primary --stacked r0-U
>
> After this, you may bring up the resource on the backup node, enabling
> three-node replication:
>
> # drbdadm create-md r0-U
> # drbdadm up r0-U
It seems all is good right up to the last command:
:~# drbdadm primary informer
:~# drbdadm create-md --stacked informer-U
You want me to create a v08 style flexible-size internal meta data block.
There appears to be a v08 flexible-size internal meta data block
already in place on /dev/drbd11 at byte offset 429483581440
Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes
md_offset 429483581440
al_offset 429483548672
bm_offset 429470441472
Found some data
==> This might destroy existing data! <==
Do you want to proceed?
[need to type 'yes' to confirm] yes
initializing activity log
NOT initializing bitmap
Writing meta data...
New drbd meta data block successfully created.
:~# drbdadm up --stacked informer-U
:~# drbdadm primary --stacked informer-U
110: State change failed: (-2) Need access to UpToDate data
Command 'drbdsetup-84 primary 110' terminated with exit code 17
drbd11 is the device for the lower-level resource 'informer', drbd110
the device for the stacked resource 'informer-U'. Replication between
hosts 'Alice' and 'Bob' (not the real host names, but corresponding to
the names used in the documentation) is functional. I am performing the
above commands on 'bob'.
I do not understand why that last command complains about needing
UpToDate data if 'informer-U' is 'stacked-on-top-of informer' and its
metadata was just created.
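In case it is relevant, here is how the state can be inspected on 'bob'
(standard DRBD 8.4 commands; output omitted):

cat /proc/drbd                        # state of all drbd devices on this node
drbdadm cstate --stacked informer-U   # connection state of the stacked resource
drbdadm dstate --stacked informer-U   # disk state of the stacked resource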
The .res files look like this:
> resource informer {
> net {
> protocol C;
> }
>
> device /dev/drbd11;
> meta-disk internal;
>
> on trk-kvm-01 {
> address 10.10.1.125:7789;
> disk /dev/trk-kvm-01-vg/informer;
> }
> on trk-kvm-02 {
> address 10.10.1.126:7789;
> disk /dev/trk-kvm-02-vg/informer;
> }
> }
>
> resource informer-U {
> net {
> protocol A;
> }
>
> stacked-on-top-of informer {
> device /dev/drbd110;
> address 10.10.2.126:7789;
> }
>
> on trk-bkp-01 {
> device /dev/drbd110;
> disk /dev/ubuntu-vg/informer;
> address 10.10.2.127:7789;
> meta-disk internal;
> }
> }
Any suggestions on what I am missing in this picture?
Please and thanks.
--
Paul O'Rorke
Tracker Software Products (Canada) Limited
https://www.tracker-software.com
Tel: +1 (250) 324 1621
Fax: +1 (250) 324 1623
Support: https://www.tracker-software.com/support
Download latest releases: https://www.tracker-software.com/downloads
Why "discard":"unmap" is the default option for disks
by Han Han
Hello,
I noticed that "discard":"unmap" is enabled by default on the qemu
command line (libvirt v6.6, qemu v5.1):
XML:
<disk type="file" device="disk">
  <driver name="qemu" type="qcow2"/>
  <source file="/var/lib/libvirt/images/new.qcow2" index="2"/>
  <backingStore/>
  <target dev="sda" bus="scsi"/>
  <alias name="scsi0-0-0-0"/>
  <address type="drive" controller="0" bus="0" target="0" unit="0"/>
</disk>
<disk type="network" device="disk">
  <driver name="qemu" type="raw" error_policy="report"/>
  <source protocol="nbd" name="new" tls="no" index="1">
    <host name="localhost" port="10809"/>
  </source>
  <target dev="vdb" bus="virtio"/>
  <alias name="virtio-disk1"/>
  <address type="pci" domain="0x0000" bus="0x02" slot="0x00" function="0x0"/>
</disk>
QEMU cmdline:
... -blockdev {"driver":"file","filename":"/var/lib/libvirt/images/new.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,*"discard":"unmap"*}
-blockdev {"node-name":"libvirt-2-format","read-only":false,"driver":"qcow2","file":"libvirt-2-storage","backing":null}
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,device_id=drive-scsi0-0-0-0,drive=libvirt-2-format,id=scsi0-0-0-0,bootindex=1
-blockdev {"driver":"nbd","server":{"type":"inet","host":"localhost","port":"10809"},"export":"new","node-name":"libvirt-1-storage","auto-read-only":true,*"discard":"unmap"*}
-blockdev {"node-name":"libvirt-1-format","read-only":false,"driver":"raw","file":"libvirt-1-storage"}
-device virtio-blk-pci,bus=pci.2,addr=0x0,drive=libvirt-1-format,id=virtio-disk1,werror=report,rerror=report
...
I think it comes from
https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_block.c#L1211
but I cannot find the reason in the commit messages or documentation.
Could you please explain it?
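For reference, the per-disk behaviour can also be set explicitly in the
XML through the driver element's discard attribute (documented libvirt
syntax; it takes 'unmap' or 'ignore'):

<driver name="qemu" type="qcow2" discard="ignore"/>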
unable to migrate: virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
by Vjaceslavs Klimovs
On libvirt 6.8.0 and qemu 5.1.0, when trying to live migrate, an
"error: internal error: Failed to reserve port" error is received and
the migration does not succeed:
virsh # migrate cartridge qemu+tls://ratchet.lan/system --live
--persistent --undefinesource --copy-storage-all --verbose
error: internal error: Failed to reserve port 49153
virsh #
On the target host, with debug logs enabled, nothing interesting apart
from the error itself is found in the logs:
...
2020-10-12 02:11:33.852+0000: 6871: debug : qemuMonitorJSONIOProcessLine:220 : Line [{"return": {}, "id": "libvirt-373"}]
2020-10-12 02:11:33.852+0000: 6871: info : qemuMonitorJSONIOProcessLine:239 : QEMU_MONITOR_RECV_REPLY: mon=0x7fe784255020 reply={"return": {}, "id": "libvirt-373"}
2020-10-12 02:11:33.852+0000: 6825: debug : qemuDomainObjExitMonitorInternal:5615 : Exited monitor (mon=0x7fe784255020 vm=0x55f086c81ea0 name=cartridge)
2020-10-12 02:11:33.852+0000: 6825: debug : qemuDomainObjEndJob:1140 : Stopping job: async nested (async=migration in vm=0x55f086c81ea0 name=cartridge)
2020-10-12 02:11:33.852+0000: 6825: error : virPortAllocatorSetUsed:299 : internal error: Failed to reserve port 49153
2020-10-12 02:11:33.852+0000: 6825: debug : qemuMigrationParamsReset:1206 : Resetting migration parameters 0x7fe784257c30, flags 0x59
2020-10-12 02:11:33.852+0000: 6825: debug : qemuDomainObjBeginJobInternal:835 : Starting job: job=async nested agentJob=none asyncJob=none (vm=0x55f086c81ea0 name=cartridge, current job=none agentJob=none async=migration in)
2020-10-12 02:11:33.852+0000: 6825: debug : qemuDomainObjBeginJobInternal:887 : Started job: async nested (async=migration in vm=0x55f086c81ea0 name=cartridge)
2020-10-12 02:11:33.852+0000: 6825: debug : qemuDomainObjEnterMonitorInternal:5590 : Entering monitor (mon=0x7fe784255020 vm=0x55f086c81ea0 name=cartridge)
2020-10-12 02:11:33.852+0000: 6825: debug : qemuMonitorSetMigrationCapabilities:3853 : mon:0x7fe784255020 vm:0x55f086c81ea0 fd:30
2020-10-12 02:11:33.852+0000: 6825: info : qemuMonitorSend:944 : QEMU_MONITOR_SEND_MSG: mon=0x7fe784255020 msg={"execute":"migrate-set-capabilities","arguments":{"capabilities":[{"capability":"xbzrle","state":false},{"capability":"auto-converge","state":false},{"capability":"rdma-pin-all","state":false},{"capability> fd=-1
...
Full logs:
destination: https://drive.google.com/file/d/1g986SbSVijvwZd8d7xDrJwo_AmKH-JnV/view?us...
source:
https://drive.google.com/file/d/1lsV2EOBxF7xH5-lgz2Psh9YSkOePXvAd/view?us...
On the target host, nothing is listening on the target port:
ratchet /var/log/libvirt/qemu # netstat -lnp | grep 49153 | wc -l
0
and nc -l 49153 succeeds without issues.
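As an experiment, the ports could also be pinned explicitly using
standard virsh migrate options (the port numbers here are arbitrary):

virsh migrate cartridge qemu+tls://ratchet.lan/system --live --persistent --undefinesource --copy-storage-all --verbose --migrateuri tcp://ratchet.lan:49170 --disks-port 49180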
Do you have any suggestions on how to proceed here?
possible bug in efi detection for guest
by daggs
Greetings All,
Following a suggestion I got here on how to properly boot a uefi-q35
guest, I found a weird config in the xml.
This is what I see when I run virsh edit streamer-vm-q35:
<os firmware='efi'>
  <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
  <boot dev='hd'/>
</os>
When I run virsh dumpxml streamer-vm-q35, I get this:
<os>
  <type arch='x86_64' machine='pc-q35-5.0'>hvm</type>
  <loader readonly='yes' type='pflash'>/usr/share/qemu/edk2-x86_64-secure-code.fd</loader>
  <nvram template='/usr/share/qemu/edk2-i386-vars.fd'>/var/lib/libvirt/qemu/nvram/streamer-vm-q35_VARS.fd</nvram>
  <boot dev='hd'/>
</os>
My question is: why is the nvram template /usr/share/qemu/edk2-i386-vars.fd and not /usr/share/qemu/edk2-x86_64-code.fd (the file exists), given that the system is x86_64?
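If it helps: my understanding is that the firmware='efi' auto-selection
is driven by the JSON firmware descriptor files shipped with qemu, so
the chosen pairing should be traceable with something like:

grep -l 'edk2-i386-vars.fd' /usr/share/qemu/firmware/*.json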
Thanks
Dagg.
scsi passthrough differs between guests
by daggs
Greetings,
I have two machines running the same distro, both with qemu 5.1.0; one runs libvirt 6.7.0, the other 6.8.0.
I decided to test the viability of passing my SATA cdrom through to a VM, so I went to the libvirt docs, read a bit, and added the following to a debian10 uefi VM running on libvirt 6.8.0:
<controller type='scsi' index='0' model='virtio-scsi'>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</controller>
<hostdev mode='subsystem' type='scsi' managed='no'>
  <source>
    <adapter name='scsi_host11'/>
    <address bus='0' target='0' unit='0'/>
  </source>
  <readonly/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</hostdev>
I booted the VM and saw the cdrom inside it; eject worked and so did mount.
I then decided to move it to another machine, running libvirt 6.7.0, and verified the cdrom is visible on the new system, see:
# lsscsi
[0:0:0:0] cd/dvd HL-DT-ST DVDRAM GH24NSD1 LW00 /dev/sr0
I inserted the same XML into another VM, running libreelec uefi, but changed the host from 11 to 0; it looks like this:
<controller type='scsi' index='0' model='virtio-scsi'>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
</controller>
<hostdev mode='subsystem' type='scsi' managed='no'>
  <source>
    <adapter name='scsi_host0'/>
    <address bus='0' target='0' unit='0'/>
  </source>
  <readonly/>
  <alias name='hostdev0'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</hostdev>
But when I boot the VM, I see no cdrom. lspci shows this:
03:00.0 SCSI storage controller: Red Hat, Inc. Virtio block device (rev 01)
07:01.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI
As there is no lsscsi or /proc/scsi/scsi in the guest, I cannot list the other SCSI devices that way, but sysfs can be inspected instead (see the commands below).
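These are standard sysfs paths inside the guest, plus a host-side
check (nothing libreelec-specific):

# inside the guest:
ls /sys/class/scsi_host/                  # SCSI hosts the guest kernel sees
ls /sys/bus/scsi/devices/                 # attached SCSI devices and targets
cat /sys/class/scsi_host/host*/proc_name  # driver behind each host

# on the host, to confirm the <adapter name='...'/> value:
virsh nodedev-list --cap scsi_host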
I know for a fact that libreelec supports scsi/sata cdroms, as they are frequently used by streamers.
I'm baffled by the two SCSI controllers I see, and by why I cannot see the device.
Is there a known issue in 6.7.0 with SCSI passthrough?
Dagg.
libvirt client using python on windows
by Talha Jawaid
Hello,
I want to run a Python script on Windows to remotely control libvirt running on a (Linux) server. This all worked fine while prototyping on the server itself, but now I am having trouble installing the Python module on Windows ("pip install" fails). I struggled through getting it to compile, but then ran into linking issues. It seems the actual libvirt library is needed to run (and even compile) on Windows? Is there a prebuilt package somewhere that I could get? Do I really need the whole libvirt library, or is there a client-only package? How could I go about accomplishing my goal here?
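For context, the script only needs to open a remote connection, along
these lines (host name made up):

import libvirt  # libvirt-python bindings

# Only the client runs on Windows; libvirtd stays on the Linux server.
conn = libvirt.open('qemu+ssh://admin@server.lan/system')
print([dom.name() for dom in conn.listAllDomains()])
conn.close()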
Thanks,
-Talha
unable to find any master var store for loader error
by daggs
Greetings,
I have the following machine: https://dpaste.com/5BPA3F77F which I'm trying to boot in uefi.
/etc/libvirt/qemu.conf looks like this: https://dpaste.com/B3SFHUY6R and the ovmf files exist at those paths, see:
# ll /usr/share/edk2-ovmf/OVMF_CODE.fd /usr/share/edk2-ovmf/OVMF_VARS.fd /usr/share/edk2-ovmf/OVMF_CODE.secboot.fd /usr/share/edk2-ovmf/OVMF_VARS.secboot.fd
-rw-r--r-- 1 root root 1966080 Aug 21 14:32 /usr/share/edk2-ovmf/OVMF_CODE.fd
-rw-r--r-- 1 root root 1966080 Aug 21 14:32 /usr/share/edk2-ovmf/OVMF_CODE.secboot.fd
-rw-r--r-- 1 root root 131072 Aug 21 14:32 /usr/share/edk2-ovmf/OVMF_VARS.fd
-rw-r--r-- 1 root root 131072 Aug 21 14:32 /usr/share/edk2-ovmf/OVMF_VARS.secboot.fd
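My understanding is that the "master var store" mapping comes from the
nvram list in qemu.conf, which pairs each loader with its vars
template, along these lines (paths as on my system):

nvram = [
  "/usr/share/edk2-ovmf/OVMF_CODE.fd:/usr/share/edk2-ovmf/OVMF_VARS.fd",
  "/usr/share/edk2-ovmf/OVMF_CODE.secboot.fd:/usr/share/edk2-ovmf/OVMF_VARS.secboot.fd"
]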
When I try to start the machine, I get this error:
error: Failed to start domain vm1
error: operation failed: unable to find any master var store for loader: /usr/share/edk2-ovmf/OVMF_CODE.fd
The libvirt version is 6.7.0 and the qemu version is 5.1.0.
Any idea how to fix this issue?
Thanks.
Dagg.
Re: Encrypting boot partition Libvirt not showing the OS booting up
by john doe
On 10/12/2020 11:53 AM, john doe wrote:
> On 10/12/2020 11:37 AM, Peter Krempa wrote:
>> On Mon, Oct 12, 2020 at 11:27:20 +0200, john doe wrote:
>>> Hi, thank you for your answer. I'm sending this privately, as you
>>> asked for private information.
>>> Can I ask you to keep this information private?
>>>
>>> On 10/12/2020 10:29 AM, Peter Krempa wrote:
>>>> On Mon, Oct 12, 2020 at 10:03:15 +0200, john doe wrote:
>>>>> Hi,
>>>>>
>>>>> I have installed Debian Buster with encrypted LVM, so upon
>>>>> installation my root partition is encrypted.
>>>>> So far so good, but as soon as I encrypt the boot partition, the
>>>>> OS won't start after a reboot.
>>>>> If I start the drive directly with qemu, it works, but it looks
>>>>> like libvirt is somehow not able to deal with it.
>>>>
>>>> This is not enough information to diagnose the problem.
>>>>
>>>> We'll need the following:
>>>>
>>>> 1) Did you encrypt the partition using the debian installer
>>>>
>>>
>>> No, I did it after installation following the instructions at (1).
>>>
>>>> 2) what vm XML you used:
>>>> a) during installation
>>>
>>> The domain xml file was created by virt-install with the following
>>> command:
>>> $ virsh destroy try01; virsh undefine try01; time virt-install
>>> --name=try01 --ram=1024 --noreboot --cpuset=auto --cpu host
>>> --vcpus=1,maxvcpus=4 --disk=path=/mnt/usbkey01/machines/try/try01,size=6
>>> --graphic none --pxe --os-variant=debian10 --network
>>> bridge=br0,mac=0e:35:32:84:c3:f3 --filesystem
>>> type=mount,mode=mapped,source=/mnt/usbkey01/public,target=public_dir
>>>
>>>> b) when trying to finally boot the vm
>>>>
>>>
>>> Attached as 'try01.xml' obtained by doing 'virsh dumpxml try01 >
>>> try01.xml'.
>>>
>>>> 3) what qemu command line you've used at the point you claim it worked
>>>>
>>>
>>> qemu-system-x86_64 -drive file=/mnt/usbkey01/machines/try/try01 -m 1024
>>> -boot c -accel kvm -machine q35 -nographic
>>>
>>>> 4) what is the error/final state when the VM fails to boot with libvirt
>>>>
>>>
>>> After having encrypted the boot partition:
>>>
>>> $virsh console try01
>>> root@0e-35-32-84-c3-f3:# [ 208.513259] watchdog: watchdog0: watchdog
>>> did not stop!
>>> [ 208.855971] reboot: Restarting system
>>>
>>>
>>> $ qemu-system-x86_64 -drive file=/mnt/usbkey01/machines/try/try01 -m
>>> 1024 -boot c -accel kvm -machine q35 -nographic
>>> SeaBIOS (version 1.12.0-1)
>>>
>>>
>>> iPXE (http://ipxe.org) 00:02.0 C980 PCI2.10 PnP PMM+3FF8FE80+3FECFE80
>>> C980
>>>
>>>
>>>
>>> Booting from Hard Disk...
>>> Attempting to decrypt master key...
>>> Enter passphrase for hd0,msdos1 (43a322dfc8ba4628b80afc66d49642a7):
>>>
>>>
>>> As you can see above, if I invoke qemu directly I get prompted for
>>> the boot passphrase, but I do not get it when using libvirt.
>>
>> Okay, so the root cause ... or "problem" here is that you don't see the
>> console via 'virsh console' after the guest OS rebooted.
>>
>> My suspicion according to the VM XML is that the VM restart triggered a
>> restart of the qemu process and thus our internal handler of the console
>> passthrough disconnected.
>>
>> Please try a "virsh destroy $VM" if the VM is running/stuck waiting for
>> the password without actually showing it and then start it using
>>
>> virsh start --console $VM
>>
>> This will start it and connect to the console immediately.
>>
>> Please report your findings, we might want to discuss what happens when
>> a console is connected and the guest uses a setting of:
>>
>> <on_reboot>restart</on_reboot>
>>
>> when the VM is rebooted.
>>
>
> Still no luck:
>
> $ virsh destroy try01; virsh start --console try01
> error: Failed to destroy domain try01
> error: Requested operation is not valid: domain is not running
>
> Domain try01 started
> Connected to domain try01
> Escape character is ^]
>
>
>
>
>
> I did not modify anything during the time I sent my answer and seeing
> yours.
>
I sent the requested xml file privately to 'Peter Krempa
<pkrempa(a)redhat.com>'.
Peter Krempa privately answered back, suggesting that I add the
following to the domain xml file:
<bios useserial='yes'/> under <os>, such as:
<os>
  <type arch='x86_64' machine='pc-q35-3.1'>hvm</type>
  <boot dev='hd'/>
  <bios useserial='yes'/>
</os>
This does not help at all and still gives the output sent previously.
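One thing perhaps still worth ruling out, assuming the passphrase
prompt is printed by GRUB itself: GRUB's own output also has to be
directed at the serial console. A sketch of the relevant
/etc/default/grub settings (standard GRUB options, not something
confirmed in this thread), followed by running update-grub in the
guest:

GRUB_TERMINAL="serial console"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"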
--
John Doe
Re: Attached disk blockpull
by Peter Krempa
This is a libvirt question, so asking it on qemu-block might not have
gotten you an answer that quickly ... or ever, had I not noticed your
other question that was also addressed incorrectly.
[adding libvirt-users(a)redhat.com to cc]
On Tue, Sep 01, 2020 at 10:57:34 -0400, Yoonho Park wrote:
> I am trying to perform a blockpull on an attached disk, but I cannot do
> this using "virsh attach-disk" specifying a new, empty overlay file created
> with "qemu-img create". The VM does not pick up the backing_file path from
Could you please elaborate on the exact steps? Specifically, the
arguments for qemu-img create.
My feeling is that you didn't specify the backing image format (-F)
when creating the overlay, but it's hard to tell from this very sparse
question.
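For illustration, an overlay with an explicit backing format would be
created like this (file names invented):

qemu-img create -f qcow2 -F qcow2 -b /var/lib/libvirt/images/base.qcow2 /var/lib/libvirt/images/overlay.qcow2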
> the qcow2 overlay file, and there seems to be no way to specify the backing
> path through the attach-disk command itself. Instead the backing path has
virsh attach-disk is a syntax-sugar wrapper which generates the XML
document that is actually used to attach the disk. You can use virsh
attach-disk --print-xml to see what would be used.
> to be specified in the xml file used to create the VM, which precludes
> attaching a new disk to a running VM. Is this a bug? Is it possible to
Using virsh attach-device allows you to pass in a whole device
subelement such as <disk> with any configuration you are able to
specify when defining the XML.
The virsh attach-disk command is limited to just the "basic" config options.
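A sketch of such a <disk> element with an explicit backing store (file
names invented; libvirt honours a user-provided <backingStore> in
recent versions):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/var/lib/libvirt/images/overlay.qcow2'/>
  <backingStore type='file'>
    <format type='qcow2'/>
    <source file='/var/lib/libvirt/images/base.qcow2'/>
  </backingStore>
  <target dev='vdb' bus='virtio'/>
</disk>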
> attach a disk to a running VM, and specify its backing path, using qemu
> directly? This is with qemu 4.2.0 and libvirt 6.0.0.