[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a vm during boot (before the boot has finished), it
always fails: the command reports success, but nothing actually changes. I'm not
sure whether there is an existing bug for this. I have confirmed with someone that
disks show similar behavior; is this also acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the vm has booted (increasing the sleep time to 10 seconds), it succeeds and the interface is actually removed:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
  virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
  virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
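As a workaround sketch (same domain and MAC as above), polling the live XML instead of relying on a fixed sleep seems more robust:

  # retry the detach until the interface is really gone from the live XML
  for i in $(seq 1 30); do
      virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0 2>/dev/null
      if ! virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0'; then
          echo "interface detached"
          break
      fi
      sleep 1
  done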
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on the virtio vNIC in
my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)* The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. *Since 1.2.13 (QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC of my guest with the following configuration:
  <devices>
    <interface type='network'>
      <source network='default'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
              queues='5' rx_queue_size='256' tx_queue_size='256'>
        <host gso='off' ufo='off'/>
        <guest ufo='off'/>
      </driver>
    </interface>
  </devices>
Then I rebooted my node for the change to take effect, and it works. However, can I
disable UFO without touching the host OS, or does it always have to be disabled
on both host and guest like that?
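For reference, a minimal sketch of how I apply and verify the change (assuming a domain named 'myguest'; whether a guest-only restart is sufficient is exactly my question):

  # edit the domain XML to add the <driver> offload attributes
  virsh edit myguest
  # restart only the guest, not the host
  virsh shutdown myguest && virsh start myguest
  # then, inside the guest, check the offload state:
  ethtool -k eth0 | grep udp-fragmentation-offload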
Thanks,
Brs,
Natsu
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
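For reference, a sketch of what I expected to work (untested; 'kvm01-storage', the name and the sizes are placeholders):

  # create a raw volume inside the rbd pool
  virsh vol-create-as myrbdpool kvm01-storage 10G --format raw
  # then reference the pool volume from virt-install
  virt-install --name kvm01 --ram 2048 \
      --disk vol=myrbdpool/kvm01-storage,bus=virtio \
      --import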
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate mac
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the pid?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
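For reference, a rough sketch of that test (assuming passwordless ssh and a hosts.txt list of machines):

  # print any libvirtd PID that occurs on more than one host
  for h in $(cat hosts.txt); do
      ssh "$h" pidof libvirtd
  done | sort | uniq -d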
[libvirt-users] Create qcow2 v3 volumes via libvirt
by Gionatan Danti
Hi all,
on a fully patched CentOS 7.4 x86-64, I see the following behavior:
- when creating a new volume using vol-create-as, the resulting file is
a qcow2 version 2 (compat=0.10) file. Example:
[root@gdanti-lenovo vmimages]# virsh vol-create-as default zzz.qcow2
8589934592 --format=qcow2 --backing-vol /mnt/vmimages/centos6.img
Vol zzz.qcow2 created
[root@gdanti-lenovo vmimages]# file zzz.qcow2
zzz.qcow2: QEMU QCOW Image (v2), has backing file (path
/mnt/vmimages/centos6.img), 8589934592 bytes
[root@gdanti-lenovo vmimages]# qemu-img info zzz.qcow2
image: zzz.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 196K
cluster_size: 65536
backing file: /mnt/vmimages/centos6.img
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
- when creating a snapshot, the resulting file is a qcow2 version 3
(compat=1.1) file. Example:
[root@gdanti-lenovo vmimages]# virsh snapshot-create-as centos6left
--disk-only --no-metadata snap.qcow2
Domain snapshot snap.qcow2 created
[root@gdanti-lenovo vmimages]# file centos6left.snap.qcow2
centos6left.snap.qcow2: QEMU QCOW Image (v3), has backing file (path
/mnt/vmimages/centos6left.qcow2), 8589934592 bytes
[root@gdanti-lenovo vmimages]# qemu-img info centos6left.snap.qcow2
image: centos6left.snap.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 196K
cluster_size: 65536
backing file: /mnt/vmimages/centos6left.qcow2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
From what I know, this is a deliberate decision: compat=1.1 requires a
relatively recent qemu version, and creating new volumes as version 2 plays
on the "safe side" of compatibility.
Is it possible to create a new volume using the qcow2 version 3 (compat=1.1)
format *using libvirt/virsh* (I know I can do that via qemu-img)? Are there
any drawbacks to using the version 3 format?
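For completeness, here is what I plan to try next: vol-create-as does not seem to expose a compat switch, but the storage volume XML format has a <compat> element under <target> (a sketch, not verified on this CentOS build):

  cat > zzz-vol.xml <<'EOF'
  <volume>
    <name>zzz.qcow2</name>
    <capacity unit='bytes'>8589934592</capacity>
    <target>
      <format type='qcow2'/>
      <compat>1.1</compat>
    </target>
  </volume>
  EOF
  virsh vol-create default zzz-vol.xml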
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] snapshot of a raw file - how to revert ?
by Lentes, Bernd
Hi,
I have the following system:
pc59093:~ # cat /etc/os-release
NAME="SLES"
VERSION="11.4"
VERSION_ID="11.4"
PRETTY_NAME="SUSE Linux Enterprise Server 11 SP4"
ID="sles"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:suse:sles:11:4"
pc59093:~ # uname -a
Linux pc59093 3.0.101-84-default #1 SMP Tue Oct 18 10:32:51 UTC 2016 (15251d6) x86_64 x86_64 x86_64 GNU/Linux
pc59093:~ # rpm -qa|grep -iE 'libvirt|kvm'
libvirt-cim-0.5.12-0.7.16
libvirt-python-1.2.5-1.102
libvirt-client-1.2.5-15.3
kvm-1.4.2-47.1
sles-kvm_en-pdf-11.4-0.33.1
libvirt-1.2.5-15.3
I have several guests running with raw files, which is sufficient for me. Now I'd like to snapshot one guest because I'm making heavy configuration changes on it.
From what I've read on the net, libvirt supports snapshotting raw files when the guest is shut down, and the snapshot file becomes a qcow2. Right?
I'd like to avoid converting my raw file to a qcow2 file. I can shut down the guest for a certain time, that's no problem; I don't need a live snapshot.
But how can I revert to my previous state if my configuration changes go wrong?
Can I do this with snapshot-revert, or do I have to edit the XML file and point the hd back at the original raw file? What I found on the net wasn't completely clear.
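To make the question concrete, this is the kind of procedure I imagine (only a sketch; 'guest1' and the file paths are placeholders):

  # external snapshot with the guest shut down: the overlay becomes qcow2,
  # the raw file stays untouched as the backing file
  virsh snapshot-create-as guest1 presnap --disk-only \
      --diskspec vda,file=/vmimages/guest1.snap.qcow2
  # ... make the configuration changes, test them ...
  # revert by discarding the overlay and pointing the disk back at the raw file:
  virsh shutdown guest1
  virsh edit guest1    # set <source file='/vmimages/guest1.raw'/> again
  rm /vmimages/guest1.snap.qcow2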
Thanks.
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
[ mailto:bernd.lentes@helmholtz-muenchen.de | bernd.lentes(a)helmholtz-muenchen.de ]
phone: +49 89 3187 1241
fax: +49 89 3187 2294
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ]
no backup - no mercy
Helmholtz Zentrum München
[libvirt-users] Fail in virDomainUpdateDeviceFlags (libvirt-4.0.0 + Qemu-kvm 2.9.0 + Ceph 10.2.10)
by Star Guo
Hello Everyone,
My PC runs CentOS 7.4 with libvirt-4.0.0 + qemu-kvm 2.9.0 + Ceph
10.2.10, all in one.
I use the libvirt Python bindings and call self.domain.updateDeviceFlags(xml,
libvirt.VIR_DOMAIN_AFFECT_LIVE) on a CDROM (I want to change the media path).
It fails; with the libvirt debug log enabled, the log is as below:
"2018-02-26 13:09:13.638+0000: 50524: debug : virDomainLookupByName:412 :
conn=0x7f7278000aa0, name=6ec499397d594ef2a64fcfc938f38225
2018-02-26 13:09:13.638+0000: 50515: debug : virDomainGetInfo:2431 :
dom=0x7f726c000c30, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), info=0x7f72b9059b20
2018-02-26 13:09:13.638+0000: 50515: debug : qemuGetProcessInfo:1479 : Got
status for 71205/0 user=14674 sys=3627 cpu=5 rss=105105
2018-02-26 13:09:13.644+0000: 50519: debug : virDomainGetXMLDesc:2572 :
dom=0x7f7280002f20, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), flags=0x0
2018-02-26 13:09:13.653+0000: 50516: debug : virDomainUpdateDeviceFlags:8326
: dom=0x7f7274000b90, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), xml=<disk device="cdrom"
type="network"><source
name="zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b
1f" protocol="rbd"><host name="10.0.229.181" port="6789" /></source><auth
username="zstack"><secret type="ceph"
uuid="9b06bb70-dc13-4338-88fd-b0c72d5ab9e9" /></auth><target bus="ide"
dev="hdc" /><readonly /></disk>, flags=0x1
2018-02-26 13:09:13.653+0000: 50516: debug :
qemuDomainObjBeginJobInternal:4778 : Starting job: modify (vm=0x7f7294100af0
name=6ec499397d594ef2a64fcfc938f38225, current job=none async=none)
2018-02-26 13:09:13.653+0000: 50516: debug :
qemuDomainObjBeginJobInternal:4819 : Started job: modify (async=none
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.660+0000: 50516: debug : virQEMUCapsCacheLookup:5443 :
Returning caps 0x7f7294126ac0 for /usr/libexec/qemu-kvm
2018-02-26 13:09:13.664+0000: 50516: debug : virQEMUCapsCacheLookup:5443 :
Returning caps 0x7f7294126ac0 for /usr/libexec/qemu-kvm
2018-02-26 13:09:13.667+0000: 50516: debug : qemuSetupImageCgroupInternal:91
: Not updating cgroups for disk path
'08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f', type:
network
2018-02-26 13:09:13.667+0000: 50516: debug :
qemuDomainObjEnterMonitorInternal:5048 : Entering monitor
(mon=0x7f728c07f260 vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.667+0000: 50516: debug : qemuMonitorEjectMedia:2487 :
dev_name=drive-ide0-1-0 force=0
2018-02-26 13:09:13.667+0000: 50516: debug : qemuMonitorEjectMedia:2489 :
mon:0x7f728c07f260 vm:0x7f7294100af0 json:1 fd:24
2018-02-26 13:09:13.667+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:301 : Send command
'{"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false},"i
d":"libvirt-78"}' for write with FD -1
2018-02-26 13:09:13.667+0000: 50516: info : qemuMonitorSend:1079 :
QEMU_MONITOR_SEND_MSG: mon=0x7f728c07f260
msg={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-78"}
fd=-1
2018-02-26 13:09:13.667+0000: 50514: info : qemuMonitorIOWrite:553 :
QEMU_MONITOR_IO_WRITE: mon=0x7f728c07f260
buf={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-78"}
len=93 ret=93 errno=0
2018-02-26 13:09:13.669+0000: 50514: debug :
qemuMonitorJSONIOProcessLine:193 : Line [{"return": {}, "id": "libvirt-78"}]
2018-02-26 13:09:13.669+0000: 50514: info : qemuMonitorJSONIOProcessLine:213
: QEMU_MONITOR_RECV_REPLY: mon=0x7f728c07f260 reply={"return": {}, "id":
"libvirt-78"}
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:306 : Receive command reply ret=0
rxObject=0x5561b7c6abc0
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuDomainObjExitMonitorInternal:5071 : Exited monitor (mon=0x7f728c07f260
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuDomainObjEnterMonitorInternal:5048 : Entering monitor
(mon=0x7f728c07f260 vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.669+0000: 50516: debug : qemuMonitorEjectMedia:2487 :
dev_name=drive-ide0-1-0 force=0
2018-02-26 13:09:13.669+0000: 50516: debug : qemuMonitorEjectMedia:2489 :
mon:0x7f728c07f260 vm:0x7f7294100af0 json:1 fd:24
2018-02-26 13:09:13.669+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:301 : Send command
'{"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false},"i
d":"libvirt-79"}' for write with FD -1
2018-02-26 13:09:13.669+0000: 50516: info : qemuMonitorSend:1079 :
QEMU_MONITOR_SEND_MSG: mon=0x7f728c07f260
msg={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-79"}
fd=-1
2018-02-26 13:09:13.669+0000: 50514: info : qemuMonitorIOWrite:553 :
QEMU_MONITOR_IO_WRITE: mon=0x7f728c07f260
buf={"execute":"eject","arguments":{"device":"drive-ide0-1-0","force":false}
,"id":"libvirt-79"}
len=93 ret=93 errno=0
2018-02-26 13:09:13.670+0000: 50514: debug :
qemuMonitorJSONIOProcessLine:193 : Line [{"return": {}, "id": "libvirt-79"}]
2018-02-26 13:09:13.670+0000: 50514: info : qemuMonitorJSONIOProcessLine:213
: QEMU_MONITOR_RECV_REPLY: mon=0x7f728c07f260 reply={"return": {}, "id":
"libvirt-79"}
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:306 : Receive command reply ret=0
rxObject=0x5561b7c6a080
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuDomainObjExitMonitorInternal:5071 : Exited monitor (mon=0x7f728c07f260
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuDomainObjEnterMonitorInternal:5048 : Entering monitor
(mon=0x7f728c07f260 vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.670+0000: 50516: debug : qemuMonitorChangeMedia:2504 :
dev_name=drive-ide0-1-0
newmedia=rbd:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f64
9ee166b1f:auth_supported=none:mon_host=10.0.229.181\:6789 format=raw
2018-02-26 13:09:13.670+0000: 50516: debug : qemuMonitorChangeMedia:2506 :
mon:0x7f728c07f260 vm:0x7f7294100af0 json:1 fd:24
2018-02-26 13:09:13.670+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:301 : Send command
'{"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd:zs
tack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:auth_
supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-80"}
' for write with FD -1
2018-02-26 13:09:13.670+0000: 50516: info : qemuMonitorSend:1079 :
QEMU_MONITOR_SEND_MSG: mon=0x7f728c07f260
msg={"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd
:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:au
th_supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-8
0"}
fd=-1
2018-02-26 13:09:13.670+0000: 50514: info : qemuMonitorIOWrite:553 :
QEMU_MONITOR_IO_WRITE: mon=0x7f728c07f260
buf={"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd
:zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:au
th_supported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-8
0"}
len=229 ret=229 errno=0
2018-02-26 13:09:13.678+0000: 50514: debug :
qemuMonitorJSONIOProcessLine:193 : Line [{"id": "libvirt-80", "error":
{"class": "GenericError", "desc": "error connecting: Operation not
supported"}}]
2018-02-26 13:09:13.678+0000: 50514: info : qemuMonitorJSONIOProcessLine:213
: QEMU_MONITOR_RECV_REPLY: mon=0x7f728c07f260 reply={"id": "libvirt-80",
"error": {"class": "GenericError", "desc": "error connecting: Operation not
supported"}}
2018-02-26 13:09:13.678+0000: 50516: debug :
qemuMonitorJSONCommandWithFd:306 : Receive command reply ret=0
rxObject=0x5561b7c88f40
2018-02-26 13:09:13.678+0000: 50516: debug : qemuMonitorJSONCheckError:381 :
unable to execute QEMU command
{"execute":"change","arguments":{"device":"drive-ide0-1-0","target":"rbd:zst
ack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f:auth_s
upported=none:mon_host=10.0.229.181\\:6789","arg":"raw"},"id":"libvirt-80"}:
{"id":"libvirt-80","error":{"class":"GenericError","desc":"error connecting:
Operation not supported"}}
2018-02-26 13:09:13.678+0000: 50516: error : qemuMonitorJSONCheckError:392 :
internal error: unable to execute QEMU command 'change': error connecting:
Operation not supported
2018-02-26 13:09:13.678+0000: 50516: debug :
qemuDomainObjExitMonitorInternal:5071 : Exited monitor (mon=0x7f728c07f260
vm=0x7f7294100af0 name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.678+0000: 50516: debug : qemuTeardownImageCgroup:123 :
Not updating cgroups for disk path
'08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f', type:
network
2018-02-26 13:09:13.682+0000: 50516: debug : qemuDomainObjEndJob:4979 :
Stopping job: modify (async=none vm=0x7f7294100af0
name=6ec499397d594ef2a64fcfc938f38225)
2018-02-26 13:09:13.983+0000: 50520: debug : virDomainLookupByName:412 :
conn=0x7f7278000aa0, name=6ec499397d594ef2a64fcfc938f38225
2018-02-26 13:09:13.990+0000: 50518: debug : virDomainGetInfo:2431 :
dom=0x7f72700009b0, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), info=0x7f72b7856b20
2018-02-26 13:09:13.990+0000: 50518: debug : qemuGetProcessInfo:1479 : Got
status for 71205/0 user=14675 sys=3628 cpu=0 rss=105119
2018-02-26 13:09:13.991+0000: 50515: debug : virDomainGetXMLDesc:2572 :
dom=0x7f726c000c30, (VM: name=6ec499397d594ef2a64fcfc938f38225,
uuid=6ec49939-7d59-4ef2-a64f-cfc938f38225), flags=0x0"
I see the flow is virDomainUpdateDeviceFlags -> qemuMonitorChangeMedia, but
the cephx auth information is dropped, which makes the update fail. Has anybody met this error?
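For reference, the same code path can be reproduced from the shell: update-device with --live also goes through virDomainUpdateDeviceFlags, and cdrom.xml below holds the same <disk> element as in the log:

  cat > cdrom.xml <<'EOF'
  <disk device="cdrom" type="network">
    <source name="zstack/08085a31f8c43f278ed2f649ee166b1f@08085a31f8c43f278ed2f649ee166b1f" protocol="rbd">
      <host name="10.0.229.181" port="6789"/>
    </source>
    <auth username="zstack">
      <secret type="ceph" uuid="9b06bb70-dc13-4338-88fd-b0c72d5ab9e9"/>
    </auth>
    <target bus="ide" dev="hdc"/>
    <readonly/>
  </disk>
  EOF
  virsh update-device 6ec499397d594ef2a64fcfc938f38225 cdrom.xml --live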
Best Regards,
Star Guo
[libvirt-users] How can we achieve vga emulation over a serial port in libvirt
by Meina Li
Hi,
The latest SeaBIOS version says: "Support for VGA emulation over a
serial port in SeaBIOS (sercon)."
So I want to know: how can I use this feature from libvirt, and is my
understanding of it correct?
There are no related instructions on the website: https://libvirt.org/
My understanding:
(1) The feature means: we can set the IO address of a serial port on the
"video" element to enable SeaBIOS' VGA adapter emulation on the given
serial port.
(2) A libvirt XML example; with this XML the guest can start successfully:
  <controller type='virtio-serial' index='0'>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </controller>
  <video>
    <model type='vga' vram='16384' heads='1' primary='yes'/>
    <address type='virtio-serial' controller='0' bus='0' port='3'/>
  </video>
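My other guess (a sketch only, not verified): SeaBIOS reads the sercon port from the fw_cfg file etc/sercon-port, so until libvirt exposes it directly, it might be passed through with the qemu namespace; the exact fw_cfg name prefix and value format here are assumptions on my part:

  <!-- sketch: requires the qemu XML namespace on the root element -->
  <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
    <!-- rest of the domain definition unchanged -->
    <qemu:commandline>
      <qemu:arg value='-fw_cfg'/>
      <qemu:arg value='name=opt/org.seabios/etc/sercon-port,string=0x3f8'/>
    </qemu:commandline>
  </domain>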
Thanks in advance.
Best Regards
Meina Li