how does setMemory work?
by Henry lol
Hi guys,
I used setMemory to dynamically change a guest's memory on QEMU-KVM.
As expected, the guest's memory (total, free, available) really changed,
but after a few seconds it automatically reverted to its initial value.
So does setMemory change the guest's memory only temporarily?
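For context, virDomainSetMemory drives the balloon device: it adjusts the guest's current allocation, reflected as <currentMemory> in the domain XML, while <memory> stays the fixed maximum, and by default the change is live-only rather than persisted to the config. A minimal sketch of the two elements (values illustrative):

```xml
<!-- <memory> is the boot-time maximum; the balloon target that
     setMemory moves is reflected in <currentMemory> -->
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>2097152</currentMemory>
```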
Thanks.
4 years, 3 months
Two questions about NVDIMM devices
by Milan Zamazal
Hi,
I've met two situations with NVDIMM support in libvirt where I'm not
sure all the parties (libvirt & I) do the things correctly.
The first problem is with memory alignment and size changes. In
addition to the size changes applied to NVDIMMs by QEMU, libvirt also
makes some NVDIMM size changes for better alignments, in
qemuDomainMemoryDeviceAlignSize. This can lead to the size being
rounded up, exceeding the size of the backing device and QEMU failing to
start the VM for that reason (I've experienced that actually). I work
with emulated NVDIMM devices, not bare-metal hardware, so one might
argue that in practice the device sizes should already be aligned, but
I'm not sure it must be always the case considering labels or whatever
else the user decides to set up. And I still don't feel very
comfortable that I have to account for two internal size adjustments
(libvirt & QEMU) to the `size' value I specify, with the ultimate goal
of getting the VM started and having the NVDIMM aligned properly so that
(non-NVDIMM) memory hotplug works. Is the size alignment performed
by libvirt, especially rounding up, completely correct for NVDIMMs?
The second problem is that a VM fails to start with a backing NVDIMM in
devdax mode due to SELinux preventing access to the /dev/dax* device (it
doesn't happen with any other NVDIMM modes). Who should be responsible
for handling the SELinux label appropriately in that case? libvirt, the
system administrator, anybody else? Using <seclabel> in NVDIMM's source
doesn't seem to be accepted by the domain XML schema.
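For reference, the kind of device in question is described in the domain XML roughly as follows (a sketch with illustrative path and sizes, not my actual config); both the target size and the label size feed into the alignment being discussed:

```xml
<memory model='nvdimm' access='shared'>
  <source>
    <path>/dev/dax0.0</path>
  </source>
  <target>
    <size unit='KiB'>524288</size>
    <node>0</node>
    <label>
      <size unit='KiB'>128</size>
    </label>
  </target>
</memory>
```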
Thanks,
Milan
4 years, 3 months
Re: [ovirt-users] Re: Testing ovirt 4.4.1 Nested KVM on Skylake-client (core i5) does not work
by Nir Soffer
On Mon, Sep 14, 2020 at 8:42 AM Yedidyah Bar David <didi(a)redhat.com> wrote:
>
> On Mon, Sep 14, 2020 at 12:28 AM wodel youchi <wodel.youchi(a)gmail.com> wrote:
> >
> > Hi,
> >
> > Thanks for the help, I think I found the solution using this link : https://www.berrange.com/posts/2018/06/29/cpu-model-configuration-for-qem...
> >
> > When executing `virsh dumpxml` on my oVirt hypervisor I saw that the mpx flag was disabled, so I edited the XML file of the hypervisor VM: I added the already-enabled features and enabled mpx along with them. I stopped/started my hypervisor VM and voilà, the nested VM-Manager booted successfully.
> >
> >
> > <cpu mode="host-model" check="partial">
> > <feature policy="require" name="ss"/>
> > <feature policy="require" name="vmx"/>
> > <feature policy="require" name="pdcm"/>
> > <feature policy="require" name="hypervisor"/>
> > <feature policy="require" name="tsc_adjust"/>
> > <feature policy="require" name="clflushopt"/>
> > <feature policy="require" name="umip"/>
> > <feature policy="require" name="md-clear"/>
> > <feature policy="require" name="stibp"/>
> > <feature policy="require" name="arch-capabilities"/>
> > <feature policy="require" name="ssbd"/>
> > <feature policy="require" name="xsaves"/>
> > <feature policy="require" name="pdpe1gb"/>
> > <feature policy="require" name="ibpb"/>
> > <feature policy="require" name="amd-ssbd"/>
> > <feature policy="require" name="skip-l1dfl-vmentry"/>
> > <feature policy="require" name="mpx"/>
> > </cpu>
>
> Thanks for the report!
>
> Would you like to open a bug about this?
>
> A possible fix is probably to pass relevant options to the
> virt-install command in ovirt-ansible-hosted-engine-setup.
> Either always - no idea what the implications are - or
> optionally, or even allow the user to pass arbitrary options.
I don't think we need to make such a change on our side. This seems like a
hard-to-reproduce libvirt bug.
The strange thing is that after playing with the XML generated by
virt-manager, checking
[x] Copy host CPU configuration
creates this XML:
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>Skylake-Client-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='pdcm'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='clflushopt'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaves'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='amd-stibp'/>
<feature policy='require' name='amd-ssbd'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
<feature policy='require' name='pschange-mc-no'/>
<feature policy='disable' name='mpx'/>
</cpu>
Or using this XML in virt-manager:
<cpu mode="host-passthrough" check="none" migratable="on"/>
Both work with these cluster CPU Types:
- Secure Intel Skylake Client Family
- Intel Skylake Client Family
I think the best place to discuss this is libvirt-users mailing list:
https://www.redhat.com/mailman/listinfo/libvirt-users
Nir
> Thanks and best regards,
>
> >
> >
> > Regards.
> >
> > Le dim. 13 sept. 2020 à 19:47, Nir Soffer <nsoffer(a)redhat.com> a écrit :
> >>
> >> On Sun, Sep 13, 2020 at 8:32 PM wodel youchi <wodel.youchi(a)gmail.com> wrote:
> >> >
> >> > Hi,
> >> >
> >> > I've been using my core i5 6500 (skylake-client) for some time now to test oVirt on my machine.
> >> > However this is no longer the case.
> >> >
> >> > I am using Fedora 32 as my base system with nested-kvm enabled, when I try to install oVirt 4.4 as HCI single node, I get an error in the last phase which consists of copying the VM-Manager to the engine volume and boot it.
> >> > It is the boot that causes the problem; I get an error about the CPU:
> >> > the CPU is incompatible with host CPU: Host CPU does not provide required features: mpx
> >> >
> >> > This is the CPU part from virsh domcapabilities on my physical machine
> >> > <cpu>
> >> > <mode name='host-passthrough' supported='yes'/>
> >> > <mode name='host-model' supported='yes'>
> >> > <model fallback='forbid'>Skylake-Client-IBRS</model>
> >> > <vendor>Intel</vendor>
> >> > <feature policy='require' name='ss'/>
> >> > <feature policy='require' name='vmx'/>
> >> > <feature policy='require' name='pdcm'/>
> >> > <feature policy='require' name='hypervisor'/>
> >> > <feature policy='require' name='tsc_adjust'/>
> >> > <feature policy='require' name='clflushopt'/>
> >> > <feature policy='require' name='umip'/>
> >> > <feature policy='require' name='md-clear'/>
> >> > <feature policy='require' name='stibp'/>
> >> > <feature policy='require' name='arch-capabilities'/>
> >> > <feature policy='require' name='ssbd'/>
> >> > <feature policy='require' name='xsaves'/>
> >> > <feature policy='require' name='pdpe1gb'/>
> >> > <feature policy='require' name='invtsc'/>
> >> > <feature policy='require' name='ibpb'/>
> >> > <feature policy='require' name='amd-ssbd'/>
> >> > <feature policy='require' name='skip-l1dfl-vmentry'/>
> >> > </mode>
> >> > <mode name='custom' supported='yes'>
> >> > <model usable='yes'>qemu64</model>
> >> > <model usable='yes'>qemu32</model>
> >> > <model usable='no'>phenom</model>
> >> > <model usable='yes'>pentium3</model>
> >> > <model usable='yes'>pentium2</model>
> >> > <model usable='yes'>pentium</model>
> >> > <model usable='yes'>n270</model>
> >> > <model usable='yes'>kvm64</model>
> >> > <model usable='yes'>kvm32</model>
> >> > <model usable='yes'>coreduo</model>
> >> > <model usable='yes'>core2duo</model>
> >> > <model usable='no'>athlon</model>
> >> > <model usable='yes'>Westmere-IBRS</model>
> >> > <model usable='yes'>Westmere</model>
> >> > <model usable='no'>Skylake-Server-IBRS</model>
> >> > <model usable='no'>Skylake-Server</model>
> >> > <model usable='yes'>Skylake-Client-IBRS</model>
> >> > <model usable='yes'>Skylake-Client</model>
> >> > <model usable='yes'>SandyBridge-IBRS</model>
> >> > <model usable='yes'>SandyBridge</model>
> >> > <model usable='yes'>Penryn</model>
> >> > <model usable='no'>Opteron_G5</model>
> >> > <model usable='no'>Opteron_G4</model>
> >> > <model usable='no'>Opteron_G3</model>
> >> > <model usable='yes'>Opteron_G2</model>
> >> > <model usable='yes'>Opteron_G1</model>
> >> > <model usable='yes'>Nehalem-IBRS</model>
> >> > <model usable='yes'>Nehalem</model>
> >> > <model usable='yes'>IvyBridge-IBRS</model>
> >> > <model usable='yes'>IvyBridge</model>
> >> > <model usable='no'>Icelake-Server</model>
> >> > <model usable='no'>Icelake-Client</model>
> >> > <model usable='yes'>Haswell-noTSX-IBRS</model>
> >> > <model usable='yes'>Haswell-noTSX</model>
> >> > <model usable='yes'>Haswell-IBRS</model>
> >> > <model usable='yes'>Haswell</model>
> >> > <model usable='no'>EPYC-IBPB</model>
> >> > <model usable='no'>EPYC</model>
> >> > <model usable='no'>Dhyana</model>
> >> > <model usable='yes'>Conroe</model>
> >> > <model usable='no'>Cascadelake-Server</model>
> >> > <model usable='yes'>Broadwell-noTSX-IBRS</model>
> >> > <model usable='yes'>Broadwell-noTSX</model>
> >> > <model usable='yes'>Broadwell-IBRS</model>
> >> > <model usable='yes'>Broadwell</model>
> >> > <model usable='yes'>486</model>
> >> > </mode>
> >> > </cpu>
> >> >
> >> > Here is the lscpu of my physical machine
> >> > # lscpu
> >> > Architecture: x86_64
> >> > CPU op-mode(s): 32-bit, 64-bit
> >> > Byte Order: Little Endian
> >> > Address sizes: 39 bits physical, 48 bits virtual
> >> > CPU(s): 4
> >> > On-line CPU(s) list: 0-3
> >> > Thread(s) per core: 1
> >> > Core(s) per socket: 4
> >> > Socket(s): 1
> >> > NUMA node(s): 1
> >> > Vendor ID: GenuineIntel
> >> > CPU family: 6
> >> > Model: 94
> >> > Model name: Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz
> >> > Stepping: 3
> >> > CPU MHz: 954.588
> >> > CPU max MHz: 3600.0000
> >> > CPU min MHz: 800.0000
> >> > BogoMIPS: 6399.96
> >> > Virtualization: VT-x
> >> > L1d cache: 128 KiB
> >> > L1i cache: 128 KiB
> >> > L2 cache: 1 MiB
> >> > L3 cache: 6 MiB
> >> > NUMA node0 CPU(s): 0-3
> >> > Vulnerability Itlb multihit: KVM: Mitigation: Split huge pages
> >> > Vulnerability L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
> >> > Vulnerability Mds: Mitigation; Clear CPU buffers; SMT disabled
> >> > Vulnerability Meltdown: Mitigation; PTI
> >> > Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
> >> > Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
> >> > Vulnerability Spectre v2: Mitigation; Full generic retpoline, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
> >> > Vulnerability Srbds: Vulnerable: No microcode
> >> > Vulnerability Tsx async abort: Mitigation; Clear CPU buffers; SMT disabled
> >> > Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constan
> >> > t_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16
> >> > xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti ssbd
> >> > ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx rdseed adx smap clflushopt in
> >> > tel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp md_clear flush_l1d
> >> >
> >> >
> >> >
> >> > Here is the CPU part from virsh dumpxml of my ovirt hypervisor
> >> > <cpu mode='custom' match='exact' check='full'>
> >> > <model fallback='forbid'>Skylake-Client-IBRS</model>
> >> > <vendor>Intel</vendor>
> >> > <feature policy='require' name='ss'/>
> >> > <feature policy='require' name='vmx'/>
> >> > <feature policy='require' name='pdcm'/>
> >> > <feature policy='require' name='hypervisor'/>
> >> > <feature policy='require' name='tsc_adjust'/>
> >> > <feature policy='require' name='clflushopt'/>
> >> > <feature policy='require' name='umip'/>
> >> > <feature policy='require' name='md-clear'/>
> >> > <feature policy='require' name='stibp'/>
> >> > <feature policy='require' name='arch-capabilities'/>
> >> > <feature policy='require' name='ssbd'/>
> >> > <feature policy='require' name='xsaves'/>
> >> > <feature policy='require' name='pdpe1gb'/>
> >> > <feature policy='require' name='ibpb'/>
> >> > <feature policy='require' name='amd-ssbd'/>
> >> > <feature policy='require' name='skip-l1dfl-vmentry'/>
> >> > <feature policy='disable' name='mpx'/>
> >> > </cpu>
> >> >
> >> > Here is the lcpu of my ovirt hypervisor
> >> > [root@node1 ~]# lscpu
> >> > Architecture: x86_64
> >> > CPU op-mode(s): 32-bit, 64-bit
> >> > Byte Order: Little Endian
> >> > CPU(s): 4
> >> > On-line CPU(s) list: 0-3
> >> > Thread(s) per core: 1
> >> > Core(s) per socket: 1
> >> > Socket(s): 4
> >> > NUMA node(s): 1
> >> > Vendor ID: GenuineIntel
> >> > CPU family: 6
> >> > Model: 94
> >> > Model name: Intel Core Processor (Skylake, IBRS)
> >> > Stepping: 3
> >> > CPU MHz: 3191.998
> >> > BogoMIPS: 6383.99
> >> > Virtualization: VT-x
> >> > Hypervisor vendor: KVM
> >> > Virtualization type: full
> >> > L1d cache: 32K
> >> > L1i cache: 32K
> >> > L2 cache: 4096K
> >> > L3 cache: 16384K
> >> > NUMA node0 CPU(s): 0-3
> >> > Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_go
> >> > od nopl xtopology cpuid tsc_known_freq pni pclmulqdq vmx ssse3 fma cx16 pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnow
> >> > prefetch cpuid_fault invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx smap clflushopt xs
> >> > aveopt xsavec xgetbv1 xsaves arat umip md_clear arch_capabilities
> >> >
> >> > It seems not all the flags are presented to the hypervisor, especially mpx, which causes the error.
> >> >
> >> > Is there a workaround for this?
> >>
> >> I'm using a similar setup; using an older-generation CPU type works.
> >>
> >> Cluster CPU Type:
> >> Intel Broadwell Family
> >>
> >> It looks like this bug:
> >> https://bugzilla.redhat.com/1609818
> >>
> >> But it cannot be fixed by resetting the CPU type, as suggested in:
> >> https://bugzilla.redhat.com/show_bug.cgi?id=1609818#c9
> >>
> >> Nir
> >>
> >>
> > _______________________________________________
> > Users mailing list -- users(a)ovirt.org
> > To unsubscribe send an email to users-leave(a)ovirt.org
> > Privacy Statement: https://www.ovirt.org/privacy-policy.html
> > oVirt Code of Conduct: https://www.ovirt.org/community/about/community-guidelines/
> > List Archives:
>
>
>
> --
> Didi
>
4 years, 3 months
libvirt binding
by Shashwat shagun
Is the connection object a connection pool or just a single connection?
Can it be used concurrently?
Shashwat.
4 years, 3 months
Problem with a RAW Disk and Xen
by Christoph
Hi All,
In the config of two Xen DomUs I have this disk:
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/dev/mapper/keys'/>
<target dev='xvdz' bus='xen'/>
<readonly/>
</disk>
If I start the DomUs, everything starts without problems, and I see this in the
DomU's log:
{
"pdev_path": "/dev/mapper/keys",
"vdev": "xvdz",
"backend": "qdisk",
"format": "raw",
"removable": 1,
"discard_enable": "False",
"colo_enable": "False",
"colo_restore_enable": "False"
}
but the disk isn't there in the DomU... Why?
It is a shared disk between the DomUs, but it's configured as "readonly", so
there shouldn't be a problem (with the same Xen xl configuration, there
is no problem).
--
------
Greetz
4 years, 3 months
Read-only iscsi disk? Or hot-plug?
by Paul van der Vlis
Hello,
I have an iSCSI disk with a backup. I want to use that backup on another
machine to test restoring data.
What I use now is this:
----
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='directsync' io='native'/>
<source dev='/dev/disk/by-id/scsi-36e843b6afdddf65dc4e9d4dc2dab66de'/>
<target dev='vdh' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x0c'
function='0x0'/>
</disk>
----
Is it possible to do this read-only?
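A <readonly/> element in the disk definition is the usual way to do this; note, though, that libvirt has historically rejected read-only disks on the virtio-blk bus, so this sketch (target name is an assumption) moves the disk to a SCSI bus:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='directsync' io='native'/>
  <source dev='/dev/disk/by-id/scsi-36e843b6afdddf65dc4e9d4dc2dab66de'/>
  <target dev='sdh' bus='scsi'/>
  <readonly/>
</disk>
```

This assumes a SCSI controller (e.g. virtio-scsi) is present in the domain.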
Alternatively, I could hot-unplug the disk from the production
machine and then hot-plug it into the test machine. But the machines run
Windows, and I don't know Windows well.
With regards,
Paul
--
Paul van der Vlis Linux systeembeheer Groningen
https://www.vandervlis.nl/
4 years, 3 months
debian 10, vm cant connect to the host bridge
by Schuldei, Andreas
This is my system info:
Debian Release: 10.5
APT prefers stable-updates
APT policy: (500, 'stable-updates'), (500, 'stable')
Architecture: amd64 (x86_64)
Kernel: Linux 5.4.60-1-pve (SMP w/16 CPU cores)
Kernel taint flags: TAINT_PROPRIETARY_MODULE, TAINT_OOT_MODULE
Locale: LANG=en_US.UTF-8, LC_CTYPE=en_US.UTF-8 (charmap=UTF-8), LANGUAGE=en_US:en (charmap=UTF-8)
Shell: /bin/sh linked to /usr/bin/dash
Init: systemd (via /run/systemd/system)
LSM: AppArmor: enabled
Libvirt version 5.0.0
qemu
Version: 1:3.1+dfsg-8+deb10u7
I am trying to get the filtering bridge to work.
This is the host, with the br0 that is connected to a trunked port
================================
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
link/ether 34:48:ed:f0:a9:e8 brd ff:ff:ff:ff:ff:ff
inet 10.12.0.13/24 brd 10.12.0.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::3648:edff:fef0:a9e8/64 scope link
valid_lft forever preferred_lft forever
3: eno2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master br0 state UP group default qlen 1000
link/ether 34:48:ed:f0:a9:e9 brd ff:ff:ff:ff:ff:ff
inet6 fe80::3648:edff:fef0:a9e9/64 scope link
valid_lft forever preferred_lft forever
4: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether e6:67:7b:87:b5:ca brd ff:ff:ff:ff:ff:ff
inet6 fe80::e467:7bff:fe87:b5ca/64 scope link
valid_lft forever preferred_lft forever
5: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 52:54:00:2b:e3:f7 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
valid_lft forever preferred_lft forever
6: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN group default qlen 1000
link/ether 52:54:00:2b:e3:f7 brd ff:ff:ff:ff:ff:ff
19: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UNKNOWN group default qlen 1000
link/ether fe:54:00:fc:ea:e6 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fefc:eae6/64 scope link
valid_lft forever preferred_lft forever
===================
bridge vlan show
port vlan ids
eno2 4
7
221
800
br0 None
virbr0 1 PVID Egress Untagged
virbr0-nic 1 PVID Egress Untagged
vnet0 800
==================
However, the MAC does not show up when I do
==================
brctl showmacs br0
==================
so vnet0 does not yet communicate with the bridge.
inside the vm:
=============================
ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:fc:ea:e6 brd ff:ff:ff:ff:ff:ff
inet 195.37.235.121/26 brd 195.37.235.127 scope global enp1s0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:fefc:eae6/64 scope link
valid_lft forever preferred_lft forever
===============================
and
===========
ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
From 195.37.235.121 icmp_seq=1 Destination Host Unreachable
From 195.37.235.121 icmp_seq=2 Destination Host Unreachable
==============
The MAC addresses of vnet0 and enp1s0 match (apart from the first octet, which libvirt sets to fe: on the tap side). That means they are the same entity. Yay!
The XML describing the network part of the VM is here:
=====================
<interface type='bridge'>
<mac address='52:54:00:29:b6:e0'/>
<source bridge='br0'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
=======================
What could be the problem?
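One hypothesis, based on the `bridge vlan show` output above: vnet0 is only a tagged member of VLAN 800 with no PVID, so untagged frames from the guest never enter the bridge. On a plain Linux bridge, libvirt does not manage per-port VLAN membership; its <vlan> element in the interface definition takes effect with an Open vSwitch bridge. If br0 were an OVS bridge, the tagging could be expressed like this (a sketch; the virtualport type is the assumption here):

```xml
<interface type='bridge'>
  <mac address='52:54:00:29:b6:e0'/>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <!-- place the guest's untagged traffic into VLAN 800 -->
  <vlan>
    <tag id='800'/>
  </vlan>
  <model type='virtio'/>
</interface>
```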
4 years, 3 months
Error With Xeon 2678 in virt-manager
by John Bajer
Hello, I have to select Ivy Bridge since I have a Xeon 2678, but I cannot find
it in the drop-down menu in virt-manager. When I select "Copy host CPU
configuration" it changes to EPYC as soon as I click the start button.
I have Manjaro on my computer.
How can I solve this problem?
Thanks!
*John Washington Bajer*
*El Ciclista de la Triste Figura*
4 years, 3 months
printing the qemu final execution line from an xml
by daggs
Greetings,
I have a qemu command line which I want to convert to a libvirt XML, but as domxml-from-native is deprecated, I want to try it the other way around,
i.e. write a libvirt XML and dump the final qemu command line without running it.
Is there a way to do so?
Thanks,
Dagg.
4 years, 3 months
usb-hdmi-cec-adapter usb pass-through experiences
by daggs
Greetings,
I need to get a device like this: https://www.pulse-eight.com/p/104/usb-hdmi-cec-adapter
I have USB pass-through experience with some USB devices such as DTV, wireless network, thumb sticks and wireless keyboards.
I know there isn't much to be done about USB pass-through, but I did have issues with the former two; now I'm using a wireless network adapter without any issues.
I wanted to know if someone has tried using this device with pass-through into a VM and can share their thoughts.
Thanks,
Dagg.
4 years, 3 months