Re: about pcie-hot-plug on aarch64
by Andrea Bolognani
Forwarding to libvirt-users for visibility.
On Wed, Oct 20, 2021 at 03:51:10PM +0800, Jaze Lee wrote:
> For anyone who meets the same problem:
>
> I upgraded Stein to W, using one CentOS Stream 8 VM as the controller
> and one CentOS Stream 8 server as the compute node,
> then tested it. There is no problem with attaching disks; all disks are
> attached correctly.
> But we do not know what the problem with Stein is. If you meet this
> problem, you can upgrade your OpenStack.
>
> Jaze Lee <jazeltq(a)gmail.com> wrote on Tue, Oct 12, 2021 at 9:20 AM:
> > Andrea Bolognani <abologna(a)redhat.com> wrote on Tue, Oct 12, 2021 at 12:07 AM:
> > > On Fri, Oct 08, 2021 at 04:54:37PM +0800, Jaze Lee wrote:
> > > > Hello,
> > > > We run OpenStack Stein on ARM, with nova-compute (using libvirt
> > > > as the virt driver) on the ARM hosts. We found that when a VM is
> > > > built with disks (Ceph RBD) on the ARM hosts, it cannot attach all
> > > > the disks correctly. For example, when built with six disks, the VM
> > > > may only attach three of them. No obvious error can be found in
> > > > nova-compute or libvirt. Comparing aarch64 and x86, we find that
> > > > when detaching a disk, the dmesg of the VM's OS is different.
> > > > Maybe the pciehp parameter is different?
> > > >
> > > > Did anyone meet this problem? Or any suggestions?
> > >
> > > I think you might have just run out of PCI ports available for
> > > hotplug. Please try setting
> > >
> > > https://docs.openstack.org/nova/stein/configuration/config.html#libvirt.n...
> > >
> > > to a reasonable value and see whether that helps.
> >
> > Thanks.
> > I already set that value to a reasonable one; it is 15 in our environment.
> > If the value is not set correctly, nova-compute will complain that there
> > is no available slot for the PCI device.
> > But that is not the case I am talking about here.
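Each hot-plugged PCIe device on an aarch64/q35-style guest needs a free pcie-root-port controller, which is what the nova option Andrea mentions pre-allocates. A minimal stdlib sketch for counting how many such ports a guest's XML actually defines (the sample domain snippet is illustrative, not taken from the reporter's environment):

```python
import xml.etree.ElementTree as ET

def count_hotplug_ports(domain_xml: str) -> int:
    """Count pcie-root-port controllers; each one can accept a single
    hot-plugged PCIe device."""
    root = ET.fromstring(domain_xml)
    return sum(
        1
        for ctrl in root.findall("./devices/controller[@type='pci']")
        if ctrl.get("model") == "pcie-root-port"
    )

# Illustrative domain snippet with two free root ports.
SAMPLE_DOMAIN = """\
<domain type='kvm'>
  <devices>
    <controller type='pci' model='pcie-root'/>
    <controller type='pci' model='pcie-root-port'/>
    <controller type='pci' model='pcie-root-port'/>
  </devices>
</domain>
"""

print(count_hotplug_ports(SAMPLE_DOMAIN))  # 2
```

Run against `virsh dumpxml <domain>` output to check whether the guest really got as many ports as the nova option asked for.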
--
Andrea Bolognani / Red Hat / Virtualization
VM paused during multipath iSCSI reservation
by Vojtech Juranek
Hi,
I'm trying to find the root cause of BZ #1898049 [1]. When setting up a Windows HA
cluster on Windows Server VMs running on top of oVirt, the Windows cluster validator runs
a couple of tests and fails during the "Validate SCSI-3 Persistent Reservation" test, and one
of the VMs of the cluster is paused with an IO error. The disk definition is as follows:
<disk type='block' device='lun' sgio='unfiltered' snapshot='no'>
<driver name='qemu' type='raw' cache='none' error_policy='stop' io='native'/>
<source dev='/dev/mapper/3600a09803830447a4f244c4657616f6f' index='1'>
<seclabel model='dac' relabel='no'/>
<reservations managed='yes'>
<source type='unix' path='/var/lib/libvirt/qemu/domain-1-Windows-2016-2/pr-helper0.sock' mode='client'/>
</reservations>
</source>
<backingStore/>
<target dev='sdb' bus='scsi'/>
<shareable/>
<alias name='ua-26b4975e-e1d4-4e27-b2c6-2ea0894a571b'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
and the libvirt error I get is below [2].
When I try to create a reservation from the Windows VM manually, I get the following
error (though I'm not sure I'm doing the whole process correctly):
.\sg_persist --out --register --param-sark=123abc e:
QEMU QEMU HARDDISK 2.5+
Peripheral device type: disk
PR out (Register): command not supported
sg_persist failed: Illegal request, Invalid opcode
Do you have any ideas what could be wrong, or how to determine
the root cause of this issue?
Thanks in advance.
Vojta
[1] https://bugzilla.redhat.com/1898049
[2] libvirt debug log:
2021-10-12 11:43:25.148+0000: 2006427: debug : qemuMonitorEmitIOError:1243 : mon=0x7fb02006a020
2021-10-12 11:43:25.148+0000: 2006427: info : virObjectRef:402 : OBJECT_REF: obj=0x7fb02006a020
2021-10-12 11:43:25.148+0000: 2006427: info : virObjectRef:402 : OBJECT_REF: obj=0x7fafd0130020
2021-10-12 11:43:25.148+0000: 2000208: info : virObjectRef:402 : OBJECT_REF: obj=0x7fafd010d340
2021-10-12 11:43:25.148+0000: 2006427: info : virObjectNew:258 : OBJECT_NEW: obj=0x7fb020082590 classname=virDomainEventIOError
2021-10-12 11:43:25.148+0000: 2000208: info : vir_object_finalize:321 : OBJECT_DISPOSE: obj=0x7fb020082500
2021-10-12 11:43:25.148+0000: 2006427: info : virObjectNew:258 : OBJECT_NEW: obj=0x7fb020082620 classname=virDomainEventIOError
2021-10-12 11:43:25.148+0000: 2000208: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fb020082500
2021-10-12 11:43:25.148+0000: 2006427: debug : qemuProcessHandleIOError:907 : Transitioned guest Windows-2016-2 to paused state due to IO error
2021-10-12 11:43:25.148+0000: 2000208: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fafd010d340
2021-10-12 11:43:25.148+0000: 2006427: info : virObjectNew:258 : OBJECT_NEW: obj=0x7fafbc1fb8c0 classname=virDomainEventLifecycle
2021-10-12 11:43:25.148+0000: 2006427: debug : virDomainLockProcessPause:204 : plugin=0x7fafd01272a0 dom=0x7fb01400f5e0 state=0x7fb02401d768
2021-10-12 11:43:25.148+0000: 2006427: debug : virDomainLockManagerNew:134 : plugin=0x7fafd01272a0 dom=0x7fb01400f5e0 withResources=1
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerPluginGetDriver:276 : plugin=0x7fafd01272a0
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerNew:300 : driver=0x7fafd444a000 type=0 nparams=5 params=0x7fafd77de640 flags=0x0
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerLogParams:97 : key=uuid type=uuid value=70eee88c-ba2c-4c6c-bd51-c2b663db27f8
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerLogParams:90 : key=name type=string value=Windows-2016-2
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerLogParams:78 : key=id type=uint value=1
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerLogParams:78 : key=pid type=uint value=2006418
2021-10-12 11:43:25.148+0000: 2006427: debug : virLockManagerLogParams:93 : key=uri type=cstring value=(null)
2021-10-12 11:43:25.148+0000: 2006427: debug : virDomainLockManagerNew:146 : Adding leases
2021-10-12 11:43:25.148+0000: 2006427: debug : virDomainLockManagerNew:151 : Adding disks
2021-10-12 11:43:25.149+0000: 2006427: debug : virDomainLockManagerAddImage:90 : Add disk /rhev/data-center/mnt/blockSD/7c4f09b6-9e87-436f-bda9-22d1f0b50955/images/f5d6e074-dfe9-462d-8cfd-3e14b0eb5aea/766e36b2-84a6-43e7-a48b-a5f47e669860
2021-10-12 11:43:25.149+0000: 2006427: debug : virLockManagerAddResource:326 : lock=0x7fafbc19e250 type=0 name=/rhev/data-center/mnt/blockSD/7c4f09b6-9e87-436f-bda9-22d1f0b50955/images/f5d6e074-dfe9-462d-8cfd-3e14b0eb5aea/766e36b2-84a6-43e7-a48b-a5f47e669860 nparams=0 params=(nil) flags=0x0
2021-10-12 11:43:25.149+0000: 2006427: debug : virDomainLockManagerAddImage:90 : Add disk /dev/mapper/3600a09803830447a4f244c4657616f6f
2021-10-12 11:43:25.149+0000: 2006427: debug : virLockManagerAddResource:326 : lock=0x7fafbc19e250 type=0 name=/dev/mapper/3600a09803830447a4f244c4657616f6f nparams=0 params=(nil) flags=0x2
2021-10-12 11:43:25.149+0000: 2006427: debug : virLockManagerRelease:359 : lock=0x7fafbc19e250 state=0x7fb02401d768 flags=0x0
2021-10-12 11:43:25.149+0000: 2006427: debug : virLockManagerFree:381 : lock=0x7fafbc19e250
2021-10-12 11:43:25.149+0000: 2006427: debug : qemuProcessHandleIOError:920 : Preserving lock state '<null>'
2021-10-12 11:43:25.150+0000: 2006427: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fafd0130020
2021-10-12 11:43:25.150+0000: 2006427: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fb02006a020
2021-10-12 11:43:25.150+0000: 2000208: info : virObjectRef:402 : OBJECT_REF: obj=0x7fafd010d340
2021-10-12 11:43:25.150+0000: 2000208: info : vir_object_finalize:321 : OBJECT_DISPOSE: obj=0x7fb020082590
2021-10-12 11:43:25.150+0000: 2006427: info : virObjectRef:402 : OBJECT_REF: obj=0x7fb02006a020
2021-10-12 11:43:25.150+0000: 2000208: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fb020082590
2021-10-12 11:43:25.150+0000: 2006427: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fb02006a020
2021-10-12 11:43:25.150+0000: 2000208: info : virObjectNew:258 : OBJECT_NEW: obj=0x564a89a4cc60 classname=virDomain
2021-10-12 11:43:25.150+0000: 2006427: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x7fb02006a020
2021-10-12 11:43:25.150+0000: 2000208: info : virObjectRef:402 : OBJECT_REF: obj=0x7fafd0018ca0
2021-10-12 11:43:25.150+0000: 2000208: info : virObjectRef:402 : OBJECT_REF: obj=0x564a89978df0
2021-10-12 11:43:25.150+0000: 2000208: debug : virAccessManagerCheckDomain:238 : manager=0x564a89978df0(name=stack) driver=QEMU domain=0x7ffd2c677010 perm=0
2021-10-12 11:43:25.150+0000: 2000208: debug : virAccessManagerCheckDomain:238 : manager=0x564a89978e50(name=none) driver=QEMU domain=0x7ffd2c677010 perm=0
2021-10-12 11:43:25.150+0000: 2000208: info : virObjectUnref:380 : OBJECT_UNREF: obj=0x564a89978df0
2021-10-12 11:43:25.150+0000: 2000208: debug : remoteRelayDomainEventIOErrorReason:529 : Relaying domain io error Windows-2016-2 1 /dev/mapper/3600a09803830447a4f244c4657616f6f ua-26b4975e-e1d4-4e27-b2c6-2ea0894a571b 1 , callback 3
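For configurations like the one in this report, the persistent-reservation prerequisites in the disk XML (device='lun', sgio='unfiltered', a managed `<reservations>` element) can be cross-checked mechanically. A small stdlib sketch; the checklist is my own rather than an exhaustive one, and the embedded XML is a trimmed copy of the definition above:

```python
import xml.etree.ElementTree as ET

# Trimmed copy of the disk definition from the report above.
DISK_XML = """\
<disk type='block' device='lun' sgio='unfiltered'>
  <source dev='/dev/mapper/3600a09803830447a4f244c4657616f6f'>
    <reservations managed='yes'/>
  </source>
  <target dev='sdb' bus='scsi'/>
</disk>
"""

def pr_prereqs(disk_xml: str) -> dict:
    """Report which SCSI-3 persistent-reservation prerequisites the
    <disk> element satisfies."""
    disk = ET.fromstring(disk_xml)
    res = disk.find("./source/reservations")
    return {
        "lun_passthrough": disk.get("device") == "lun",
        "sgio_unfiltered": disk.get("sgio") == "unfiltered",
        "managed_reservations": res is not None and res.get("managed") == "yes",
    }

print(pr_prereqs(DISK_XML))  # all three entries should be True here
```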
could not find capabilities for arch=x86_64 domaintype=kvm
by shafnamol N
Hi,
I am new to libvirt and QEMU. I have installed libvirt 6.10 and qemu-kvm
4.2.0 on CentOS 8.
I configured and built libvirt based on the instructions from
https://libvirt.org/compiling.html.
But when I tried to create a VM using virsh, it showed the following error:
*# virsh create /home/abc.xml*
*error: Failed to create domain from /home/abc.xml*
*error: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm*
When I check the hypervisor capabilities, it doesn't show qemu among the
guest domain types:
# virsh capabilities
As per a reply from this mailing list, I passed an option while building
libvirt, i.e.:
*meson build -Dsystem=true*
It worked for me.
But now I have tried to migrate my work to a new machine with the same OS
and same versions as above. As part of that, I configured and built
libvirt, again passing *meson build -Dsystem=true*.
After installation I tried:
*# virsh create /home/abc.xml*
It results in the same error shown below:
*error: Failed to create domain from /home/abc.xml*
*error: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm*
I'm stuck here.
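This error means the requested arch/domaintype pair is absent from the capabilities XML, typically because libvirtd cannot find a KVM-capable QEMU binary or /dev/kvm. A stdlib sketch of the check virsh is effectively doing, applied to `virsh capabilities` output (the sample XML is illustrative):

```python
import xml.etree.ElementTree as ET

def kvm_supported(capabilities_xml: str, arch: str = "x86_64") -> bool:
    """True if the capabilities XML advertises a kvm domain type
    for the given guest architecture."""
    root = ET.fromstring(capabilities_xml)
    for guest in root.findall("./guest"):
        a = guest.find("./arch")
        if a is not None and a.get("name") == arch:
            if a.find("./domain[@type='kvm']") is not None:
                return True
    return False

# Illustrative capabilities advertising only plain qemu, no kvm:
SAMPLE_CAPS = """\
<capabilities>
  <guest>
    <os_type>hvm</os_type>
    <arch name='x86_64'>
      <domain type='qemu'/>
    </arch>
  </guest>
</capabilities>
"""

print(kvm_supported(SAMPLE_CAPS))  # False
```

If this returns False on your capabilities output, check that /dev/kvm exists and that the qemu-kvm binary was visible to libvirtd when it started.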
Thanks for the help in advance.
Shafnamol.N
about pcie-hot-plug on aarch64
by Jaze Lee
Hello,
We run OpenStack Stein on ARM, with nova-compute (using libvirt
as the virt driver) on the ARM hosts. We found that when a VM is built
with disks (Ceph RBD) on the ARM hosts, it cannot attach all the disks
correctly. For example, when built with six disks, the VM may only attach
three of them. No obvious error can be found in nova-compute or libvirt.
Comparing aarch64 and x86, we find that when detaching a disk, the dmesg
of the VM's OS is different. Maybe the pciehp parameter is different?
Did anyone meet this problem? Or any suggestions?
x86:
Nothing at all
aarch64:
Sep 29 15:28:55 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5):
Attention button pressed
Sep 29 15:28:55 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5):
Powering off due to button press
Sep 29 15:29:00 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5):
Attention button pressed
Sep 29 15:29:00 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5): Button cancel
Sep 29 15:29:00 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5):
Action canceled due to button press
Sep 29 15:29:07 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5):
Attention button pressed
Sep 29 15:29:07 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5):
Powering off due to button press
Sep 29 15:29:13 * kernel: pciehp 0000:00:01.5:pcie004: Slot(0-5): Link Up
Sep 29 15:29:16 * kernel: pciehp 0000:00:01.5:pcie004: Failed to check
link status
Sep 29 15:29:18 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6):
Attention button pressed
Sep 29 15:29:18 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6):
Powering off due to button press
Sep 29 15:29:23 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6):
Attention button pressed
Sep 29 15:29:23 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6): Button cancel
Sep 29 15:29:23 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6):
Action canceled due to button press
Sep 29 15:29:30 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6):
Attention button pressed
Sep 29 15:29:30 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6):
Powering off due to button press
Sep 29 15:29:36 * kernel: pciehp 0000:00:01.6:pcie004: Slot(0-6): Link Up
Sep 29 15:29:39 * kernel: pciehp 0000:00:01.6:pcie004: Failed to check
link status
Sep 29 15:29:39 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7):
Attention button pressed
Sep 29 15:29:39 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7):
Powering off due to button press
Sep 29 15:29:45 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7):
Attention button pressed
Sep 29 15:29:45 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7): Button cancel
Sep 29 15:29:45 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7):
Action canceled due to button press
Sep 29 15:29:52 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7):
Attention button pressed
Sep 29 15:29:52 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7):
Powering off due to button press
Sep 29 15:29:58 * kernel: pciehp 0000:00:01.7:pcie004: Slot(0-7): Link Up
Sep 29 15:30:01 * kernel: pciehp 0000:00:02.0:pcie004: Slot(0-8):
Attention button pressed
Sep 29 15:30:01 * kernel: pciehp 0000:00:02.0:pcie004: Slot(0-8):
Powering off due to button press
re: nwfilter direction not being used when protocol all
by Jason Pyeron
> -----Original Message-----
> From: Jason Pyeron
> Sent: Monday, October 11, 2021 8:49 AM
> To: Kyle Marek; Michael Watson Jr
> Cc: libvirt-users
>
> Watson / Kyle:
>
> (note I copied the list)
>
> While I read https://libvirt.org/formatnwfilter.html#nwfelemsRulesProtoMisc , it is not
> clear that it is intended to add the iptables action without regard to the rule’s
> direction.
>
> Take the following rule scenarios:
>
> <rule action='accept' direction='in' priority='500' statematch='false'>
> <tcp dstportstart='22'/>
> </rule>
> <rule action='drop' direction='in' priority='1000'>
> <all/>
> </rule>
>
> # iptables-save | grep vnet5 | tee in
> :FI-vnet5 - [0:0]
> :FO-vnet5 - [0:0]
> :HI-vnet5 - [0:0]
> -A FI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
> -A FI-vnet5 -j DROP
> -A FO-vnet5 -p tcp -m tcp --dport 22 -j ACCEPT
> -A FO-vnet5 -j DROP
> -A HI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
> -A HI-vnet5 -j DROP
> -A libvirt-host-in -m physdev --physdev-in vnet5 -g HI-vnet5
> -A libvirt-in -m physdev --physdev-in vnet5 -g FI-vnet5
> -A libvirt-in-post -m physdev --physdev-in vnet5 -j ACCEPT
> -A libvirt-out -m physdev --physdev-out vnet5 --physdev-is-bridged -g FO-vnet5
>
> <rule action='accept' direction='in' priority='500' statematch='false'>
> <tcp dstportstart='22'/>
> </rule>
> <rule action='drop' direction='out' priority='1000'>
> <all/>
> </rule>
>
> # iptables-save | grep vnet5 | tee out
> :FI-vnet5 - [0:0]
> :FO-vnet5 - [0:0]
> :HI-vnet5 - [0:0]
> -A FI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
> -A FI-vnet5 -j DROP
> -A FO-vnet5 -p tcp -m tcp --dport 22 -j ACCEPT
> -A FO-vnet5 -j DROP
> -A HI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
> -A HI-vnet5 -j DROP
> -A libvirt-host-in -m physdev --physdev-in vnet5 -g HI-vnet5
> -A libvirt-in -m physdev --physdev-in vnet5 -g FI-vnet5
> -A libvirt-in-post -m physdev --physdev-in vnet5 -j ACCEPT
> -A libvirt-out -m physdev --physdev-out vnet5 --physdev-is-bridged -g FO-vnet5
>
> <rule action='accept' direction='in' priority='500' statematch='false'>
> <tcp dstportstart='22'/>
> </rule>
> <rule action='drop' direction='inout' priority='1000'>
> <all/>
> </rule>
>
> # iptables-save | grep vnet5 | tee inout
> :FI-vnet5 - [0:0]
> :FO-vnet5 - [0:0]
> :HI-vnet5 - [0:0]
> -A FI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
> -A FI-vnet5 -j DROP
> -A FO-vnet5 -p tcp -m tcp --dport 22 -j ACCEPT
> -A FO-vnet5 -j DROP
> -A HI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
> -A HI-vnet5 -j DROP
> -A libvirt-host-in -m physdev --physdev-in vnet5 -g HI-vnet5
> -A libvirt-in -m physdev --physdev-in vnet5 -g FI-vnet5
> -A libvirt-in-post -m physdev --physdev-in vnet5 -j ACCEPT
> -A libvirt-out -m physdev --physdev-out vnet5 --physdev-is-bridged -g FO-vnet5
>
> We note that the
>
> -A HI-vnet5 -j DROP
> -A FI-vnet5 -j DROP
> -A FO-vnet5 -j DROP
>
> is present regardless of the direction attribute on the “default” drop
> rule.
>
> If the direction is “in”, then the “-A FI-vnet5 -j DROP” rule should not exist.
>
> What does the source code say? I worry that either the docs are imprecise and this is
> desired, or there is a bug and I can end up like
> https://superuser.com/questions/1660080/in-libvirt-network-filters-nwfilt...
After looking at libvirt-4.5.0/src/nwfilter/nwfilter_ebiptables_driver.c's _iptablesCreateRuleInstance and iptablesCreateRuleInstanceStateCtrl, I saw if statements like the ones below.
1598 if (directionIn && !inout) {
1599 if ((rule->flags & IPTABLES_STATE_FLAGS))
1600 create = false;
1601 }
1629 if (!directionIn) {
1630 if ((rule->flags & IPTABLES_STATE_FLAGS))
1631 create = false;
1632 }
Is the only way to respect the direction to have <all state='something...'/> ?
If that is the case, the docs really need an update to note this.
For others, my deny-inbound, allow-outbound policy was accomplished by:
<rule action='accept' direction='in' priority='999'>
<all state='ESTABLISHED,RELATED'/>
</rule>
<rule action='drop' direction='in' priority='1000'>
<all state='NONE'/>
</rule>
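Putting the workaround together, a complete filter definition might look like the sketch below (the filter name and the ssh accept rule are illustrative, not from Jason's actual deployment):

```xml
<filter name='deny-in-allow-out' chain='root'>
  <!-- allow new inbound ssh -->
  <rule action='accept' direction='in' priority='500' statematch='false'>
    <tcp dstportstart='22'/>
  </rule>
  <!-- allow replies to connections the guest initiated -->
  <rule action='accept' direction='in' priority='999'>
    <all state='ESTABLISHED,RELATED'/>
  </rule>
  <!-- drop all other new inbound traffic -->
  <rule action='drop' direction='in' priority='1000'>
    <all state='NONE'/>
  </rule>
</filter>
```

Such a filter would be loaded with `virsh nwfilter-define` and referenced from the interface's `<filterref>` element.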
-Jason
nwfilter direction not being used when protocol all
by Jason Pyeron
Watson / Kyle:
(note I copied the list)
While I read https://libvirt.org/formatnwfilter.html#nwfelemsRulesProtoMisc , it is not clear that it is intended to add the iptables action without regard to the rule’s direction.
Take the following rule scenarios:
<rule action='accept' direction='in' priority='500' statematch='false'>
<tcp dstportstart='22'/>
</rule>
<rule action='drop' direction='in' priority='1000'>
<all/>
</rule>
# iptables-save | grep vnet5 | tee in
:FI-vnet5 - [0:0]
:FO-vnet5 - [0:0]
:HI-vnet5 - [0:0]
-A FI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
-A FI-vnet5 -j DROP
-A FO-vnet5 -p tcp -m tcp --dport 22 -j ACCEPT
-A FO-vnet5 -j DROP
-A HI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
-A HI-vnet5 -j DROP
-A libvirt-host-in -m physdev --physdev-in vnet5 -g HI-vnet5
-A libvirt-in -m physdev --physdev-in vnet5 -g FI-vnet5
-A libvirt-in-post -m physdev --physdev-in vnet5 -j ACCEPT
-A libvirt-out -m physdev --physdev-out vnet5 --physdev-is-bridged -g FO-vnet5
<rule action='accept' direction='in' priority='500' statematch='false'>
<tcp dstportstart='22'/>
</rule>
<rule action='drop' direction='out' priority='1000'>
<all/>
</rule>
# iptables-save | grep vnet5 | tee out
:FI-vnet5 - [0:0]
:FO-vnet5 - [0:0]
:HI-vnet5 - [0:0]
-A FI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
-A FI-vnet5 -j DROP
-A FO-vnet5 -p tcp -m tcp --dport 22 -j ACCEPT
-A FO-vnet5 -j DROP
-A HI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
-A HI-vnet5 -j DROP
-A libvirt-host-in -m physdev --physdev-in vnet5 -g HI-vnet5
-A libvirt-in -m physdev --physdev-in vnet5 -g FI-vnet5
-A libvirt-in-post -m physdev --physdev-in vnet5 -j ACCEPT
-A libvirt-out -m physdev --physdev-out vnet5 --physdev-is-bridged -g FO-vnet5
<rule action='accept' direction='in' priority='500' statematch='false'>
<tcp dstportstart='22'/>
</rule>
<rule action='drop' direction='inout' priority='1000'>
<all/>
</rule>
# iptables-save | grep vnet5 | tee inout
:FI-vnet5 - [0:0]
:FO-vnet5 - [0:0]
:HI-vnet5 - [0:0]
-A FI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
-A FI-vnet5 -j DROP
-A FO-vnet5 -p tcp -m tcp --dport 22 -j ACCEPT
-A FO-vnet5 -j DROP
-A HI-vnet5 -p tcp -m tcp --sport 22 -j RETURN
-A HI-vnet5 -j DROP
-A libvirt-host-in -m physdev --physdev-in vnet5 -g HI-vnet5
-A libvirt-in -m physdev --physdev-in vnet5 -g FI-vnet5
-A libvirt-in-post -m physdev --physdev-in vnet5 -j ACCEPT
-A libvirt-out -m physdev --physdev-out vnet5 --physdev-is-bridged -g FO-vnet5
We note that
-A HI-vnet5 -j DROP
-A FI-vnet5 -j DROP
-A FO-vnet5 -j DROP
are present regardless of the direction attribute on the “default” drop rule.
If the direction is “in”, then the “-A FI-vnet5 -j DROP” rule should not exist.
What does the source code say? I worry that either the docs are imprecise and this is desired, or there is a bug and I can end up like https://superuser.com/questions/1660080/in-libvirt-network-filters-nwfilt...
As this is going to be a generic rule, applied many times, I would prefer not to have MAC-based source allow rules.
-Jason
how to get enabled features
by Jiatong Shen
Hello community,
I am trying to learn how to use DTrace and SystemTap with libvirt. The
documentation says this requires building with `--with-dtrace`. I am
installing libvirt using apt; is it possible to determine whether the
DTrace probes were built in by running some libvirt command?
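One possible shortcut (an assumption about how the package was built, so verify on your system): on Linux, probes enabled with `--with-dtrace` are normally compiled in as SystemTap SDT markers, which show up as `stapsdt` ELF notes, e.g. via `readelf -n /usr/sbin/libvirtd | grep stapsdt`, without running libvirt at all. A small sketch that extracts the provider names from that output (the sample output fragment is illustrative):

```python
import re

def sdt_providers(readelf_notes: str) -> set:
    """Extract SystemTap SDT provider names from `readelf -n` output."""
    return set(re.findall(r"Provider:\s+(\S+)", readelf_notes))

# Illustrative fragment of `readelf -n /usr/sbin/libvirtd` output:
SAMPLE_NOTES = """\
Displaying notes found in: .note.stapsdt
  stapsdt              0x00000038       NT_STAPSDT (SystemTap probe descriptors)
    Provider: libvirt
    Name: rpc_server_client_msg_rx
"""

print(sdt_providers(SAMPLE_NOTES))  # {'libvirt'}
```

An empty result from the real binary would suggest the distro build does not include the probes.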
Thank you.
--
Best Regards,
Jiatong Shen
Need more doc for libvirt-console-proxy
by Guy Godfroy
Hello,
I'm making a web app for my company that will enable different teams to
manage their own VMs. I wish to make it possible to interact with each VM's
console, so I plan to use xterm.js with websockets.
So I discovered libvirt-console-proxy [1] when I looked for something to
put a libvirt console into a websocket. That seems like the right tool
for the job.
The only doc I found is this article from 2017 [2]. After trying to
understand from the article and from --help, I still have many questions.
I am really bad at reading code, so I can't even get answers from the
sources.
My main concern is: how is a client supposed to talk to the proxy? It is
said that a security token must be provided. How? An HTTP header? Which
header? Am I missing something in the websocket protocol? I think an example
client implementation would help a lot.
Also, I tried to use virtconsoleresolveradm to set up metadata on my
domains as explained in the article [2]:
./virtconsoleresolveradm enable milou
Enabled access to domain 'milou'
But that doesn't seem to do anything (except defining the metadata
namespace in the XML):
virsh metadata milou http://libvirt.org/schemas/console-proxy/1.0
<consoles/>
Note that I already have this in my XML:
<serial type='pty'>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
Should I remove that? Should I edit that?
Thanks for your help.
Guy Godfroy
[1] https://gitlab.com/libvirt/libvirt-console-proxy
[2]
https://www.berrange.com/posts/2017/01/26/announce-new-libvirt-console-pr...
how/where to configure access to libvirt-sock-ro
by Marc
I wanted to create a monitoring user that can do some reporting like this:
runuser -u xxxxx -- prometheus-libvirt-exporter -libvirt.uri /var/run/libvirt/libvirt-sock-ro
But I am getting the error:
failed to connect: authentication required
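The read-only socket's authentication is governed by /etc/libvirt/libvirtd.conf; on many distros it defaults to polkit, which would produce exactly this error for an unprivileged user. A sketch of the relevant settings (defaults vary by distro, so check yours before relaxing them):

```
# /etc/libvirt/libvirtd.conf

# Authentication scheme for the read-only socket; "none" lets any
# local user connect, "polkit" (a common default) requires authorization.
auth_unix_ro = "none"

# Permissions on the read-only socket itself.
unix_sock_ro_perms = "0777"
```

After editing, restart the daemon; note that on systemd socket-activated installs the socket permissions may instead come from the libvirtd-ro.socket unit.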