[libvirt-users] error: internal error: Unable to parse 'rbps=max wbps=max riops=100 wiops=max' as an integer
by Oliver Dzombic
Hi,
I am running:
libvirt-5.6.0-5.fc31.x86_64
5.4.8-200.fc31.x86_64
Using:
<blkiotune>
<device>
<path>/dev/sda1</path>
<read_iops_sec>100</read_iops_sec>
<write_iops_sec>100</write_iops_sec>
<read_bytes_sec>51200000</read_bytes_sec>
<write_bytes_sec>51200000</write_bytes_sec>
</device>
</blkiotune>
and receiving
error: internal error: Unable to parse 'rbps=max wbps=max riops=100
wiops=max' as an integer
when I try to start it.
I found a case on Red Hat Bugzilla that claims it is solved by
Fixed In Version: libvirt-5.6.0-3.el8
while I am running 5.6.0-5.
Maybe it's simply because cgroup v1 is not correctly mounted:
# mount | grep cgroup
cgroup2 on /sys/fs/cgroup type cgroup2
(rw,nosuid,nodev,noexec,relatime,nsdelegate)
# cat /proc/self/cgroup
0::/user.slice/user-0.slice/session-1.scope
But I didn't find anything on how to activate that properly.
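The only workaround I came across (untested, and assuming the v2-only
hierarchy really is the cause) is to switch the host back to the legacy
cgroup v1 hierarchy via the kernel command line, e.g. with Fedora's grubby:
# grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
# reboot
after which the v1 controllers should be mounted again under
/sys/fs/cgroup:
# mount | grep cgroup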
I would be thankful for any suggestions.
Thank you!
--
Best regards
Oliver Dzombic
Layer7 Networks
mailto:info@layer7.net
Address:
Layer7 Networks GmbH
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 96293, Hanau District Court
Managing Director: Oliver Dzombic
VAT ID: DE259845632
[libvirt-users] Fwd: (no subject)
by Eyüp Hakan Duran
Thank you so much for your informative response. The man page of virsh did
not include the "snapshot=no" sub-option under --diskspec, but it is
very intuitive. Thanks to the developers for their excellent work!
Hakan
On Mon, Jan 6, 2020 at 02:57, Peter Krempa <pkrempa(a)redhat.com> wrote:
> On Sun, Jan 05, 2020 at 17:21:52 -0600, Eyüp Hakan Duran wrote:
> > Dear all,
> > Please let me start by indicating that I am not from a technical
> > background, so please be gentle and patient with me.
> >
> > I am trying to get a snapshot from my virtual machines (vm) and the
> > following
> > command works for all of them bar one:
> >
> > # virsh snapshot-create-as --quiesce --no-metadata --domain myvm
> > myvm-state --diskspec vda,file=overlay.qcow2 --disk-only --atomic
>
> You can drop --atomic if you use any qemu released in the last 5
> years, as the snapshot is always atomic when qemu supports the
> 'transaction' command.
>
> >
> > The only exception is this one vm, which has two disks as two separate
> > qcow2 files: vda and vdb. Vdb contains my Nextcloud data, resides on a
> > btrfs subvolume, and a daily snapshot of this subvolume is taken by a
> > cronjob on the host machine. Therefore, I do not want to include it in
> > the snapshot taken by virsh, so I did not include vdb as a separate
> > --diskspec item and used the same command indicated above. However,
> > this fails with the following behavior: a state (or rather overlay)
> > file with a qcow2 extension is created in the directory on the host
> > machine where the image of vdb exists. My question is the following:
> > Is there a way to direct virsh to take a snapshot while ignoring one
> > of the disks? Yes, I can always create that second snapshot/overlay
>
> Sure. Just use a second --diskspec vdb,snapshot=no
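> For reference, a full invocation with vdb excluded would then look
> something like this (an untested sketch, reusing the names from above):
>
> # virsh snapshot-create-as --quiesce --no-metadata --domain myvm \
>     myvm-state --diskspec vda,file=overlay.qcow2 \
>     --diskspec vdb,snapshot=no --disk-only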
>
> > image of vdb and delete it later, but it doesn't feel very intuitive
> > or efficient. However, it is quite possible that I may be completely
> > overlooking an important aspect of the process, and this may not be
> > possible due to that :).
>
> One disadvantage of the above operation is that the snapshot of vdb is
> not from the same time as vda, but if that doesn't pose a problem in
> your scenario it's okay to do it that way.
>
> >
> > Thanks for your inputs in advance.
> >
> > Hakan Duran
>
[libvirt-users] Locking without virtlockd (or sanlock)?
by Gionatan Danti
Hi list,
I would like to ask for a clarification about how locking works. My test
system is CentOS 7.7 with libvirt-4.5.0-23.el7_7.1.x86_64.
I was under the impression that, by default, libvirt does not use any locks.
From here [1]: "The out of the box configuration, however, currently
uses the nop lock manager plugin". As "lock_manager" is commented out in my
qemu.conf file, I was expecting that no locks were used to protect my
virtual disk from guest double-start or misassignment to other vms.
However, "cat /proc/locks" shows the following (17532905 being the vdisk
inode):
[root@localhost tmp]# cat /proc/locks | grep 17532905
42: OFDLCK ADVISORY READ -1 fd:00:17532905 201 201
43: OFDLCK ADVISORY READ -1 fd:00:17532905 100 101
Indeed, trying to associate the disk with another machine and booting it
gives me an error (stating that the disk is already in use).
After enabling the "lockd" plugin and starting the same machine, "cat
/proc/locks" looks different:
[root@localhost tmp]# cat /proc/locks | grep 17532905
31: POSIX ADVISORY WRITE 19266 fd:00:17532905 0 0
32: OFDLCK ADVISORY READ -1 fd:00:17532905 201 201
33: OFDLCK ADVISORY READ -1 fd:00:17532905 100 101
As you can see, an *additional* write lock was granted. Again, assigning
the disk to another vm and booting it up ends with the same error.
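For reference, this is how I enabled the plugin for the test above
(assuming the stock CentOS file locations):
/etc/libvirt/qemu.conf:
lock_manager = "lockd"
followed by:
# systemctl restart virtlockd libvirtd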
So, may I ask:
- why does libvirtd request READ locks even with the "lock_manager"
option commented out?
- does it mean that I can avoid modifying anything and rely on libvirtd
to correctly lock image files?
- if so, for what use cases should I use virtlockd?
Thanks.
[1] https://libvirt.org/locking-lockd.html
--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] aarch64 vm doesn't boot
by daggs
Greetings,
I'm trying to bring up an Alpine RPi aarch64 image within KVM, but I end
up with a stuck system. Here is the XML:
<domain type='qemu'>
<name>alpine_rpi4_dev_machine</name>
<uuid>b1b155fc-cb92-4f22-8904-c934dd24415b</uuid>
<memory unit='KiB'>4194304</memory>
<currentMemory unit='KiB'>4194304</currentMemory>
<vcpu placement='static'>4</vcpu>
<os>
<type arch='aarch64' machine='virt'>hvm</type>
</os>
<features>
<gic version='2'/>
</features>
<cpu mode='custom' match='exact'>
<model fallback='allow'>cortex-a53</model>
</cpu>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>restart</on_crash>
<devices>
<emulator>/usr/bin/qemu-system-aarch64</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/dagg/alpine-rpi4.qcow2'/>
<target dev='vda' bus='virtio'/>
<boot order='2'/>
<address type='virtio-mmio'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source file='/home/dagg/alpine-virt-3.11.2-aarch64.iso'/>
<target dev='sdb' bus='scsi'/>
<readonly/>
<boot order='1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
<address type='virtio-mmio'/>
</controller>
<controller type='pci' index='0' model='pcie-root'/>
<controller type='pci' index='1' model='dmi-to-pci-bridge'>
<model name='i82801b11-bridge'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x0'/>
</controller>
<controller type='pci' index='2' model='pci-bridge'>
<model name='pci-bridge'/>
<target chassisNr='2'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x01' function='0x0'/>
</controller>
<controller type='virtio-serial' index='0'>
<address type='virtio-mmio'/>
</controller>
<interface type='network'>
<mac address='52:54:00:e0:7a:7b'/>
<source network='default'/>
<model type='virtio'/>
<address type='virtio-mmio'/>
</interface>
<serial type='pty'>
<target port='0'/>
</serial>
<console type='pty'>
<target type='serial' port='0'/>
</console>
<console type='pty'>
<target type='virtio' port='1'/>
</console>
</devices>
</domain>
generated using this command:
virt-install --cpu cortex-a53 --name alpine_rpi4_dev_machine --cdrom ./alpine-virt-3.11.2-aarch64.iso --disk path=alpine-rpi4.qcow2,size=8 --vcpus 4 --memory 2048 --os-type linux --arch aarch64
I've tried adding a VNC server and a VGA device, but the screen stays black; qxl doesn't work.
I'm using Ubuntu 16.04 with libvirt 1.3.1; if this is a version issue, I can upgrade to the latest version.
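One thing I'm not sure about: the 'virt' machine type has no built-in
firmware, so maybe the guest needs an explicit UEFI loader. Something
like this, perhaps (a sketch only; the AAVMF paths are an assumption
based on the qemu-efi package, I haven't tested it):
<os>
<type arch='aarch64' machine='virt'>hvm</type>
<!-- assumed firmware paths; adjust to where your distro ships AAVMF/edk2 -->
<loader readonly='yes' type='pflash'>/usr/share/AAVMF/AAVMF_CODE.fd</loader>
<nvram>/var/lib/libvirt/qemu/nvram/alpine_rpi4_dev_machine_VARS.fd</nvram>
</os>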
What am I missing?
Thanks,
Dagg.
[libvirt-users] (no subject)
by Eyüp Hakan Duran
Dear all,
Please let me start by indicating that I am not from a technical
background, so please be gentle and patient with me.
I am trying to get a snapshot from my virtual machines (vm) and the
following command works for all of them bar one:
# virsh snapshot-create-as --quiesce --no-metadata --domain myvm myvm-state
--diskspec vda,file=overlay.qcow2 --disk-only --atomic
The only exception is this one vm, which has two disks as two separate
qcow2 files: vda and vdb. Vdb contains my Nextcloud data, resides on a
btrfs subvolume, and a daily snapshot of this subvolume is taken by a
cronjob on the host machine. Therefore, I do not want to include it in
the snapshot taken by virsh, so I did not include vdb as a separate
--diskspec item and used the same command indicated above. However,
this fails with the following behavior: a state (or rather overlay)
file with a qcow2 extension is created in the directory on the host
machine where the image of vdb exists. My question is the following:
Is there a way to direct virsh to take a snapshot while ignoring one
of the disks? Yes, I can always create that second snapshot/overlay
image of vdb and delete it later, but it doesn't feel very intuitive
or efficient. However, it is quite possible that I may be completely
overlooking an important aspect of the process, and this may not be
possible due to that :).
Thanks for your inputs in advance.
Hakan Duran
[libvirt-users] Passing multiple addresses with masks to nwfilter
by Brooks Swinnerton
Hello,
I have an nwfilter that I'm using to ensure that libvirt domains can't spoof
IPv6 traffic. It looks like this:
<filter name='no-ipv6-spoofing' chain='ipv6-ip' priority='-710'>
<rule action='return' direction='out' priority='500'>
<ipv6 srcipaddr='$IPV6' srcipmask='$IPV6_MASK'/>
</rule>
<rule action='drop' direction='out' priority='1000'/>
</filter>
The goal is to allow any traffic coming from the entire prefix (e.g.
2001:db8::/32). This theoretically would work fine when passing in the
variables from the domain definition like so:
<filterref filter='no-ipv6-spoofing'>
<parameter name='IPV6' value='2001:db8:1:6:dc:d2ff:fef2:2181'/>
<parameter name='IPV6_MASK' value='32'/>
</filterref>
But the problem arises when I want to allow multiple prefixes (and thus
multiple $IPV6 and $IPV6_MASK values). If there is more than one
definition of $IPV6, how can I associate each one with its corresponding
$IPV6_MASK?
Ideally I would be able to pass an address in CIDR notation directly to
srcipaddr, but I don't believe that's an option.
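One thing I noticed in the nwfilter documentation (untested on my side,
so treat this as a sketch) is the variable-iterator syntax: if both
variables are accessed through the same iterator ID, the Nth entries
should be paired instead of producing a cross product:
<filter name='no-ipv6-spoofing' chain='ipv6-ip' priority='-710'>
<rule action='return' direction='out' priority='500'>
<!-- same iterator [@1] on both variables pairs entry N with entry N -->
<ipv6 srcipaddr='$IPV6[@1]' srcipmask='$IPV6_MASK[@1]'/>
</rule>
<rule action='drop' direction='out' priority='1000'/>
</filter>
with the prefixes passed as repeated parameters (example values only):
<filterref filter='no-ipv6-spoofing'>
<parameter name='IPV6' value='2001:db8::'/>
<parameter name='IPV6_MASK' value='32'/>
<parameter name='IPV6' value='2001:db9::'/>
<parameter name='IPV6_MASK' value='32'/>
</filterref>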
Any guidance would be appreciated. The ultimate goal is to automate this
process, so having something like $IPV6_1 and $IPV6_2 would be less than
ideal.