Re: hdd kills vm
by daggs
> Sent: Thursday, October 26, 2023 at 9:50 AM
> From: "Martin Kletzander" <mkletzan(a)redhat.com>
> To: "daggs" <daggs(a)gmx.com>
> Cc: libvir-list(a)redhat.com
> Subject: Re: hdd kills vm
>
> On Wed, Oct 25, 2023 at 03:06:55PM +0200, daggs wrote:
> >> Sent: Tuesday, October 24, 2023 at 5:28 PM
> >> From: "Martin Kletzander" <mkletzan(a)redhat.com>
> >> To: "daggs" <daggs(a)gmx.com>
> >> Cc: libvir-list(a)redhat.com
> >> Subject: Re: hdd kills vm
> >>
> >> On Mon, Oct 23, 2023 at 04:59:08PM +0200, daggs wrote:
> >> >Greetings Martin,
> >> >
> >> >> Sent: Sunday, October 22, 2023 at 12:37 PM
> >> >> From: "Martin Kletzander" <mkletzan(a)redhat.com>
> >> >> To: "daggs" <daggs(a)gmx.com>
> >> >> Cc: libvir-list(a)redhat.com
> >> >> Subject: Re: hdd kills vm
> >> >>
> >> >> On Fri, Oct 20, 2023 at 02:42:38PM +0200, daggs wrote:
> >> >> >Greetings,
> >> >> >
> >> >> >I have a Windows 11 VM running on my Gentoo host using libvirt (9.8.0) + qemu (8.1.2). I'm passing almost all available resources to the VM
> >> >> >(all 16 CPUs, 31 out of 32 GB, and the nVidia GPU is passed through), but the performance is not good: the system lags and takes a long time to boot.
> >> >>
> >> >> There are a couple of things that stand out to me in your setup, and I'll
> >> >> assume the host has one NUMA node with 8 cores, each with 2 threads,
> >> >> just like you set it up in the guest XML.
> >> >thats correct, see:
> >> >$ lscpu | grep -i numa
> >> >NUMA node(s): 1
> >> >NUMA node0 CPU(s): 0-15
> >> >
> >> >however:
> >> >$ dmesg | grep -i numa
> >> >[ 0.003783] No NUMA configuration found
> >> >
> >> >can that be the reason?
> >> >
> >>
> >> no, this is fine; technically a single NUMA node is not really NUMA at all,
> >> so that's perfectly fine.
> >thanks for clarifying it for me
> >
> >>
> >> >>
> >> >> * When you give the guest all the CPUs the host has there is nothing
> >> >> left to run the host tasks. You might think that there "isn't
> >> >> anything running", but there is, if only your init system, the kernel
> >> >> and the QEMU which is emulating the guest. This is definitely one of
> >> >> the bottlenecks.
> >> >I've tried with 12 out of 16, same behavior.
> >> >
> >> >>
> >> >> * The pinning of vCPUs to CPUs is half-suspicious. If you are trying to
> >> >> make vCPU 0 and 1 be threads on the same core and on the host the
> >> >> threads are represented as CPUs 0 and 8, then that's fine. If that is
> >> >> just copy-pasted from somewhere, then it might not reflect the current
> >> >> situation and can be a source of many scheduling issues (even once the
> >> >> above is dealt with).
> >> >I found a site that does it for you; if it is wrong, can you point me to a place where I can read about it?
> >> >
> >>
> >> Just check what the topology is on the host and try to match it with the
> >> guest one. If in doubt, then try it without the pinning.
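
A quick way to see which host CPUs are sibling threads of the same core (a generic command, not specific to this box):

lscpu --extended=CPU,CORE,SOCKET,NODE
# rows that share the same CORE value are the hardware threads of one core

This is only meant as a starting point for building the pinning discussed below.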
> >I can try to play with it; what I don't know is what the mapping logic should be.
> >
>
> Threads on the same core in the guest should map to threads on the same
> core in the host. Since there is no NUMA that should be enough to get
> the best performance. But even misconfiguration of this will not
> introduce lags in the system if it has 8 CPUs. So that's definitely not
> the root cause of the main problem, it just might be suboptimal.
>
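For illustration only, a minimal sketch of what such matched pinning could look like in the guest XML. It assumes, purely as an example, that the host's sibling thread pairs are (0,8), (1,9), (2,10), (3,11) and that the guest is given 4 cores with 2 threads each; the real cpuset values have to come from the actual host topology (e.g. from lscpu --extended), and the existing <cpu> mode/model should be kept, only the topology matters here:

<vcpu placement='static'>8</vcpu>
<cputune>
  <!-- guest vCPUs 0 and 1 form one core; pin them to the two threads of one host core -->
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='8'/>
  <vcpupin vcpu='2' cpuset='1'/>
  <vcpupin vcpu='3' cpuset='9'/>
  <vcpupin vcpu='4' cpuset='2'/>
  <vcpupin vcpu='5' cpuset='10'/>
  <vcpupin vcpu='6' cpuset='3'/>
  <vcpupin vcpu='7' cpuset='11'/>
</cputune>
<cpu>
  <topology sockets='1' cores='4' threads='2'/>
</cpu>

The point is simply that each guest core's two vCPUs land on the two threads of one host core, leaving the remaining host cores free for the host itself.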
> >>
> >> >>
> >> >> * I also seem to recall that Windows had some issues with systems that
> >> >> have too many cores. I'm not sure whether that was an issue with an
> >> >> edition difference or just with some older versions, or if it just did
> >> >> not show up in the task manager, but there was something that was
> >> >> fixed by using either more sockets or cores in the topology. This is
> >> >> probably not the issue for you though.
> >> >>
> >> >> >after trying a few ways to fix it, I've concluded that the issue might be related to the way the hdd is defined at the vm level.
> >> >> >here is the xml: https://bpa.st/MYTA
> >> >> >I assume that the hdd sitting on the SATA controller is causing the issue, but I'm not sure what the proper way to fix it is. Any ideas?
> >> >> >
> >> >>
> >> >> It looks like your disk is on SATA, but I don't see why that would be an
> >> >> issue. Passing the block device to QEMU as VirtIO shouldn't cause that
> >> >> much of a difference. Try measuring the speed of the disk on the host
> >> >> and then in the VM maybe. Is that SSD or NVMe? I presume that's not
> >> >> spinning rust, is it.
> >> >as seen, I have 3 drives: 2 cdroms on SATA and one hdd passed through as virtio. I read somewhere that if the controller of the virtio
> >> >device is SATA, then it doesn't use virtio optimally.
> >>
> >> Well it _might_ be slightly more beneficial to use virtio-scsi or even
> >> <disk type='block' device='lun'>, but I can't imagine that would make
> >> the system lag. I'm not that familiar with the details.
> >configure virtio-scsi and the SATA controller at the same time?
> >
>
> Yes, forgot that, sorry. Try virtio-scsi. You could also go farther
> and pass through the LUN or the whole HBA (if you don't need to access
> any other disk on it) to the VM. Try the information presented here:
>
> https://libvirt.org/formatdomain.html#usb-pci-scsi-devices
>
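As a rough sketch of what that could look like (not taken from the XML linked earlier; the source path and addresses are placeholders), a virtio-scsi controller plus a whole-LUN passthrough disk per the formatdomain page above:

<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <!-- placeholder: device='lun' needs an actual SCSI device node, e.g. /dev/disk/by-id/... -->
  <source dev='/dev/disk/by-id/scsi-EXAMPLE'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

For an ordinary virtio-scsi disk without LUN passthrough, the same controller is used but with device='disk' and bus='scsi' on the target.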
> >>
> >> >it is a spindle; NVMe drives are too expensive where I live. Frankly, I don't need lightning-fast boot: the other bare-metal machines running Windows on spindles
> >> >run it quite fast, and they aren't half as fast as this server.
> >> >
> >>
> >> That might actually be related. The guest might think it is a different
> >> type of disk and use completely suboptimal scheduling. This might
> >> actually be solved by passing it as <disk device='lun'..., but at this
> >> point I'm just guessing.
> >I'll look into that, thanks.
so, bottom line, you suggest the following:
1. remove the manual cpu pinning and let qemu sort that out.
2. add a virtio-scsi controller and connect the os hdd to it
3. pass the hdd via scsi passthrough rather than as a dev node
4. if I am able to do #3, there is no need to add device='lun' as it won't use the disk option
Dagg.
ANNOUNCE: Mailing list move complete
by Daniel P. Berrangé
This is an announcement to the effect that the mailing list move is now
complete. TL;DR the new list addresses are:
* announce(a)lists.libvirt.org (formerly libvirt-announce(a)redhat.com)
Low volume, announcements of releases and other important info
* users(a)lists.libvirt.org (formerly libvirt-users(a)redhat.com)
End user questions and discussions and collaboration
* devel(a)lists.libvirt.org (formerly libvir-list(a)redhat.com)
Patch submission for development of main project
* security(a)lists.libvirt.org (formerly libvir-security(a)redhat.com)
Submission of security sensitive bug reports
The online archive and membership mgmt interface is
https://lists.libvirt.org
In my original announcement[1] I mentioned that people would need to manually
re-subscribe. Due to a mixup in communications, our IT admins went ahead and
migrated across the existing entire subscriber base for all lists. Thus there
is NO need to re-subscribe to any of the lists. If you were doing filtering
of mail, you may need to update filters for the new list ID matches.
With the new list server, HyperKitty is providing the web interface. Thus
if you wish to interact with the lists entirely via the browser, this is now
possible. Note that it requires you to register for an account and set a
password, even if you are already a list subscriber.
If you mistakenly send to the old lists you should receive an auto-reply
about the moved destinations.
Note, we had some technical issues on Thursday/Friday, so if you sent
mails on those two days they probably will not have reached any lists,
and so you may wish to re-send them.
With regards,
Daniel
[1] https://listman.redhat.com/archives/libvirt-announce/2023-October/000650....
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
Activate storage during domain migration
by e-m@mailbox.org
Hi,
I have a block storage which I only want to be mounted on a single node.
I know that there are many possibilities for shared storage usage but I
want to know if the following is possible (using the API).
- Have a domain running on node-A
- Initialize a migration for that domain to node-B
- Run a hook or something just before the domain starts on node-B to:
- unmount storage on node-A
- mount/prepare storage on node-B
Thanks and best regards,
Etienne
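
Not an authoritative answer, but libvirt's hook mechanism (https://libvirt.org/hooks.html) may be what you are after: if an executable /etc/libvirt/hooks/qemu exists, libvirtd runs it with the guest name and an operation ('prepare', 'start', 'started', 'migrate', 'stopped', 'release', ...) as arguments, and on the destination host it is invoked before the domain actually starts. A rough, untested sketch, with the domain name, block device and mount point purely as placeholders:

#!/bin/sh
# /etc/libvirt/hooks/qemu -- hypothetical sketch, adapt before use
guest="$1"   # domain name
op="$2"      # operation: prepare, start, started, migrate, stopped, release, ...

if [ "$guest" = "mydomain" ]; then              # placeholder domain name
    case "$op" in
        prepare)
            # destination node: make the storage available before the domain starts
            mount /dev/disk/by-id/EXAMPLE /srv/vmdata ;;   # placeholder device and mount point
        release)
            # source node: domain has been torn down (e.g. after a successful migration)
            umount /srv/vmdata ;;
    esac
fi

Two caveats: libvirt does not coordinate the ordering between the two hosts for you, and hook scripts must not call back into libvirt, so whether "unmount on node-A strictly before mount on node-B" can be guaranteed this way is something you would need to verify.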
Libvirt
by Gk Gk
Hi All,
I am trying to collect memory, disk and network stats for a VM on a kvm host.
It seems that the statistics do not match what the OS inside the VM is
reporting. Why is there this discrepancy?
Is this a known bug in libvirt? Also, I heard that libvirt shows cumulative
figures for these measures ever since the VM was created. I also tested by
creating a new VM and comparing the stats without a reboot. Even in this
case, the stats don't agree. Can someone help me here please?
Thanks
Kumar
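
Not a full answer, but it may help to look at exactly which counters libvirt exposes and compare them field by field with what the guest reports; a quick way to dump them (the domain name is a placeholder):

virsh domstats --state --cpu-total --balloon --block --interface mydomain

As far as I know, the block and interface counters (block.<n>.rd.bytes, net.<n>.rx.bytes, ...) are cumulative since QEMU started or the device was attached, and the balloon/memory figures are host-side views (for example the memory actually used by the QEMU process), so they will not match tools like free, iostat or the task manager inside the guest one-to-one.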
help,virsh start vm failed
by 展荣臻(信泰)
Hello,all
I start vm failed as show below:
()[root@com1 tmp]# virsh start centos
error: Failed to start domain centos
error: Start job for unit machine-qemu\x2d1\x2dcentos.scope failed with 'failed'
At the same time, the error "2021-02-27 08:58:31.688+0000: 22: error : virSystemdCreateMachine:361 : Start job for unit machine-qemu\x2d4\x2dcentos.scope failed with 'failed'" appears in libvirt.log.
How can I dig into this issue? Thanks.
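
A few places that may be worth checking, assuming systemd is PID 1 on the host (the unit name below is copied from the error message):

# systemd-machined is the service that creates the machine-*.scope units for libvirt
journalctl -b -u systemd-machined
systemctl status 'machine-qemu\x2d1\x2dcentos.scope'
# a stale failed scope left over from an earlier run can block a new start
systemctl reset-failed

Raising the libvirtd log level may also reveal the full D-Bus error returned by machined.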
Can't start vm with enc backing files, No secret with id 'sec0' ?
by 18781374080
Hey, guys
I've been working on whether libvirt supports encrypted snapshots. Here are my versions of libvirt and qemu:
[root@xx ~]# libvirtd -V
libvirtd (libvirt) 4.5.0
[root@xx ~]# qemu-img -V
qemu-img version 2.12.0 (qemu-kvm-ev-2.12.0-33.1.el7_7.4)
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
1. Assign $MYSECRET to a libvirt secret using the secret-define and secret-set-value commands; $MYSECRET is in base64 format:
MYSECRET=`printf %s "123456" | base64`
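Presumably this step looked roughly like the following; the secret XML is my reconstruction, reusing the UUID that appears in the domain XML further down:

<secret ephemeral='no' private='yes'>
  <uuid>694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0</uuid>
  <usage type='volume'>
    <volume>/root/enc.qcow2</volume>
  </usage>
</secret>

virsh secret-define sec.xml
virsh secret-set-value 694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0 $MYSECRET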
2. Create a disk encrypted in LUKS format:
qemu-img create --object secret,id=sec0,data=$MYSECRET,format=base64 -f qcow2 -o encrypt.format=luks,encrypt.key-secret=sec0 enc.qcow2 20G
3. The encrypted disk is defined in the XML configuration file, as shown below. Then I successfully started the virtual machine.
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/root/enc.qcow2'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<encryption format='luks'>
<secret type='passphrase' uuid='694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0'/>
</encryption>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
4. According to the qemu documentation, an encrypted snap.qcow2 disk was created with enc.qcow2 as the backing file:
qemu-img create -f qcow2 -F qcow2 --object secret,id=sec0,data=$MYSECRET,format=base64 --object secret,id=sec1,data=$MYSECRET,format=base64 -o encrypt.format=luks,encrypt.key-secret=sec1 -b 'json:{"encrypt.key-secret": "sec0", "driver": "qcow2", "file": {"driver": "file", "filename": "/root/enc/enc.qcow2"}}' snap.qcow2
I used the same $MYSECRET as the password data for the disk. Here is the disk information for snap.qcow2
image: snap.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 480K
encrypted: yes
cluster_size: 65536
backing file: json:{"encrypt.key-secret": "sec0", "driver": "qcow2", "file": {"driver": "file", "filename": "/root//enc.qcow2"}}
backing file format: qcow2
Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    encrypt:
        ivgen alg: plain64
        hash alg: sha256
        cipher alg: aes-256
        uuid: ab0e3f87-35e7-40cb-9888-9fe9bb54e981
        format: luks
        cipher mode: xts
        slots:
            [0]:
                active: true
                iters: 115582
                key offset: 4096
                stripes: 4000
            [1]:
                active: false
                key offset: 262144
            [2]:
                active: false
                key offset: 520192
            [3]:
                active: false
                key offset: 778240
            [4]:
                active: false
                key offset: 1036288
            [5]:
                active: false
                key offset: 1294336
            [6]:
                active: false
                key offset: 1552384
            [7]:
                active: false
                key offset: 1810432
        payload offset: 2068480
        master key iters: 30085
    corrupt: false
5. Then I changed the XML configuration, as shown below, re-defined the virtual machine and started it again.
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/root/snap.qcow2'/>
<backingStore type='file'>
<format type='qcow2'/>
<source file='/root/enc.qcow2'/>
<backingStore/>
</backingStore>
<target dev='hda' bus='ide'/>
<encryption format='luks'>
<secret type='passphrase' uuid='694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0'/>
</encryption>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
Then the startup failed and the following error was thrown:
qemu-kvm: -drive file=/root/enc/vm/enc-snap.qcow2,encrypt.format=luks,encrypt.key-secret=ide0-0-0-luks-secret0,format=qcow2,if=none,id=drive-ide0-0-0: Could not open backing file: No secret with id 'sec0'
The secret with id 'sec0' referenced by the backing file could not be found; this is my problem.
Is there a problem with the way I implemented it, or does libvirt currently not support this?
Any tips or help will be appreciated. Looking forward to your reply. Thank you.
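
For what it's worth, here is a sketch of one possible approach; I am not certain libvirt 4.5.0 with qemu 2.12 can do this at all, since describing an encrypted backing chain explicitly is handled much better by newer libvirt releases (the ones that configure QEMU via -blockdev). The idea is to avoid embedding the 'sec0' secret id in the overlay's backing-file string altogether and let libvirt supply the secrets for both layers instead. Everything below is an assumption to verify, not a confirmed recipe.

Create the overlay with a plain backing path; -u skips opening the backing file, so no secret is needed for it, but the size must then be given explicitly:

qemu-img create -f qcow2 -u -b /root/enc.qcow2 -F qcow2 --object secret,id=sec1,data=$MYSECRET,format=base64 -o encrypt.format=luks,encrypt.key-secret=sec1 snap.qcow2 20G

Then describe both layers, each with its own <encryption>, in the domain XML:

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/root/snap.qcow2'>
    <encryption format='luks'>
      <secret type='passphrase' uuid='694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0'/>
    </encryption>
  </source>
  <backingStore type='file'>
    <format type='qcow2'/>
    <source file='/root/enc.qcow2'>
      <encryption format='luks'>
        <secret type='passphrase' uuid='694bdf38-214e-48d3-8c4c-9dbbcf0f5fa0'/>
      </encryption>
    </source>
  </backingStore>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

As far as I know, a user-supplied <backingStore> element is ignored on input by libvirt as old as 4.5.0, so an upgrade to a -blockdev capable libvirt/QEMU may be the real prerequisite here.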
18781374080
18781374080(a)163.com
[libvirt-users] how xml generated
by 李卓瑶
hi:
I created a domain with the virt-manager tool, and then an xxx.xml was generated in /etc/libvirt/qemu/.
So, my question is: how is the xxx.xml generated? What code in libvirt is involved?
Thanks!
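
For what it's worth: virt-manager builds the XML document itself and submits it to libvirt through the virDomainDefineXML API; libvirtd then parses it and writes its own persistent copy under /etc/libvirt/qemu/. The command-line equivalent looks roughly like this (file and domain names are placeholders):

# define a persistent domain from an XML file; libvirtd stores the parsed result under /etc/libvirt/qemu/
virsh define /tmp/mydomain.xml
# show the XML exactly as libvirt has regenerated and stored it
virsh dumpxml mydomain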
--
Have a good day
[libvirt-users] How to convert "device_model_version = "qemu-xen-traditional"" into a libvirt xml file
by hanyandong
I am using Xen-4.4.0, libvirt-1.2.9
My ubuntu.cfg is:
ubuntu10.cfg
bootloader = "/usr/local/lib/xen/boot/hvmloader"
builder="hvm"
memory = 512
name = "ubuntu"
vif = [ "type=ioemu,bridge=ovsbr0", "type=ioemu,bridge=ovsbr0","type=ioemu,bridge=ovsbr0","type=ioemu,bridge=ovsbr0",,"type=ioemu,bridge=ovsbr0"]
device_model_version = "qemu-xen-traditional"
If I do not add " device_model_version = "qemu-xen-traditional" " to ubuntu.cfg, I can only add four NICs to the VM.
If I add five or more, I get these errors:
libxl: error: libxl_dm.c:1371:device_model_spawn_outcome: domain 12 device model: spawn failed (rc=-3)
libxl: error: libxl_create.c:1186:domcreate_devmodel_started: device model did not start: -3
libxl: error: libxl_dm.c:1475:kill_device_model: Device Model already exited
If I add " device_model_version = "qemu-xen-traditional" " to ubuntu.cfg, I can apply 8 NICs to VM.
But, I want to create domU by virsh(libvirt), so How convert " device_model_version = "qemu-xen-traditional"" to xml file used by virsh?
and How many NIC can a VM have? what leads to this limit?
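
Not a definitive answer, but as far as I understand the libxl driver, the device model is selected from the <emulator> element of the domain XML: pointing it at the traditional qemu-dm binary shipped with your Xen build should give you qemu-xen-traditional, while the default upstream qemu-system-i386 gives qemu-xen. This is a sketch only; the binary path below is a guess based on your /usr/local Xen prefix, and I have not verified the behaviour against libvirt 1.2.9:

<domain type='xen'>
  ...
  <devices>
    <!-- placeholder path: use the qemu-dm installed by your Xen 4.4 build -->
    <emulator>/usr/local/lib/xen/bin/qemu-dm</emulator>
    ...
  </devices>
</domain>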
--
Best Regards,
yandong
1 year, 1 month