[FOSDEM] Call for participation: Virtualization and Cloud
infrastructure Room at FOSDEM 2024
by Piotr Kliczewski
We are excited to announce that the call for proposals is now open for the
Virtualization and Cloud infrastructure devroom at the upcoming FOSDEM
2024, to be hosted on February 3rd 2024.
This devroom is a collaborative effort, and is organized by dedicated folks
from projects such as OpenStack, Xen Project, KubeVirt, QEMU, KVM, and
Foreman. We invite everyone involved in these fields to submit a proposal
by December 8th, 2023.
About the Devroom
The Virtualization & IaaS devroom will feature sessions on open source
hypervisors and virtual machine managers such as Xen Project, KVM, bhyve,
and VirtualBox, as well as Infrastructure-as-a-Service projects such as
KubeVirt, Apache CloudStack, OpenStack, QEMU and OpenNebula.
This devroom will host presentations that focus on topics of shared
interest, such as KVM; libvirt; shared storage; virtualized networking;
cloud security; clustering and high availability; interfacing with multiple
hypervisors; hyperconverged deployments; and scaling across hundreds or
thousands of servers.
Presentations in this devroom will be aimed at developers working on these
platforms who are looking to collaborate and improve shared infrastructure
or solve common problems. We seek topics that encourage dialog between
projects and continued work post-FOSDEM.
Important Dates
Submission deadline: 8th December 2023
Acceptance notifications: 10th December 2023
Final schedule announcement: 15th December 2023
Devroom: 3rd February 2024
Submit Your Proposal
All submissions must be made via the Pretalx event planning site[1].
Pretalx is a new submission system, so you will need to create an account;
even if you submitted proposals for FOSDEM in previous years, you won't be
able to reuse your existing account.
During submission please make sure to select Virtualization and Cloud
infrastructure from the Track list. Please fill out all the required
fields, and provide a meaningful abstract and description of your proposed
session.
Submission Guidelines
We expect more proposals than we can possibly accept, so it is vitally
important that you submit your proposal on or before the deadline. Late
submissions are unlikely to be considered.
All presentation slots are 30 minutes, with 20 minutes planned for
presentations, and 10 minutes for Q&A.
All presentations will be recorded and made available under Creative
Commons licenses. In the Submission notes field, please indicate that you
agree that your presentation will be licensed under the CC-By-SA-4.0 or
CC-By-4.0 license and that you agree to have your presentation recorded.
For example:
"If my presentation is accepted for FOSDEM, I hereby agree to license all
recordings, slides, and other associated materials under the Creative
Commons Attribution Share-Alike 4.0 International License.
Sincerely,
<NAME>."
In the Submission notes field, please also confirm that if your talk is
accepted, you will be able to attend FOSDEM and deliver your presentation.
We will not consider proposals from prospective speakers who are unsure
whether they will be able to secure funds for travel and lodging to attend
FOSDEM. (Sadly, we are not able to offer travel funding for prospective
speakers.)
Code of Conduct
Following the release of the updated code of conduct for FOSDEM, we'd like
to remind all speakers and attendees that all presentations and
discussions in our devroom are held under the guidelines set out in the
CoC, and we expect attendees, speakers, and volunteers to follow it at all
times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about the
CoC or wish to have one of the devroom organizers review your presentation
slides or any other content for CoC compliance, please email us and we will
do our best to assist you.
Questions?
If you have any questions about this devroom, please send your questions to
our devroom mailing list. You can also subscribe to the list to receive
updates about important dates, session announcements, and to connect with
other attendees.
See you all at FOSDEM!
[1] https://pretalx.fosdem.org/fosdem-2024/cfp
[2] virtualization-devroom-manager at fosdem.org
Passing through a YubiKey to a Windows VM for physical touch
activation
by Michael Kjörling
I have a need to pass through a YubiKey to a Windows (10) VM guest
such that Windows in the guest will let me use it with physical touch
activation for 2FA.
For those times, I am physically at the VM host, so I don't need
_remote_ redirection into the guest, and I'm fine with plugging and
unplugging the YubiKey physically on an as-needed basis.
If I simply redirect the USB device through the virt-manager GUI, my
experience is that it works unreliably at best, and often not at all.
Searching the web hasn't helped.
Does anyone have a recipe for that to work _reliably_?
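One approach that sometimes behaves more reliably than ad-hoc GUI redirection is a static <hostdev> USB passthrough stanza in the domain XML, matching the key by vendor/product ID. A minimal sketch, not a guaranteed fix: 0x1050 is Yubico's USB vendor ID, but the product ID below is an assumption, so verify both with lsusb on your host first:

```xml
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <!-- Match by vendor/product ID; check `lsusb` output for your key's IDs -->
    <vendor id='0x1050'/>
    <product id='0x0407'/>
  </source>
</hostdev>
```

With managed='yes', libvirt detaches the device from the host and attaches it to the guest when the domain starts, which may interact better with plug/unplug cycles than live redirection.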
--
Michael Kjörling 🔗 https://michael.kjorling.se
“Remember when, on the Internet, nobody cared that you were a dog?”
Clocks and Timers
by Simon Fairweather
Hi, if I have the following XML section, does this imply that presence
defaults to on for rtc and pit?
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
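For comparison, here is a sketch of the same configuration with presence spelled out explicitly; libvirt's documented behaviour is that listing a <timer> element implies it is present unless present='no' is given:

```xml
<clock offset='utc'>
  <!-- Listing a timer implies present='yes' unless stated otherwise -->
  <timer name='rtc' present='yes' tickpolicy='catchup'/>
  <timer name='pit' present='yes' tickpolicy='delay'/>
  <timer name='hpet' present='no'/>
</clock>
```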
Are there any reference guides to the best timer setup for different OS
types, apart from the domain XML documentation?
Regards
Simon.
Re: hdd kills vm
by daggs
> Sent: Thursday, October 26, 2023 at 9:50 AM
> From: "Martin Kletzander" <mkletzan(a)redhat.com>
> To: "daggs" <daggs(a)gmx.com>
> Cc: libvir-list(a)redhat.com
> Subject: Re: hdd kills vm
>
> On Wed, Oct 25, 2023 at 03:06:55PM +0200, daggs wrote:
> >> Sent: Tuesday, October 24, 2023 at 5:28 PM
> >> From: "Martin Kletzander" <mkletzan(a)redhat.com>
> >> To: "daggs" <daggs(a)gmx.com>
> >> Cc: libvir-list(a)redhat.com
> >> Subject: Re: hdd kills vm
> >>
> >> On Mon, Oct 23, 2023 at 04:59:08PM +0200, daggs wrote:
> >> >Greetings Martin,
> >> >
> >> >> Sent: Sunday, October 22, 2023 at 12:37 PM
> >> >> From: "Martin Kletzander" <mkletzan(a)redhat.com>
> >> >> To: "daggs" <daggs(a)gmx.com>
> >> >> Cc: libvir-list(a)redhat.com
> >> >> Subject: Re: hdd kills vm
> >> >>
> >> >> On Fri, Oct 20, 2023 at 02:42:38PM +0200, daggs wrote:
> >> >> >Greetings,
> >> >> >
> >> >> >I have a windows 11 vm running on my Gentoo using libvirt (9.8.0) + qemu (8.1.2), I'm passing almost all available resources to the vm
> >> >> >(all 16 cpus, 31 out of 32 GB, nVidia gpu is pt), but the performance is not good, system lags, takes long time to boot.
> >> >>
> >> >> There are couple of things that stand out to me in your setup and I'll
> >> >> assume the host has one NUMA node with 8 cores, each with 2 threads as,
> >> >> just like you set it up in the guest XML.
> >> >thats correct, see:
> >> >$ lscpu | grep -i numa
> >> >NUMA node(s): 1
> >> >NUMA node0 CPU(s): 0-15
> >> >
> >> >however:
> >> >$ dmesg | grep -i numa
> >> >[ 0.003783] No NUMA configuration found
> >> >
> >> >can that be the reason?
> >> >
> >>
> >> no, this is fine, 1 NUMA node is not a NUMA, technically, so that's
> >> perfectly fine.
> >thanks for clarifying it for me
> >
> >>
> >> >>
> >> >> * When you give the guest all the CPUs the host has there is nothing
> >> >> left to run the host tasks. You might think that there "isn't
> >> >> anything running", but there is, if only your init system, the kernel
> >> >> and the QEMU which is emulating the guest. This is definitely one of
> >> >> the bottlenecks.
> >> >I've tried with 12 out of 16, same behavior.
> >> >
> >> >>
> >> >> * The pinning of vCPUs to CPUs is half-suspicious. If you are trying to
> >> >> make vCPU 0 and 1 be threads on the same core and on the host the
> >> >> threads are represented as CPUs 0 and 8, then that's fine. If that is
> >> >> just copy-pasted from somewhere, then it might not reflect the current
> >> >> situation and can be source of many scheduling issues (even once the
> >> >> above is dealt with).
> >> >I found a site that does it for you, if it is wrong, can you point me to a place I can read about it?
> >> >
> >>
> >> Just check what the topology is on the host and try to match it with the
> >> guest one. If in doubt, then try it without the pinning.
> >I can try to play with it, what I don't know is what should be the mapping logic?
> >
>
> Threads on the same core in the guest should map to threads on the same
> core in the host. Since there is no NUMA that should be enough to get
> the best performance. But even misconfiguration of this will not
> introduce lags in the system if it has 8 CPUs. So that's definitely not
> the root cause of the main problem, it just might be suboptimal.
>
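The advice above, matching guest thread siblings to host thread siblings, can be checked on the host with standard tools; a quick sketch:

```shell
# Show each logical CPU with its core and socket; logical CPUs that
# share a CORE value are thread siblings, and vCPUs pinned to them
# should also be siblings in the guest topology.
lscpu --extended=CPU,CORE,SOCKET

# Or query sysfs directly for CPU 0's sibling set:
cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
```

If the sibling pairs are, say, (0,8), (1,9), ..., then pinning guest vCPUs 0 and 1 to host CPUs 0 and 8 matches the topology; pinning them to 0 and 1 would put two guest "siblings" on different physical cores.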
> >>
> >> >>
> >> >> * I also seem to recall that Windows had some issues with systems that
> >> >> have too many cores. I'm not sure whether that was an issue with an
> >> >> edition difference or just with some older versions, or if it just did
> >> >> not show up in the task manager, but there was something that was
> >> >> fixed by using either more sockets or cores in the topology. This is
> >> >> probably not the issue for you though.
> >> >>
> >> >> >after trying a few ways to fix it, I've concluded that the issue might be related to the way the hdd is defined at the vm level.
> >> >> >here is the xml: https://bpa.st/MYTA
> >> >> >I assume that the hdd sits on the sata ctrl causing the issue but I'm not sure what is the proper way to fix it, any ideas?
> >> >> >
> >> >>
> >> >> It looks like your disk is on SATA, but I don't see why that would be an
> >> >> issue. Passing the block device to QEMU as VirtIO shouldn't cause that
> >> >> much of a difference. Try measuring the speed of the disk on the host
> >> >> and then in the VM maybe. Is that SSD or NVMe? I presume that's not
> >> >> spinning rust, is it.
> >> >as seen, I have 3 drives, 2 cdroms as sata and one hdd pt as virtio, I read somewhere that if the controller of the virtio
> >> >device is sata, then it doesn't use virtio optimally.
> >>
> >> Well it _might_ be slightly more beneficial to use virtio-scsi or even
> >> <disk type='block' device='lun'>, but I can't imagine that would make
> >> the system lag. I'm not that familiar with the details.
> >configure virtio-scsi and sata-scai at the same time?
> >
>
> Yes, forgot that, sorry. Try virtio-scsi. You could also go farther
> and pass through the LUN or the whole HBA (if you don't need to access
> any other disk on it) to the VM. Try the information presented here:
>
> https://libvirt.org/formatdomain.html#usb-pci-scsi-devices
>
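The virtio-scsi route suggested above maps to domain XML roughly as in this sketch; the controller index, target address, and the /dev/sdb source path are assumptions to be replaced with your actual values (the formatdomain page linked above is the authoritative reference):

```xml
<controller type='scsi' index='0' model='virtio-scsi'/>
<disk type='block' device='lun'>
  <driver name='qemu' type='raw'/>
  <!-- Hypothetical host block device; substitute your disk -->
  <source dev='/dev/sdb'/>
  <target dev='sda' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
```

device='lun' passes SCSI commands through to the device, so the guest sees something much closer to the real disk than with a plain virtio-blk <disk device='disk'>.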
> >>
> >> >it is a spindle, nvmes are too expensive where I live, frankly, I don't need lightning fast boot, the other BM machines running windows on spindle
> >> >run it quite fast and they aren't half as fast as this server
> >> >
> >>
> >> That might actually be related. The guest might think it is a different
> >> type of disk and use completely suboptimal scheduling. This might
> >> actually be solved by passing it as <disk device='lun'..., but at this
> >> point I'm just guessing.
> >I'll look into that, thanks.
so bottom line, you suggest the following:
1. remove the manual cpu pinning and let qemu sort it out.
2. add a virtio-scsi controller and connect the os hdd to it.
3. pass the hdd via scsi passthrough and not via the dev node.
4. if I'm able to do #3, there is no need to add device='lun' as it won't use the disk option.
Dagg.
ANNOUNCE: Mailing list move complete
by Daniel P. Berrangé
This is an announcement to the effect that the mailing list move is now
complete. TL;DR the new list addresses are:
* announce(a)lists.libvirt.org (formerly libvirt-announce(a)redhat.com)
Low volume, announcements of releases and other important info
* users(a)lists.libvirt.org (formerly libvirt-users(a)redhat.com)
End user questions and discussions and collaboration
* devel(a)lists.libvirt.org (formerly libvir-list(a)redhat.com)
Patch submission for development of main project
* security(a)lists.libvirt.org (formerly libvir-security(a)redhat.com)
Submission of security sensitive bug reports
The online archive and membership mgmt interface is
https://lists.libvirt.org
In my original announcement[1] I mentioned that people would need to manually
re-subscribe. Due to a mixup in communications, our IT admins went ahead and
migrated the entire existing subscriber base for all lists. Thus there
is NO need to re-subscribe to any of the lists. If you were filtering
your mail, you may need to update your filters to match the new list IDs.
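For example, a filter matching on the new List-Id headers might look like this Sieve sketch; the exact List-Id values are assumptions derived from the new addresses above, so check the headers of an actual list mail before relying on them:

```sieve
require ["fileinto"];

# File mail from the renamed libvirt lists by their new List-Id
if header :contains "list-id" "devel.lists.libvirt.org" {
    fileinto "libvirt-devel";
} elsif header :contains "list-id" "users.lists.libvirt.org" {
    fileinto "libvirt-users";
}
```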
With the new list server, HyperKitty is providing the web interface. Thus
if you wish to interact with the lists entirely via the browser, this is now
possible. Note that it requires you to register for an account and set a
password, even if you are already a list subscriber.
If you mistakenly send to the old lists you should receive an auto-reply
about the moved destinations.
Note, we had some technical issues on Thursday/Friday, so if you sent
mails on those two days they probably will not have reached any lists,
and so you may wish to re-send them.
With regards,
Daniel
[1] https://listman.redhat.com/archives/libvirt-announce/2023-October/000650....
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|