[FOSDEM][CFP] Virtualization & IaaS Devroom
by Piotr Kliczewski
We are excited to announce that the
call for proposals is now open for the Virtualization & IaaS devroom at the
upcoming FOSDEM 2023, to be hosted on February 4th, 2023.
This devroom is a collaborative effort, and is organized by dedicated folks
from projects such as OpenStack, Xen Project, KubeVirt, QEMU, KVM, and
Foreman. We invite everyone involved in these fields to submit proposals by
December 10th, 2022.
About the Devroom
The Virtualization & IaaS devroom will feature sessions on open source
hypervisors and virtual machine managers such as Xen Project, KVM, bhyve,
and VirtualBox, as well as Infrastructure-as-a-Service projects such as
KubeVirt, Apache CloudStack, OpenStack, QEMU, and OpenNebula.
This devroom will host presentations that focus on topics of shared
interest, such as KVM; libvirt; shared storage; virtualized networking;
cloud security; clustering and high availability; interfacing with multiple
hypervisors; hyperconverged deployments; and scaling across hundreds or
thousands of servers.
Presentations in this devroom will be aimed at developers working on these
platforms who are looking to collaborate and improve shared infrastructure
or solve common problems. We seek topics that encourage dialog between
projects and continued work post-FOSDEM.
Important Dates
Submission deadline: 10th December 2022
Acceptance notifications: 15th December 2022
Final schedule announcement: 20th December 2022
Devroom: First half of 4th February 2023
Submit Your Proposal
All submissions must be made via the Pentabarf event planning site[1]. If
you have not used Pentabarf before, you will need to create an account. If
you submitted proposals for FOSDEM in previous years, you can use your
existing account.
After creating the account, select Create Event to start the submission
process. Make sure to select Virtualization and IaaS devroom from the Track
list. Please fill out all the required fields, and provide a meaningful
abstract and description of your proposed session.
Submission Guidelines
We expect more proposals than we can possibly accept, so it is vitally
important that you submit your proposal on or before the deadline. Late
submissions are unlikely to be considered.
All presentation slots are 30 minutes, with 20 minutes planned for
presentations, and 10 minutes for Q&A.
All presentations will be recorded and made available under Creative
Commons licenses. In the Submission notes field, please indicate that you
agree that your presentation will be licensed under the CC-By-SA-4.0 or
CC-By-4.0 license and that you agree to have your presentation recorded.
For example:
"If my presentation is accepted for FOSDEM, I hereby agree to license all
recordings, slides, and other associated materials under the Creative
Commons Attribution Share-Alike 4.0 International License. Sincerely,
<NAME>."
In the Submission notes field, please also confirm that if your talk is
accepted, you will be able to attend FOSDEM and deliver your presentation.
We will not consider proposals from prospective speakers who are unsure
whether they will be able to secure funds for travel and lodging to attend
FOSDEM. (Sadly, we are not able to offer travel funding for prospective
speakers.)
Speaker Mentoring Program
Mentored presentations will have 25-minute slots: 20 minutes for the
presentation and 5 minutes for questions.
The number of newcomer session slots is limited, so we will probably not be
able to accept all applications.
You must submit your talk and abstract to apply for the mentoring program;
our mentors are volunteering their time and will happily provide feedback,
but they won't write your presentation for you!
If you are experiencing problems with Pentabarf (the proposal submission
interface), or have other questions, you can email our devroom mailing
list[2] and we will try to help you.
How to Apply
In addition to agreeing to video recording and confirming that you can
attend FOSDEM in case your session is accepted, please write "speaker
mentoring program application" in the "Submission notes" field, and list
any prior speaking experience or other relevant information for your
application.
Code of Conduct
Following the release of the updated code of conduct for FOSDEM, we'd like
to remind all speakers and attendees that all of the presentations and
discussions in our devroom are held under the guidelines set in the CoC, and
we expect attendees, speakers, and volunteers to follow the CoC at all
times.
If you submit a proposal and it is accepted, you will be required to
confirm that you accept the FOSDEM CoC. If you have any questions about the
CoC or wish to have one of the devroom organizers review your presentation
slides or any other content for CoC compliance, please email us and we will
do our best to assist you.
Call for Volunteers
We are also looking for volunteers to help run the devroom. We need
assistance keeping time for speakers and helping with video in the
devroom. Please contact the devroom mailing list[2] for more information.
Questions?
If you have any questions about this devroom, please send them to our
devroom mailing list[2]. You can also subscribe to the list to receive
updates about important dates, session announcements, and to connect with
other attendees.
See you all at FOSDEM!
[1] https://penta.fosdem.org/submission/FOSDEM23
[2] iaas-virt-devroom at lists.fosdem.org
Serial traffic hoses virtual machine interactive performance?
by Lars Kellogg-Stedman
I have a pair of virtual machines connected via a serial link. Machine
`node0.virt` is configured like this:
<serial type="unix">
<source mode="bind" path="/tmp/serial0"/>
<target type="isa-serial" port="1">
<model name="isa-serial"/>
</target>
<alias name="serial1"/>
</serial>
And machine `node1.virt` is configured like this:
<serial type="unix">
<source mode="connect" path="/tmp/serial0"/>
<target type="isa-serial" port="1">
<model name="isa-serial"/>
</target>
<alias name="serial1"/>
</serial>
If I have any sort of file transfer running over the serial link
(e.g., a zmodem file transfer over raw serial, or an `scp` over a SLIP
or PPP connection), the interactive performance of the *receiving*
machine (accessed via the console or over a network connection) tanks
-- it becomes almost unusable until the file transfer completes (or is
cancelled).
What's going on there, and is there a way to improve the performance?
(Same behavior when connected using a pty device instead of a unix
socket.)
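For reference, a minimal sketch of that pty variant (my assumption of the
equivalent config; libvirt allocates the host pty automatically):

<serial type="pty">
  <!-- host side is an auto-allocated pseudo-terminal, e.g. /dev/pts/N -->
  <target type="isa-serial" port="1">
    <model name="isa-serial"/>
  </target>
</serial>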
--
Lars Kellogg-Stedman <lars(a)redhat.com> | larsks @ {irc,twitter,github}
http://blog.oddbit.com/ | N1LKS
Re: Predictable and consistent net interface naming in guests
by Igor Mammedov
On Thu, 3 Nov 2022 00:13:16 +0200
Amnon Ilan <ailan(a)redhat.com> wrote:
> On Wed, Nov 2, 2022 at 6:47 PM Laine Stump <laine(a)redhat.com> wrote:
>
> > On 11/2/22 11:58 AM, Igor Mammedov wrote:
> > > On Wed, 2 Nov 2022 15:20:39 +0000
> > > Daniel P. Berrangé <berrange(a)redhat.com> wrote:
> > >
> > >> On Wed, Nov 02, 2022 at 04:08:43PM +0100, Igor Mammedov wrote:
> > >>> On Wed, 2 Nov 2022 10:43:10 -0400
> > >>> Laine Stump <laine(a)redhat.com> wrote:
> > >>>
> > >>>> On 11/1/22 7:46 AM, Igor Mammedov wrote:
> > >>>>> On Mon, 31 Oct 2022 14:48:54 +0000
> > >>>>> Daniel P. Berrangé <berrange(a)redhat.com> wrote:
> > >>>>>
> > >>>>>> On Mon, Oct 31, 2022 at 04:32:27PM +0200, Edward Haas wrote:
> > >>>>>>> Hi Igor and Laine,
> > >>>>>>>
> > >>>>>>> I would like to revive a 2-year-old discussion [1] about
> > >>>>>>> consistent network interfaces in the guest.
> > >>>>>>>
> > >>>>>>> That discussion mentioned that a guest PCI address may change
> > >>>>>>> in two cases:
> > >>>>>>> - The PCI topology changes.
> > >>>>>>> - The machine type changes.
> > >>>>>>>
> > >>>>>>> Usually, the machine type is not expected to change, especially
> > >>>>>>> if one wants to allow migrations between nodes.
> > >>>>>>> I would hope to argue this should not be problematic in
> > >>>>>>> practice, because guest images would be made for a specific
> > >>>>>>> machine type.
> > >>>>>>>
> > >>>>>>> Regarding the PCI topology, I am not sure I understand what
> > >>>>>>> changes need to occur to the domxml for a defined guest PCI
> > >>>>>>> address to change.
> > >>>>>>> The only thing that I can think of is a scenario where
> > >>>>>>> hotplug/unplug is used, but even then I would expect existing
> > >>>>>>> devices to preserve their PCI address and the plugged/unplugged
> > >>>>>>> device to have a reserved address managed by the one acting on
> > >>>>>>> it (the management system).
> > >>>>>>>
> > >>>>>>> Could you please help clarify in which scenarios the PCI
> > >>>>>>> topology can cause a mess to the naming of interfaces in the
> > >>>>>>> guest?
> > >>>>>>>
> > >>>>>>> Are there any plans to add the acpi_index support?
> > >>>>>>
> > >>>>>> This was implemented a year & a half ago
> > >>>>>>
> > >>>>>> https://libvirt.org/formatdomain.html#network-interfaces
> > >>>>>>
> > >>>>>> though due to QEMU limitations this only works for the old
> > >>>>>> i440fx chipset, not Q35 yet.
> > >>>>>
> > >>>>> Q35 should work partially too. In its case acpi-index support
> > >>>>> is limited to hotplug enabled root-ports and PCIe-PCI bridges.
> > >>>>> One also has to enable ACPI PCI hotplug (it's enabled by default
> > >>>>> on recent machine types) for it to work (i.e. it's not supported
> > >>>>> in native PCIe hotplug mode).
> > >>>>>
> > >>>>> So if mgmt can put nics on root-ports/bridges, then acpi-index
> > >>>>> should just work on Q35 as well.
> > >>>>
> > >>>> With only a few exceptions (e.g. the first ich9 audio device, which is
> > >>>> placed directly on the root bus at 00:1B.0 because that is where the
> > >>>> ich9 audio device is located on actual Q35 hardware), libvirt will
> > >>>> automatically put all PCI devices (including network interfaces) on a
> > >>>> pcie-root-port.
> > >>>>
> > >>>> After seeing reports that "acpi index doesn't work with Q35
> > >>>> machinetypes" I just assumed that was correct and didn't try it. But
> > >>>> after seeing the "should work partially" statement above, I tried it
> > >>>> just now and an <interface> of a Q35 guest that had its PCI address
> > >>>> auto-assigned by libvirt (and so was placed on a pcie-root-port) and
> > >>>> had <acpi index='4'/> was given the name "eno4". So what exactly is it
> > >>>> that *doesn't* work?
> > >>>
> > >>> From QEMU side:
> > >>> acpi-index requires:
> > >>> 1. acpi pci hotplug enabled (which is the default on relatively
> > >>>    new q35 machine types)
> > >>> 2. hotpluggable pci bus (root-port, various pci bridges)
> > >>> 3. NIC can be cold- or hotplugged; the guest should pick up the
> > >>>    acpi-index of the device currently plugged into the slot
> > >>> what doesn't work:
> > >>> 1. device attached to host-bridge directly (work in progress)
> > >>>    (q35)
> > >>> 2. devices attached to any PXB port and any hierarchy hanging off
> > >>>    it (there are no plans to make it work)
> > >>>    (q35, pc)
> > >>
> > >> I'd say this is still relatively important, as the PXBs are needed
> > >> to create a NUMA placement aware topology for guests, and I'd say it
> > >> is undesirable to lose acpi-index if a guest is updated to be NUMA
> > >> aware, or if a guest image can be deployed in either normal or NUMA
> > >> aware setups.
...
> How big of a project would it be to enable ACPI-indexing/hotplug with PXB?
> Since native PCI was improved, we can still compromise on switching to
> native-PCI-hotplug when PXB is required (and no fixed indexing)
My guesstimate would be it's not terribly difficult.
Maybe we could even marry native hotplug & acpi-index after the latter is
decoupled from ACPI PCI hotplug as much as possible.
>
> Thanks,
> Amnon
>
>
>
> >
> > Anyway, it sounds like (*within the confines of how libvirt constructs
> > the PCI topology*) we actually have functional parity of acpi-index
> > between 440fx and Q35.
> >
> >
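For reference, a minimal sketch of the kind of configuration Laine
describes above (the network name, device model, and index value are
illustrative; see the libvirt link earlier in the thread for the
documented syntax):

<interface type="network">
  <source network="default"/>
  <model type="virtio"/>
  <!-- acpi index gives the guest a stable interface name (here "eno4"),
       independent of the PCI address libvirt auto-assigns -->
  <acpi index="4"/>
</interface>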
Predictable and consistent net interface naming in guests
by Edward Haas
Hi Igor and Laine,
I would like to revive a 2-year-old discussion [1] about consistent network
interfaces in the guest.
That discussion mentioned that a guest PCI address may change in two cases:
- The PCI topology changes.
- The machine type changes.
Usually, the machine type is not expected to change, especially if one
wants to allow migrations between nodes.
I would hope to argue this should not be problematic in practice, because
guest images would be made for a specific machine type.
Regarding the PCI topology, I am not sure I understand what changes
need to occur to the domxml for a defined guest PCI address to change.
The only thing that I can think of is a scenario where hotplug/unplug is
used, but even then I would expect existing devices to preserve their PCI
address and the plugged/unplugged device to have a reserved address managed
by the one acting on it (the management system).
Could you please help clarify in which scenarios the PCI topology can cause
a mess to the naming of interfaces in the guest?
Are there any plans to add the acpi_index support?
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1874096#c15
Thank you,
Edy.
DOS
by Tomas By
Hi all,
I am trying to set up an `isolated virtual network' with two or more
MS-DOS guests, all on one single Linux box.
The aim is for them to share a disk (or just one directory on one
disk).
As I could not get it to work with IPX, I am now trying TCP/IP. However,
I suspect the MS tools are still attempting to use IPX as well.
Is there some simple solution that I am missing?
Does libvirt support IPX at all?
Any other easy way to do DOS file sharing?
(I believe I can use any version of DOS, and any sort of tool as long
as I get a `remote' directory available as some drive letter.)
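For context, a minimal sketch of an isolated network definition (names and
addresses are illustrative): libvirt has no IPX-specific features, but an
isolated network is just a Linux bridge underneath, so it should carry
non-IP Ethernet traffic such as IPX between guests on the same network.

<network>
  <name>isolated</name>
  <bridge name="virbr1"/>
  <!-- no <forward> element: traffic stays between the guests and host -->
  <ip address="192.168.100.1" netmask="255.255.255.0"/>
</network>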
/Tomas