DNS forwarding for guest domains on isolated network
by Jörg Kastning
Hi @all,
I'm having trouble realizing my use case and hope somebody can help me.
# Use case
For a home lab I want to deploy several guest domains. These domains
must not have a direct or NAT connection to the internet or my LAN. They
should only be able to reach my LAN and the internet through a proxy.
# What I've done
I've created the following virtual switch in isolated mode:
$ sudo virsh net-dumpxml private1
<network connections='3'>
<name>private1</name>
<uuid>THE-UUID</uuid>
<bridge name='virbr1' stp='on' delay='0'/>
<mac address='DE:AD:BE:EF:FF:FF'/>
<domain name='private1'/>
<ip address='192.168.100.1' netmask='255.255.255.0'>
<dhcp>
<range start='192.168.100.128' end='192.168.100.254'/>
</dhcp>
</ip>
</network>
I've set up a guest domain that serves as a proxy, as well as several other guests.
# My issue
Name resolution for *.private1 works fine on this network, but I'm not
able to resolve outside domains like github.com.
My understanding is that libvirt forwards DNS requests to the host's
nameservers configured in /etc/resolv.conf whenever the dnsmasq instance
for the virtual network cannot resolve the name itself.
My guess is that this doesn't work in my setup because the virtual
switch is in isolated mode. Is that right?
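If that's the cause, one idea I'm considering is to point the network's
dnsmasq at a forwarder that is reachable on the isolated network itself.
A minimal sketch using libvirt's <dns> forwarder element, assuming the
proxy guest also runs a DNS forwarder and that 192.168.100.2 is its
(hypothetical) address:

  <network>
    <name>private1</name>
    ...
    <dns>
      <!-- send queries dnsmasq cannot answer to the proxy guest -->
      <forwarder addr='192.168.100.2'/>
    </dns>
  </network>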
# My questions
* What can I do to achieve my use case described above?
* Is it possible to use the isolated mode here or do I have to use a
different mode?
It's important that the guest domains can only connect to the internet
through the proxy.
Regards,
Joerg
Upgrade CentOS 7 to 8, error: network is already in use by interface
by Filip Hruska
Hi,
I've been trying to migrate some of my CentOS 7 KVM hypervisors to
CentOS 8, and I have encountered the following issue while trying to
load my network config:
virsh:
error: Failed to start network test1
error: internal error: Network is already in use by interface virbr2
journalctl:
error : networkCheckRouteCollision:123 : internal error: Network is
already in use by interface virbr2
I use the following network definitions, which are a bit non-standard;
however, they work perfectly on CentOS 7:
<network>
<name>test1</name>
<forward mode='open'/>
<bridge name='virbr1' stp='off' delay='0'/>
<mac address='52:54:00:c1:a7:7c'/>
<dns enable='no'/>
<ip address='10.0.0.1' netmask='255.255.255.255'></ip>
<route address='192.168.1.22' prefix='32' gateway='10.0.0.1'/>
</network>
<network>
<name>test2</name>
<forward mode='open'/>
<bridge name='virbr2' stp='off' delay='0'/>
<mac address='52:54:00:54:09:3c'/>
<dns enable='no'/>
<ip address='10.0.0.1' netmask='255.255.255.255'></ip>
<route address='192.168.1.33' prefix='32' gateway='10.0.0.1'/>
</network>
My question is, why is the behaviour different across CentOS releases,
despite the libvirt versions apparently matching? Output from both
systems is the same:
# virsh version
Compiled against library: libvirt 4.5.0
Using library: libvirt 4.5.0
Using API: QEMU 4.5.0
Running hypervisor: QEMU 2.12.0
What would be the best approach to patching out this check, so that I can
continue using my network config?
Just to summarize the expected output:
- virbr1 and virbr2 get created, each one with an address 10.0.0.1/32.
- Two routes get created:
192.168.1.22 via 10.0.0.1 dev virbr1
192.168.1.33 via 10.0.0.1 dev virbr2
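In case it helps with debugging, my understanding is that the failing
check (networkCheckRouteCollision) parses the host's kernel routing
table, so comparing that table on both systems might show what differs.
A sketch (plain shell, nothing libvirt-specific):

  # raw table that the collision check reads
  cat /proc/net/route
  # human-readable view of the routes involved
  ip route show | grep -E 'virbr|10\.0\.0\.1'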
Thanks,
Filip Hruska
consume existing tap device when libvirt / qemu run as different users
by Miguel Duarte de Mora Barroso
Hello,
I'm having some doubts about consuming an existing - already
configured - tap device from libvirt (with the `managed='no'` attribute
set).
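For context, the configuration I'm referring to is an <interface
type='ethernet'> with an unmanaged target device; a minimal sketch
(tap0 is a placeholder name):

  <interface type='ethernet'>
    <!-- tap0 was created and configured outside of libvirt -->
    <target dev='tap0' managed='no'/>
    <model type='virtio'/>
  </interface>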
In KubeVirt, we want to have the consumer side of the tap device run
without the NET_ADMIN capability, which requires the UID / GID of the
tap creator / opener to match, as per the kernel code in [0]. As such,
we create the tap device (with the qemu user / group on behalf of
qemu), which will ultimately be the tap consumer.
This leads me to the question: why does libvirt open the tap device and
call `ioctl(..., TUNSETIFF, ...)` on it when the device already exists
- [1] & [2]? Why can't the tap device (already configured) be left
alone, letting qemu consume it?
The above is problematic for KubeVirt, since our setup currently has
libvirt running as root (while qemu runs as a different user), which
is preventing us from removing NET_ADMIN: the tap creator's and
consumer's UIDs cannot match when libvirt and qemu run as different users.
Thanks in advance for your time,
Miguel
[0] - https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/d...
[1] - https://github.com/libvirt/libvirt/blob/99a1cfc43889c6d425a64013a12b234dd...
[2] - https://github.com/libvirt/libvirt/blob/v6.0.0/src/util/virnetdevtap.c#L274
Set hostname of guest during installation time
by john doe
Hi,
I would like to set the hostname when installing a guest; with the below
command, the hostname is not set to 'try06' in the guest:
virt-install --name=try06 --graphic none --pxe --network bridge=virbr0
How can I set the hostname of the guest during installation time?
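One approach I could imagine (an untested sketch; the URL and kickstart
contents are placeholders) is to use a --location install instead of
--pxe and inject a kickstart file that sets the hostname:

  virt-install --name=try06 --graphics none \
    --location http://mirror.example.com/centos/8/BaseOS/x86_64/os/ \
    --initrd-inject=ks.cfg \
    --extra-args 'inst.ks=file:/ks.cfg' \
    --network bridge=virbr0

where ks.cfg contains, among the usual directives:

  network --hostname=try06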
I really appreciate the support I'm getting in here; I'm fairly new to
libvirt.
--
John Doe
Libvirt driver iothread property for virtio-scsi disks
by Nir Soffer
The docs[1] say:
- The optional iothread attribute assigns the disk to an IOThread as defined by
the range for the domain iothreads value. Multiple disks may be assigned to
the same IOThread and are numbered from 1 to the domain iothreads value.
Available for a disk device target configured to use "virtio" bus and "pci"
or "ccw" address types. Since 1.2.8 (QEMU 2.1)
Does it mean that virtio-scsi disks do not use iothreads?
I'm experiencing horrible performance using nested VMs (up to 2 levels of
nesting) when accessing NFS storage running on one of the VMs. The NFS
server is using a SCSI disk.
My theory is:
- Writing to NFS server is very slow (too much nesting, slow disk)
- Not using iothreads (because we don't use virtio?)
- Guest CPU is blocked by slow I/O
Does this make sense?
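For comparison, my reading of the docs is that with virtio-scsi the
iothread is assigned to the controller rather than to each disk; a
minimal domain XML sketch (the thread count is a placeholder):

  <domain type='kvm'>
    ...
    <iothreads>1</iothreads>
    <devices>
      <controller type='scsi' model='virtio-scsi'>
        <!-- disks attached to this controller are handled by IOThread 1 -->
        <driver iothread='1'/>
      </controller>
      ...
    </devices>
  </domain>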
[1] https://libvirt.org/formatdomain.html#hard-drives-floppy-disks-cdroms
Nir
proper config for qemu's host_cdrom
by daggs
Greetings,
I was wondering what the proper way is to configure a SCSI cdrom pass-through so that when the qemu command line is generated, host_cdrom is used instead of host_device.
Looking at https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_block.c#L1090, I see that hostcdrom must be true.
In order for that to be true, the following must hold (see https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_domain.c#L...):
1. disk->device == VIR_DOMAIN_DISK_DEVICE_CDROM
2. disksrc->format == VIR_STORAGE_FILE_RAW
3. virStorageSourceIsBlockLocal(disksrc)
4. virFileIsCDROM(disksrc->path) == 1
virFileIsCDROM asks the kernel, so I assume that since disksrc->path points to the actual device path (I can see it in the qemu command line), #4 returns 1.
The other three conditions are more complicated. My XML snippet is this:
<devices>
<hostdev mode='subsystem' type='scsi' rawio='yes'>
<source>
<adapter name='scsi_host0'/>
<address bus='0' target='0' unit='0'/>
</source>
<readonly/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</hostdev>
</devices>
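For comparison, conditions 1-3 describe a <disk> element rather than a
<hostdev>; a minimal sketch that should satisfy all four checks (assuming
the drive appears on the host as /dev/sr0, a placeholder path):

  <disk type='block' device='cdrom'>
    <!-- raw format on a local block device the kernel reports as a CDROM -->
    <driver name='qemu' type='raw'/>
    <source dev='/dev/sr0'/>
    <target dev='sda' bus='scsi'/>
    <readonly/>
  </disk>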
Thanks,
Dagg
about the newly added "check" and "type" attributes for the interface mac element
by Yalan Zhang
Hi all,
I have done some tests for the new attributes "check" and "type"; could
you please help review them? I also have some questions about the patch,
please take a look. Thank you!
The questions:
1. In step 4 below, the error message should be updated:
Actual results:
XML error: invalid mac address **check** value: 'next'. Valid values are
"generated" and "static".
expected results:
XML error: invalid mac address **type** value: 'next'. Valid values are
"generated" and "static".
2. I have checked the VMware OUI definition and found this:
https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.networki...
It says the VMware OUI is 00:50:56, not the 00:0c:29 used in the patches.
Am I missing something?
3. Could you please tell us more about the user story? I cannot understand
the scenario in which "it will ignore all the checks libvirt does about the
origin of the MAC address (whether or not it's in a VMware OUI) and forward
the original one to the ESX server, telling it not to check it either". Does
it happen when we try to convert a KVM guest to a VMware guest?
4. How should we test this as libvirt QE? Are the test scenarios below
sufficient without an ESX environment?
Test steps:
1. Start a VM with several interface configurations, with MACs in the "00:0c:29" range:
# virsh dumpxml rhel | grep /interface -B12
...
<interface type='network'>
<mac address='00:0c:29:e7:9b:cb' type='generated' check='yes'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x04' slot='0x00'
function='0x0'/>
</interface>
<interface type='network'>
<mac address='00:0c:29:3b:e0:50' type='static' check='no'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x0d' slot='0x00'
function='0x0'/>
</interface>
<interface type='network'>
<mac address='00:0c:29:73:f6:dc' type='generated' check='no'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x0e' slot='0x00'
function='0x0'/>
</interface>
<interface type='network'>
<mac address='00:0c:29:aa:dc:6c' type='static' check='yes'/>
<source network='default'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x0f' slot='0x00'
function='0x0'/>
</interface>
# virsh start rhel
Domain rhel started
2. Log in to the guest and check the interfaces:
# ip addr
...
2: enp5s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:e7:9b:cb brd ff:ff:ff:ff:ff:ff
inet 192.168.122.142/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp5s0
valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::351c:686a:863e:4a7f/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3: enp11s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:3b:e0:50 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.202/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp11s0
valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::2b79:4675:6c59:6822/64 scope link noprefixroute
valid_lft forever preferred_lft forever
4: enp12s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:73:f6:dc brd ff:ff:ff:ff:ff:ff
inet 192.168.122.33/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp12s0
valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::e43d:555:ba85:4030/64 scope link noprefixroute
valid_lft forever preferred_lft forever
5: enp13s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state
UP group default qlen 1000
link/ether 00:0c:29:aa:dc:6c brd ff:ff:ff:ff:ff:ff
inet 192.168.122.161/24 brd 192.168.122.255 scope global dynamic
noprefixroute enp13s0
valid_lft 3584sec preferred_lft 3584sec
inet6 fe80::f32d:e2e8:9c8b:47fd/64 scope link noprefixroute
valid_lft forever preferred_lft forever
3. Start a VM without the "check" and "type" attributes, and check that the
live XML does not include these attributes either.
# virsh start vm1
Domain vm1 started
# virsh dumpxml vm1 | grep /interface -B8
</controller>
<interface type='network'>
<mac address='52:54:00:bb:cd:89'/>
<source network='default'
portid='b02dc78f-69ad-4db7-870c-f371fd730537' bridge='virbr0'/>
<target dev='vnet22'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00'
function='0x0'/>
</interface>
4. Negative test:
Set "<mac address='52:54:00:bb:cd:89' type='next'/>" in virsh edit
# virsh edit vm1
error: XML document failed to validate against schema: Unable to validate
doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
Failed. Try again? [y,n,i,f,?]: ----> press 'i'
error: XML error: invalid mac address check value: 'next'. Valid values are
"generated" and "static".
Failed. Try again? [y,n,f,?]:
-------
Best Regards,
Yalan Zhang
IRC: yalzhang