Virtual Bridge "Network" for Sandbox
by Paul O'Rorke
Hi all,
I couldn't find any documentation on this; hopefully someone can point
me in the right direction.
I recently set up a sandboxed environment for our developers. There are
domain controller(s), workstations and servers in there. The whole
thing is running on a single host using a "Virtual Network" defined in
virt-manager on that host.
Now I find I want to add more guests and there are not enough resources
on this one host. Can I somehow make this Virtual Network available to
two hosts? I do not want to move to a bridged network and have to
physically join the two hosts with a discrete link when they are already
on the same subnet at the host level.
Is that possible?
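The closest approach I can imagine (a sketch, untested; the bridge name,
VNI, and host addresses below are placeholders I made up) is to stitch the
two hosts' virtual network bridges together with a VXLAN tunnel carried
over the existing host subnet, so no discrete physical link is needed:

  # On host A (192.0.2.1), peering with host B (192.0.2.2); repeat on
  # host B with the addresses swapped. virbr1 is the bridge that
  # virt-manager created for the isolated "Virtual Network".
  ip link add vxlan42 type vxlan id 42 local 192.0.2.1 remote 192.0.2.2 dstport 4789
  ip link set vxlan42 master virbr1
  ip link set vxlan42 up

Is something along those lines the supported way, or is there a native
libvirt mechanism for this?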
--
*Paul O'Rorke*
get contents of a storage volume in bytes
by Shashwat shagun
Hi, I have two machines with libvirt installed. I want to move a storage
volume from the first machine to the second with code.
I'm thinking of reading the storage volume into a byte array/stream, then
uploading it through my own custom handler (written in Go) to the second
machine's custom handler, which will then write the byte stream to a
libvirt storage volume.
Is there a libvirt function to read storage volume into byte stream?
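Something like this is what I have in mind on the sending side (a minimal
sketch, assuming the Go bindings at libvirt.org/go/libvirt; the pool and
volume names are placeholders):

  package main

  import (
      "io"
      "os"

      "libvirt.org/go/libvirt"
  )

  func main() {
      conn, err := libvirt.NewConnect("qemu:///system")
      if err != nil {
          panic(err)
      }
      defer conn.Close()

      pool, err := conn.LookupStoragePoolByName("default") // placeholder pool
      if err != nil {
          panic(err)
      }
      vol, err := pool.LookupStorageVolByName("disk.img") // placeholder volume
      if err != nil {
          panic(err)
      }

      stream, err := conn.NewStream(0)
      if err != nil {
          panic(err)
      }
      // Offset 0 and length 0 request the whole volume (virStorageVolDownload).
      if err := vol.Download(stream, 0, 0, 0); err != nil {
          panic(err)
      }

      buf := make([]byte, 256*1024)
      for {
          n, err := stream.Recv(buf)
          if n > 0 {
              // Replace os.Stdout with the custom handler's transport.
              os.Stdout.Write(buf[:n])
          }
          if err == io.EOF {
              break
          }
          if err != nil {
              panic(err)
          }
      }
      stream.Finish()
  }

The receiving side would presumably be the mirror image, with
vol.Upload() and stream.Send().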
Shashwat.
shashwatshagun2581@gmail.com
RBD volume pools and locks
by Guy Godfroy
Hello,
One of my libvirt clusters is using an RBD volume pool. I wish I could
configure the libvirt lock manager to use RBD locks. Is that possible?
I read at https://libvirt.org/kbase/locking.html that the libvirt lock
manager ships with two active plugins. Do any other plugins exist?
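For context, here is how I enable one of the two shipped plugins today
(a sketch of a lockd setup; the lockspace directory is just whatever
shared filesystem is available):

  # /etc/libvirt/qemu.conf
  lock_manager = "lockd"

  # /etc/libvirt/qemu-lockd.conf
  file_lockspace_dir = "/var/lib/libvirt/lockd/files"

I am looking for an equivalent that would take the locks in RBD itself.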
Thanks.
Guy Godfroy
virsh edit does not work when <initiator> and <auth> are used in config
by Fourhundred Thecat
Hello,
I am having a problem when using "virsh edit <vm_name>".
My VM has a network iSCSI disk defined:
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi'
          name='iqn.1992-08.com.netapp:5481.60080e50001ff2000000000051aee24d/0'>
    <host name='10.1.212.52' port='3260'/>
    <initiator>
      <iqn name='iqn.2013-01.bla.bla:01:test'/>
    </initiator>
    <auth username='myname'>
      <secret type='iscsi' usage='libvirtiscsi'/>
    </auth>
  </source>
  ...
</disk>
When I defined the VM the first time, libvirt, as always, reordered the
lines in the XML config file as it liked. One of the reorderings it did
was to put the "<initiator>" block above the "<auth>" block.
But once I want to edit with "virsh edit <vm_name>", whatever change I
make, even one unrelated to the iSCSI disk, it reports an error:
error: XML document failed to validate against schema: Unable to
validate doc against /usr/share/libvirt/schemas/domain.rng
Extra element devices in interleave
Element domain failed to validate content
After long trial and error, I found I can finally save it when I reorder
the "<initiator>" and "<auth>" blocks so that "<auth>" is above
"<initiator>". Once I save it, libvirt reorders them back into the order
that will generate the error the next time I attempt to edit.
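In other words, the only ordering the schema would accept from the editor
was:

<source protocol='iscsi'
        name='iqn.1992-08.com.netapp:5481.60080e50001ff2000000000051aee24d/0'>
  <host name='10.1.212.52' port='3260'/>
  <auth username='myname'>
    <secret type='iscsi' usage='libvirtiscsi'/>
  </auth>
  <initiator>
    <iqn name='iqn.2013-01.bla.bla:01:test'/>
  </initiator>
</source>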
Anyway, this seems like a bug, and an especially evil one.
How can I get rid of this behaviour?
thanks,
volume pool for LXC
by Guy Godfroy
Hello,
Is it possible to use volume pools for LXC? For instance, LVM pools.
It seems to be supported by the plain LXC commands (an example invocation
follows the excerpt below). From the man page of lxc-create:
-B, --bdev backingstore
    'backingstore' is one of 'dir', 'lvm', 'loop', 'btrfs', 'zfs', 'rbd',
    or 'best'. The default is 'dir', meaning that the container root
    filesystem will be a directory under /var/lib/lxc/container/rootfs.
    This backing store type allows the optional --dir ROOTFS to be
    specified, meaning that the container rootfs should be placed under
    the specified path, rather than the default. (The 'none' backingstore
    type is an alias for 'dir'.) If 'btrfs' is specified, then the target
    filesystem must be btrfs, and the container rootfs will be created as
    a new subvolume. This allows snapshotted clones to be created, but
    also causes rsync --one-filesystem to treat it as a separate
    filesystem.

    If backingstore is 'lvm', then an lvm block device will be used and
    the following further options are available: --lvname lvname1 will
    create an LV named lvname1 rather than the default, which is the
    container name. --vgname vgname1 will create the LV in volume group
    vgname1 rather than the default, lxc. --thinpool thinpool1 will
    create the LV as a thin-provisioned volume in the pool named
    thinpool1 rather than the default, lxc. --fstype FSTYPE will create
    an FSTYPE filesystem on the LV, rather than the default, which is
    ext4. --fssize SIZE will create an LV (and filesystem) of size SIZE
    rather than the default, which is 1G.

    If backingstore is 'loop', you can use --fstype FSTYPE and
    --fssize SIZE as 'lvm'. The default values for these options are the
    same as 'lvm'.

    If backingstore is 'rbd', then you will need to have a valid
    configuration in ceph.conf and a ceph.client.admin.keyring defined.
    You can specify the following options: --rbdname RBDNAME will create
    a blockdevice named RBDNAME rather than the default, which is the
    container name. --rbdpool POOL will create the blockdevice in the
    pool named POOL, rather than the default, which is 'lxc'.

    If backingstore is 'best', then lxc will try, in order, btrfs, zfs,
    lvm, and finally a directory backing store.
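For example (a hypothetical invocation; the container, VG, and thin pool
names are made up):

  lxc-create -n web1 -t download -B lvm --vgname vg_lxc --thinpool lxcpool --fssize 5G

What I am wondering is whether libvirt's LXC driver can consume a storage
pool the same way.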
qemu hook: event for source host too
by Guy Godfroy
Hello, this is my first time posting on this mailing list.
I wanted to suggest an addition to the qemu hook. I will explain it
through my own use case.
I use a shared LVM storage as a volume pool between my nodes. I use
lvmlockd in sanlock mode to protect both LVM metadata corruption and
concurrent volume mounting.
When I run a VM on a node, I activate the desired LV with exclusive lock
(lvchange -aey). When I stop the VM, I deactivate the LV, effectively
releasing the exclusive lock (lvchange -an).
When I migrate a VM (both live and offline), the LV has to be activated
on both source and target nodes, so I have to use a shared lock
(lvchange -asy). That's why I need a hook event on the source host too
(as far as I can tell from my tests, the migration event is only
triggered on the target host).
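For illustration, this is roughly what my hook does today (a sketch; it
assumes one LV per guest, named after the guest, in a VG called vg_vms):

  #!/bin/sh
  # /etc/libvirt/hooks/qemu
  # $1 = guest name, $2 = operation. "prepare" fires on start and on the
  # migration target host; there is currently no counterpart that fires
  # on the migration source host.
  guest="$1"
  op="$2"
  case "$op" in
      prepare) lvchange -aey "vg_vms/$guest" ;;
      release) lvchange -an "vg_vms/$guest" ;;
  esac

With a source-side event I could switch both hosts to lvchange -asy for
the duration of the migration.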
Is such a feature a possibility?
Thanks for your attention.
Guy Godfroy
Missing vnet interface in migration
by Ales Musil
Hi,
in oVirt we have come across an issue with a missing vnet interface
(original bug report [0]). This happened rarely in a huge batch of
migrations, e.g. 2 failed out of 1200, and we came to the conclusion
that this is happening somewhere out of our control, as libvirt reports:
2020-05-17 02:34:11,530+0000 ERROR (migsrc/623f6cc1) [virt.vm]
(vmId='623f6cc1-d49f-43a4-8170-b5fec01f2897') operation failed:
binding 'vnet54' is already being removed (migration:278)
Unfortunately we don't have libvirt logs, and producing them might not be
possible. Is there anything else we can do to track down this issue?
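If we do get a chance to capture them, would the usual debug settings be
sufficient for this class of problem? A sketch of what I would enable
(the filter list is my guess at a reasonable set):

  # /etc/libvirt/libvirtd.conf
  log_filters="1:qemu 1:libvirt 3:event 3:json 3:object 1:util"
  log_outputs="1:file:/var/log/libvirt/libvirtd.log"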
Thank you.
Regards,
Ales
[0] https://bugzilla.redhat.com/1837395
--
Ales Musil
Software Engineer - RHV Network
Red Hat EMEA <https://www.redhat.com>
amusil@redhat.com IM: amusil
<https://red.ht/sig>
Feature request? Auto start vm upon next shutdown
by Marc Roos
Sometimes when you change the configuration, the change only takes effect
after a shutdown. To avoid having to monitor for the VM to shut down just
to start it again, it would be nice to have libvirt start it again one
time automatically. Something like:
1. change the network interface of a running guest
2. at virt-manager's prompt "Some changes may require a guest shutdown
to take effect", introduce a check box 'Auto start vm upon next shutdown'
3. shutdown the guest
4. after the guest completes shutdown, libvirt automatically starts it
again.
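In the meantime, this is the workaround I can think of (a sketch; the
domain name is a placeholder):

  #!/bin/sh
  # Block until the guest's next lifecycle event; when it reports
  # Stopped, start the guest again once.
  dom="myguest"
  while ev=$(virsh event "$dom" --event lifecycle); do
      case "$ev" in
          *Stopped*) virsh start "$dom"; break ;;
      esac
  done

A built-in one-shot flag would make this script unnecessary.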
NVDIMM sizes and DIMM hot plug
by Milan Zamazal
Hi,
I've found out that NVDIMM size and label size matter for regular
(non-NV) DIMM hot plug. If the NVDIMM is not aligned correctly, the
guest OS will not accept the hot plugged memory and will complain with
messages such as
Block size [0x8000000] unaligned hotplug range: start 0x225000000, size 0x10000000
The start address above is also reported within <memory> element of the
hot plugged memory in the domain XML:
<address type='dimm' slot='1' base='0x225000000'/>
Apparently, in order to make memory hot plug work in the guest OS,
the inserted memory must be aligned to the platform memory alignment
(128 MB on x86_64).
I'd like to clarify how libvirt computes the DIMM base address above.
How is the NVDIMM memory range determined? According to my experiments,
it seems the specified NVDIMM <size> is taken, the NVDIMM <label> size is
subtracted from it, and the resulting value is reduced to the nearest
multiple of the NVDIMM <alignsize>. Is this observation correct? Is it
guaranteed to be stable in future versions? I need to determine the
right NVDIMM size to make the subsequent memory modules correctly
aligned, and afterwards I can't change the NVDIMM size without damaging
data stored in the NVDIMM.
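A worked example of what I mean, if my observation is correct (the path
and numbers are made up for illustration): with a 128 KiB <label> and
128 MB (131072 KiB) platform alignment, choosing <size> as a 128 MB
multiple plus the label size should leave the guest-visible range exactly
aligned:

<memory model='nvdimm'>
  <source>
    <path>/dev/pmem0</path>
    <alignsize unit='KiB'>131072</alignsize>
  </source>
  <target>
    <size unit='KiB'>4194432</size>  <!-- 32 * 131072 + 128 -->
    <node>0</node>
    <label>
      <size unit='KiB'>128</size>
    </label>
  </target>
</memory>

That gives 4194432 - 128 = 4194304 KiB of guest-visible memory, an exact
multiple of 131072 KiB, so the next hot plugged DIMM should start on a
128 MB boundary.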
Additionally, when adjusting maxMemory due to NVDIMM presence, should I
increase it by the specified NVDIMM <size> or a different value?
Thank you,
Milan
Guidance for First Timer
by Sanyam Rajpal
Hey everyone,
I am a first-time contributor and I am interested in this project. I
want to make some open-source contributions and learn. I am not looking
to get enrolled in the program.
Sanyam