[libvirt-users] virDomainBlockInfo for local volume
by Shahar Havivi
Hi,
I am using virStorageVolGetInfo to get the volume size and virStorageVol.download to download a
normal file for the VDSM project.
I want to add support for block devices as well. I was able to see the size
via virDomainBlockInfo and download via virDomain.blockPeek, which works
fine for both files and block devices.
1. Can I depend on virDomainBlockInfo for non-block devices?
2. Does virDomainBlockInfo return the correct physical size for a normal file?
3. Performance-wise, is there a difference between virStorageVol.download and virDomain.blockPeek?
Assuming the VMs are down, this is the code I intend to use for all formats:

vm = con.lookupByName(options.vmname)
info = vm.blockInfo(src)
physical = info[2]           # physical size in bytes
off = 0
bufsize = 1024 * 1024        # chunk size; defined elsewhere in the original script
with open(dest, 'wb') as f:  # binary mode, since blockPeek returns raw bytes
    while off < physical:
        size = min(bufsize, physical - off)
        buf = vm.blockPeek(src, off, size)
        f.write(buf)
        off += size
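For comparison, the virStorageVol.download path delivers data through a libvirt stream rather than blockPeek. A minimal sketch of that path, split so the pumping loop is testable on its own; the pool and volume names in the commented usage are illustrative assumptions, not taken from the original setup:

```python
def copy_stream(recv, write, bufsize=1024 * 1024):
    """Pump data from a libvirt stream's recv() into write() until EOF.

    recv(n) must return up to n bytes and an empty value at end of
    stream, matching the behaviour of libvirt.virStream.recv().
    Returns the total number of bytes copied.
    """
    total = 0
    while True:
        buf = recv(bufsize)
        if not buf:          # empty result signals end of stream
            break
        write(buf)
        total += len(buf)
    return total

# Hedged usage against libvirt (untested sketch; assumes a running
# libvirtd, a storage pool "default" and a volume "disk.img"):
#
# import libvirt
# conn = libvirt.open("qemu:///system")
# vol = conn.storagePoolLookupByName("default").storageVolLookupByName("disk.img")
# st = conn.newStream(0)
# vol.download(st, 0, 0, 0)          # offset 0, length 0 = whole volume
# with open("disk.img.copy", "wb") as f:
#     copy_stream(st.recv, f.write)
# st.finish()
```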
[libvirt-users] Trying virgl in fedora 25
by Gianluca Cecchi
Hello,
I'm trying to test what I found here:
http://blog.wikichoon.com/2016/05/spice-openglvirgl-acceleration-on.html
and here:
https://www.kraxel.org/blog/tag/virgl/
My system is a fedora 25 laptop, born as version 23 and gradually updated
to 24 and now 25.
I had a fedora 25 guest that worked ok with "normal" spice, and I'm now
trying to configure it with virgl.
Main components currently installed on host:
qemu-kvm-2.7.0-7.fc25.x86_64
libvirt-2.2.0-2.fc25.x86_64
virt-manager-1.4.0-4.fc25.noarch
VirtualGL-2.4-7.fc25.x86_64
Commands executed to modify configuration:
[root@ope46 qemu]# virt-xml f25 --confirm --edit --video
clearxml=yes,model=virtio,accel3d=yes
--- Original XML
+++ Altered XML
@@ -94,8 +94,9 @@
<address type="pci" domain="0x0000" bus="0x00" slot="0x04"
function="0x0"/>
</sound>
<video>
- <model type="virtio" heads="1" primary="yes"/>
- <address type="pci" domain="0x0000" bus="0x00" slot="0x02"
function="0x0"/>
+ <model type="virtio">
+ <acceleration accel3d="yes"/>
+ </model>
</video>
<redirdev bus="usb" type="spicevmc">
<address type="usb" bus="0" port="1"/>
Define 'f25' with the changed XML? (y/n): y
Domain 'f25' defined successfully.
[root@ope46 qemu]# virt-xml f25 --confirm --edit --graphics
clearxml=yes,type=spice,gl=on,listen=none
WARNING qemu/libvirt version may not support spice GL
--- Original XML
+++ Altered XML
@@ -86,9 +86,9 @@
</channel>
<input type="mouse" bus="ps2"/>
<input type="keyboard" bus="ps2"/>
- <graphics type="spice" autoport="yes">
- <listen type="address"/>
- <image compression="off"/>
+ <graphics type="spice" autoport="no">
+ <gl enable="yes"/>
+ <listen type="none"/>
</graphics>
<sound model="ich6">
<address type="pci" domain="0x0000" bus="0x00" slot="0x04"
function="0x0"/>
Define 'f25' with the changed XML? (y/n): y
Domain 'f25' defined successfully.
[root@ope46 qemu]#
I don't know if the WARNING above is generic or if it performs an actual
pre-check on the system...
Anyway, from both virt-manager and virsh I get an error:
[root@ope46 qemu]# virsh start f25
error: Failed to start domain f25
error: internal error: process exited while connecting to monitor:
2016-12-13T11:18:37.784324Z qemu-system-x86_64: egl: no drm render node
available
2016-12-13T11:18:37.784375Z qemu-system-x86_64: Failed to initialize EGL
render node for SPICE GL
[root@ope46 qemu]#
and this in /var/log/libvirt/qemu/f25.log:
2016-12-13 11:18:37.584+0000: starting up libvirt version: 2.2.0, package:
2.fc25 (Fedora Project, 2016-11-14-21:04:29, buildvm-25.phx2.fedoraproject.org),
qemu version: 2.7.0(qemu-2.7.0-7.fc25), hostname: ope46.ceda.polimi.it
LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
QEMU_AUDIO_DRV=spice /usr/bin/qemu-kvm -name guest=f25,debug-threads=on -S
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-2-f25/master-key.aes
-machine pc-i440fx-2.6,accel=kvm,usb=off,vmport=off -cpu Nehalem -m 4096
-realtime mlock=off -smp 1,sockets=1,cores=1,threads=1
-uuid d4c23620-b805-4656-9b97-d9d4ab9dba63 -no-user-config -nodefaults
-chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-2-f25/monitor.sock,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control
-rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=discard
-no-hpet -no-shutdown -global PIIX4_PM.disable_s3=1 -global PIIX4_PM.disable_s4=1
-boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x6.0x7
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x6
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x6.0x1
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x6.0x2
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x8
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5
-drive if=none,id=drive-ide0-0-0,readonly=on
-device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=1
-drive file=/var/lib/libvirt/images/f25.qcow2,format=qcow2,if=none,id=drive-scsi0-0-0-0
-device scsi-hd,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=2
-netdev tap,fd=26,id=hostnet0
-device rtl8139,netdev=hostnet0,id=net0,mac=52:54:00:17:49:49,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0
-chardev spicevmc,id=charchannel0,name=vdagent
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
-spice port=0,disable-ticketing,gl=on,seamless-migration=on
-device virtio-vga,id=video0,virgl=on,bus=pci.0,addr=0x2
-device intel-hda,id=sound0,bus=pci.0,addr=0x4
-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0
-chardev spicevmc,id=charredir0,name=usbredir
-device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=1
-chardev spicevmc,id=charredir1,name=usbredir
-device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=2
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
char device redirected to /dev/pts/1 (label charserial0)
2016-12-13T11:18:37.784324Z qemu-system-x86_64: egl: no drm render node
available
2016-12-13T11:18:37.784375Z qemu-system-x86_64: Failed to initialize EGL
render node for SPICE GL
2016-12-13 11:18:37.891+0000: shutting down
What can I check? Is there any prerequisite for the laptop's video adapter, or something else?
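The "egl: no drm render node available" error suggests checking whether the host kernel exposes a DRM render node at all; a quick check, as a sketch assuming the usual /dev/dri layout:

```shell
# virgl/SPICE GL needs a DRM render node (e.g. /dev/dri/renderD128),
# which the host GPU driver must expose.
if ls /dev/dri/renderD* >/dev/null 2>&1; then
    echo "render node present:"
    ls /dev/dri/renderD*
else
    echo "no render node: check that the host GPU driver exposes one"
fi
```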
Thanks in advance,
Gianluca
Re: [libvirt-users] virsh not detecting hugepage mount; disabled by config?
by Manuel Ullmann
Thanks a lot you two,
yes, hugetlbfs was not mounted when libvirtd started… It was a long
night. I put it into fstab, and since I do not allocate the pages on
regular boots, it won't eat my RAM (that was the main consideration for
doing it this way).
As a last means before going to sleep, I added
<qemu:arg value='-numa'/>
<qemu:arg value='node,memdev=mem42'/>
<qemu:arg value='-object'/>
<qemu:arg
value='memory-backend-file,size=6G,mem-path=/var/lib/hugetlbfs,share=on,id=mem42'/>
to the xml which worked too (surprise), but I have it working now in the
intended way.
Then, because I never got to the last part of the comment on the
qemu.conf option, I had to do another 3 reboots. Well, reading helps.
So the fstab entry reads now:
hugetlbfs /var/lib/hugetlbfs/libvirt/qemu hugetlbfs
relatime,rw,pagesize=1073741824 0 0
with /var/lib/hugetlbfs being the mount point in qemu.conf.
At least I didn’t forget to create the mount point before rebooting.
Best regards,
Manuel
> Hi,
>
> I use gentoo with the desktop+systemd profile to run a Windows 10 VM
> with pci passthrough and hugetlbfs.
>
> Things I had to do for hugetlbfs:
> 1) Made sure its support was enabled in the kernel.
> 2) Passed the allocation parameters to the kernel. There are other
> ways, but I use a 1GB page size and I have to allocate the pages
> before memory gets fragmented. Note, you do that with hugepages=3072.
> 3) Passed the hugepage settings to the libvirt VM domain (XML).
>
> That is it. I don't mount them myself. I think (can't check now) they
> are automounted to /dev/hugepages. By default, libvirtd runs as root,
> and qemu as user qemu. Consuming hugepages and releasing them should
> be transparent.
>
> I suggest starting without softlevel=qemuvm into the default runlevel,
> and then checking /proc/meminfo. Hugepages total and free should be > 0.
> Do the same with the qemuvm runlevel. Maybe this runlevel misses
> something.
[libvirt-users] virsh not detecting hugepage mount; disabled by config?
by Manuel Ullmann
Hi,
I’m struggling with virsh not detecting my hugepage mount.
I have the following kernel command line:
BOOT_IMAGE=/vmlinuz-4.8.13-gentoo root=/dev/mapper/gensd-gentoo ro quiet
splash intel_iommu=on video=efifb:off,vesafb:off,simplefb:off
splash=verbose,theme:livedvd-aurora kvm.ignore_msrs=1
transparent_hugepage=never hugepages=3072 softlevel=qemuvm
My startup script outputs the following:
hugetlbfs /var/lib/hugetlbfs hugetlbfs
rw,relatime,pagesize=2097152,uid=77,gid=77,mode=0770 0 0
[2016.12.13 22:22:50 virsh 2808] ERROR Failed to start domain win10
[2016.12.13 22:22:50 virsh 2808] ERROR internal error: hugetlbfs
filesystem is not mounted or disabled by administrator config
virsh was unsuccessful
So hugetlbfs is definitely mounted, but virsh either does not detect it,
or some magical option that I haven't found in a day of searching
disables it (the documentation could explain that better). Since the
Ubuntu guys refer to the KVM_HUGEPAGES environment variable, I tried
adding it to the start-stop-daemon environment, but it seems irrelevant.
77 is the qemu user id, and I'm quite sure permission issues have been
ruled out (I tried root permissions as well); besides, I used to get
permission errors occasionally and don't get them anymore. If you could
provide a hint to the magical administrator configuration option, that
would be helpful. I tried hugeadm as well, pointing its config to the
correct destination.
Thanks in advance,
Manuel
PS:
The virsh script reads like this:
#!/bin/bash
# bash, not sh: [[ ]] below is a bashism
cmdline="$(cat /proc/cmdline)"
if [[ "${cmdline##* }" == "softlevel=qemuvm" ]]; then
    mount -o rw,relatime,pagesize=2097152,uid=77,gid=77,mode=0770 \
        -t hugetlbfs hugetlbfs /var/lib/hugetlbfs
    sysctl kernel.shmmax=6442450944
    grep hugetlb /proc/mounts >> /var/log/virsh.log
    sed -i -e '/^hugetlb/{s/^/\#/}' /etc/libvirt/qemu.conf
    # wait up to 10 seconds for the libvirtd socket to appear;
    # && is not valid inside a single [ ] test, so use two tests
    counter=0
    while [ "${counter}" -lt 10 ] && [ ! -S /var/run/libvirt/libvirt-sock ]; do
        sleep 1
        counter=$((counter + 1))
    done
    if [ "$counter" -gt 9 ]; then
        echo "libvirtd socket generation timed out" >> /var/log/virsh.log
    fi
    if pidof libvirtd &>/dev/null; then
        LC_ALL=C /usr/bin/virsh -l /var/log/virsh.log start win10
        if [ $? -gt 0 ]; then
            echo "virsh was unsuccessful" >> /var/log/virsh.log
            reboot
        fi
    else
        echo "libvirtd is not started" >> /var/log/virsh.log
        reboot
    fi
fi
reboot is more convenient, since this is a vfio-igd passthrough vm.
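A quick way to see roughly what libvirt would see — the hugepage counters and the hugetlbfs mounts — is, as a sketch:

```shell
# Hugepage counters from the kernel; empty counters or a missing
# hugetlbfs mount would explain the "not mounted" error.
grep -E 'HugePages_(Total|Free)|Hugepagesize' /proc/meminfo \
    || echo "no hugepage counters found"
grep hugetlbfs /proc/mounts || echo "no hugetlbfs mount"
```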
[libvirt-users] Kernel panic after kernel update
by Michael Ströder
Hi!
On one qemu-kvm instance a kernel update failed, and booting the image is no
longer possible (see below). A dozen other VMs are working correctly after the
upgrade. Is there anything I can quickly try to make it work again?
(This is not really a bad emergency case because I could simply re-install and
re-configure with ansible. But I'd like to learn about these cases...)
Ciao, Michael.
Loading Linux 4.8.12-1-default ...
[ 0.651328] Kernel panic - not syncing: VFS: Unable to mount root fs on
unknown-block(0,0)
[ 0.654408] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 4.8.12-1-default #1
[ 0.656941] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS
rel-1.9.1-0-gb3ef39f-prebuilt.qemu-project.org 04/01/2014
[ 0.661166] 0000000000000000 ffffffff813a4252 ffff9a5348c4d000 ffff9a5349957e60
[ 0.664652] ffffffff8118e3fa ffff9a5300000010 ffff9a5349957e70 ffff9a5349957e08
[ 0.668124] ffffffff8118e715 ffff9a5349957e78 0000000000000001 0000000000000000
[ 0.671457] Call Trace:
[ 0.672460] [<ffffffff8102eefe>] dump_trace+0x5e/0x310
[ 0.674389] [<ffffffff8102f2cb>] show_stack_log_lvl+0x11b/0x1a0
[ 0.676570] [<ffffffff81030001>] show_stack+0x21/0x40
[ 0.678480] [<ffffffff813a4252>] dump_stack+0x5c/0x7a
[ 0.680369] [<ffffffff8118e3fa>] panic+0xd5/0x217
[ 0.682169] [<ffffffff81f56419>] mount_block_root+0x1fe/0x2c0
[ 0.684285] [<ffffffff81f56668>] prepare_namespace+0x12b/0x161
[ 0.686439] [<ffffffff81f56103>] kernel_init_freeable+0x1f6/0x20a
[ 0.688675] [<ffffffff816c727a>] kernel_init+0xa/0x100
[ 0.690623] [<ffffffff816d485f>] ret_from_fork+0x1f/0x40
[ 0.696659] DWARF2 unwinder stuck at ret_from_fork+0x1f/0x40
[ 0.698858]
[ 0.699533] Leftover inexact backtrace:
[ 0.699533]
[ 0.701580] [<ffffffff816c7270>] ? rest_init+0x90/0x90
[ 0.703944] Kernel Offset: 0x0 from 0xffffffff81000000 (relocation range:
0xffffffff80000000-0xffffffffbfffffff)
[libvirt-users] How to get the best I/O performance for a Windows 2008 / MSSQL guest VM
by Roberto Fichera
Hi There,
I've moved some Windows 2012 + MSSQL VMs from an old ESXi 5.5 machine
to a more recent and powerful machine running Fedora 24 x86_64 with libvirt + KVM
virtualization. I've moved the VM filesystems to LVM slices, installed the VirtIO
drivers in all the Windows VMs, and set both the disk and the network interface to use VirtIO.
So far so good; everything works pretty well. Now I would like to tune the MSSQL VM's
disk and network interfaces to get the best performance possible.
Regarding the network interface, I've set it up as a bridge instead of going through
macvtap, but I'm not sure which is best in this case.
Regarding the disk, since it's on LVM I've chosen cache mode none and IO mode native.
Here too I cannot judge what the best setup for the workload is; I'm undecided whether
to use IO mode threads along with directsync cache mode instead.
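For reference, a cache=none + io=native virtio disk backed by an LVM volume is typically expressed like this in the domain XML; the logical volume path below is illustrative, not taken from this setup:

```xml
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <!-- hypothetical LVM logical volume path -->
  <source dev='/dev/vg0/mssql-disk'/>
  <target dev='vda' bus='virtio'/>
</disk>
```

Switching to io='threads' with cache='directsync' only changes the driver line, so both variants are easy to benchmark against each other.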
Finally, I've set "ionice -c 1 -p <qemu-pid> -n 0" and "renice -n -10 <qemu-pid>" for the
VM that I want to perform best.
Even with the above setup, the MSSQL VM's performance is only similar to the old machine
running ESXi 5.5, so can anyone suggest where to look and/or what the right setup would be?
Thanks in advance.
Roberto Fichera
[libvirt-users] How can openstack retrieve the CPU usage of a lxc via libvirt?
by WANG Cheng D
Dear all,
I want to view LXC CPU usage in the OpenStack Dashboard. According to the official OpenStack site, Ceilometer can poll the libvirt daemon to obtain the CPU usage of a virtual machine. I tried the command locally on the libvirtd host, "virsh -c lxc:/// domjobinfo", and got an error: "error: this function is not supported by the connection driver: virDomainGetJobInfo".
I am not sure whether Ceilometer can retrieve the CPU usage of a Linux container; if it can, which virsh command or API should be used for this?
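For what it's worth, a domain's cumulative CPU time is exposed through virDomainGetInfo rather than the job-info API, and a utilisation percentage can be derived from two samples of it. A sketch of that calculation; the commented libvirt usage is an untested assumption, with "mycontainer" as an illustrative name:

```python
def cpu_util_percent(cpu_time_start_ns, cpu_time_end_ns, wall_ns, ncpus):
    """CPU utilisation over a sampling interval, as a percentage.

    cpu_time_*_ns are cumulative guest CPU times in nanoseconds (as
    reported by libvirt's virDomainGetInfo); wall_ns is the wall-clock
    length of the interval and ncpus the number of vCPUs.
    """
    return 100.0 * (cpu_time_end_ns - cpu_time_start_ns) / (wall_ns * ncpus)

# Hedged usage with the libvirt python bindings (untested sketch; assumes
# a running libvirtd and a container named "mycontainer"):
#
# import libvirt, time
# conn = libvirt.open("lxc:///")
# dom = conn.lookupByName("mycontainer")
# t0 = dom.info()[4]   # info() = [state, maxMem, memory, nrVirtCpu, cpuTime]
# time.sleep(1)
# t1 = dom.info()[4]
# print(cpu_util_percent(t0, t1, 10**9, dom.info()[3]))
```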
Thank you for your response in advance.
Cheng
[libvirt-users] Understanding and converting between type='drive' and type='pci'
by Leroy Tennison
First, if this isn't the right list, please tell me where I should post.
I've searched for an explanation, and either I'm searching for the wrong
thing or it's not there. I'm trying to understand the
"controller='0' bus='0' target='0' unit='0'" parameters used with
type='drive' addresses compared to the
"domain='0x0000' bus='0x00' slot='0x05' function='0x0'" parameters used
with type='pci' addresses. It appears that 'dev hda' through 'dev hdd' is
the maximum associated with 'drive'; how many devices are available with
'pci', and what are the valid ranges for those parameters?
If I have an XML using 'drive' and want to convert it to 'pci', what do I
need to do? I've had trouble with this when moving disk images between
hypervisors but have never had the time to investigate fully; does it
have to do with how the drives are referenced (/dev/disk/by-id or
/dev/[h|s|v]da as opposed to UUID)?
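To illustrate the difference being asked about, here is a sketch of how the two address types typically appear in domain XML; the device names, controller indexes, and PCI slot are illustrative, not taken from any particular configuration:

```xml
<!-- type='drive': addresses a unit on a disk controller (IDE shown);
     an IDE bus is what gives the four-device hda-hdd limit -->
<disk type='file' device='disk'>
  <target dev='hda' bus='ide'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>

<!-- type='pci': addresses a slot/function on the PCI bus, used by
     virtio-blk disks, which are themselves PCI devices -->
<disk type='file' device='disk'>
  <target dev='vda' bus='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
```

Converting is usually a matter of switching the target bus (e.g. ide to virtio) and deleting the old address element so libvirt assigns a free PCI address, rather than hand-translating the numbers.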
Thanks for your help.