[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity unit='bytes'>6997998301184</capacity>
<allocation unit='bytes'>10309227031</allocation>
<available unit='bytes'>6977204658176</available>
<source>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
<name>libvirt-pool</name>
<auth type='ceph' username='libvirt'>
<secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
</source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually.
<disk type='network' device='disk'>
<auth username='libvirt'>
<secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
<source protocol='rbd' name='libvirt-pool/kvm01-storage'>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
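[Editor's note, not from the original mail: volumes can usually be created in a defined pool with virsh vol-create-as, and virt-install can attach an existing volume with its vol= disk syntax. The commands below reuse the pool and volume names from the mail; the size and the trailing options are illustrative, not a full invocation.]

```shell
# Create a raw volume in the rbd-backed pool (10G is an example size)
virsh vol-create-as myrbdpool kvm01-storage 10G --format raw

# Reference the existing volume from virt-install
virt-install --name ceph-test.powercraft.nl \
  --disk vol=myrbdpool/kvm01-storage,bus=virtio \
  ...
```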
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where I had about 30 guests get duplicate mac
addresses assigned. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
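[Editor's note, not from the original mail: the collision is easy to sketch. Because the seed is time(NULL) ^ getpid(), hosts whose boot times and PIDs differ only slightly can still XOR to the exact same seed. The timestamps and PIDs below are made up to show one such collision.]

```shell
# Hypothetical host A: time=1000, pid=5; host B: time=1001, pid=4.
# Both XOR to the same seed, so both would generate the same MACs.
seed_a=$(( 1000 ^ 5 ))
seed_b=$(( 1001 ^ 4 ))
echo "$seed_a $seed_b"   # prints "1005 1005" -- identical seeds
```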
[libvirt-users] virDomainCoreDumpWithFormat files created as root
by NoxDaFox
Greetings,
I am dumping a guest VM memory for inspection using the command
"virDomainCoreDumpWithFormat" and the created files appear to belong to
root (both user and group).
I have searched around but didn't find any answer. Is there a way to
instruct QEMU to create those files under a different user?
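[Editor's note, not from the original mail: the same dump can be driven from the shell, and the file comes out root-owned because libvirtd (running as root) writes it. A common workaround is to chown the file afterwards. The domain name, path, and user below are placeholders.]

```shell
# CLI equivalent of virDomainCoreDumpWithFormat: a memory-only dump
# in ELF format; the file is created by root-owned libvirtd.
virsh dump --memory-only --format elf mydomain /tmp/mydomain.core

# Workaround: hand the file to the intended user afterwards.
chown myuser:myuser /tmp/mydomain.core
```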
Thank you.
[libvirt-users] Access to virtualization on a multi-user system
by sbaugh@catern.com
Hi libvirt-users,
I find myself wanting to do something that seems like it must have some
obvious solution: I have multiple users (let's just assume local Unix
accounts) on a Linux system, and I want them all to have access to
KVM-accelerated virtualization. But, I don't want them to be able to
meddle with each other's virtual machines. Is there a solution to this
problem?
Methods of attack that have occurred to me:
- Use PolicyKit to only allow a user to access qemu:///system VMs that
are somehow marked as owned by that user
- Run multiple libvirt qemu:///system daemons and restrict access to
each on a per-user basis
- Allow qemu:///session VMs to actually be KVM-accelerated (this seems
like the best way to do it, but I have no idea if that's even
possible)
Again, the third seems like the best way, but I'm not sure of how to
allow such VMs to be KVM-accelerated, and not sure if it's possible for
them to use anything other than usermode networking.
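[Editor's note, not from the original mail: a sketch of the third approach. qemu:///session guests can use KVM acceleration as long as the unprivileged user can open /dev/kvm, which distributions typically grant through membership in a 'kvm' group. The VM name and image path below are hypothetical; note that session guests default to usermode (SLIRP) networking, which matches the concern above.]

```shell
# Check that the unprivileged user can reach the KVM device
ls -l /dev/kvm            # typically root:kvm, mode 660

# Per-user session connection; each user only sees their own guests
virt-install --connect qemu:///session \
  --virt-type kvm \
  --name user-vm --memory 2048 \
  --disk path=$HOME/user-vm.qcow2,size=10 \
  --import
```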
Hopefully I'm missing some obvious way to do it!
Thanks for any assistance!
[libvirt-users] Slow network performance issues
by Alex Regan
Hi,
I have a Fedora 21 system that's been running fine under normal network
activity, but trying to perform a full backup of the 300GB filesystem is
taking forever because the network speed appears to be very slow.
Using rsync, transfer speeds top out at about 180 kB/s. Using rsync to
copy files on the local filesystem exceeds 55 MB/s, so I don't think
it's a disk issue.
What kind of network speed can I expect copying data across the network
from the guest to another host on the same gigabit network?
I'm using the virtio driver:
# lsmod|grep virtio
virtio_console 28672 0
virtio_balloon 16384 0
virtio_net 32768 0
virtio_blk 20480 4
virtio_pci 24576 0
virtio_ring 20480 5
virtio_blk,virtio_net,virtio_pci,virtio_balloon,virtio_console
virtio 16384 5
virtio_blk,virtio_net,virtio_pci,virtio_balloon,virtio_console
ethtool on the vnet0 interface shows the speed as only 10 Mb/s. Is that
normal? Or even changeable?
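[Editor's note, not from the thread: the 10 Mb/s that ethtool reports for a vnet/tap interface is a fixed placeholder from the tun driver, not a real link limit. A raw TCP test such as iperf3 between the guest and the other host says more about actual throughput. The address below is a placeholder, and iperf3 must be installed on both ends.]

```shell
# On the destination host:
iperf3 -s

# From inside the guest (replace with the real server address):
iperf3 -c 192.0.2.10 -t 30
```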
I've included the qemu command-line below.
# ps ax|grep propguest
1582 ? Sl 25433:37 /usr/bin/qemu-system-x86_64 -machine
accel=kvm -name propguest -S -machine pc-1.2,accel=kvm,usb=off -m 16384
-realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid
b67e10fe-7ef0-a1ca-cecf-3f3506d54e1a -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/propguest.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
-no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x5 -drive
file=/var/lib/libvirt/iso-images/Fedora-18-x86_64-DVD.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
file=/var/lib/libvirt/images/propguest.img,if=none,id=drive-virtio-disk0,format=qcow2
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/var/lib/libvirt/images/propguest-2.img,if=none,id=drive-virtio-disk2,format=raw
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x9,drive=drive-virtio-disk2,id=virtio-disk2
-netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:11:66:5a:51,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -chardev
spicevmc,id=charchannel0,name=vdagent -device
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.spice.0
-device usb-tablet,id=input0 -spice
port=5900,addr=127.0.0.1,disable-ticketing,seamless-migration=on -device
qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,bus=pci.0,addr=0x2 -device
intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -device
virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8
Any ideas greatly appreciated.
Thanks,
Alex
[libvirt-users] qemu-img snapshots configuration
by sergio
Hello.
What is the difference in external snapshot configuration with and
without <domainsnapshot> ?
What is the difference between vda and vdb in the following example?
(From https://libvirt.org/formatsnapshot.html)
<domainsnapshot>
...
<memory snapshot='no'/>
<disks>
<disk name='vda' snapshot='external'>
<driver type='qcow2'/>
<source file='/path/vda-delta.qcow2'/>
</disk>
<disk name='vdb' snapshot='no'/>
</disks>
<domain>
...
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/path/vda-img.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk' snapshot='external'>
<driver name='qemu' type='raw'/>
<source file='/path/vdb-img.qcow2'/>
<backingStore type='file'>
<format type='qcow2'/>
<source file='/path/vdb-delta.qcow2'/>
</backingStore>
<target dev='vdb' bus='virtio'/>
</disk>
...
</devices>
</domain>
</domainsnapshot>
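[Editor's note, not from the original mail: for context, a <domainsnapshot> document like the one above is what gets fed to virsh snapshot-create, while the <domain> part reflects the resulting disk definition. Assuming the snapshot XML is saved as snap.xml for a domain named 'dom':]

```shell
# Create a disk-only external snapshot from the XML description
virsh snapshot-create dom snap.xml --disk-only
```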
--
sergio.
[libvirt-users] QEMU-KVM and bare metal performance impact
by nishtala
Hi,
My name is Rajiv Nishtala and I am a PhD student at UPC.
I am trying to investigate the performance (instructions per second
(IPS)) impact of an application on QEMU-KVM and bare metal using the
performance monitoring tools perfmon and perf. The performance counter
used for measuring this is "instructions" and "instructions_retired" on
perf and perfmon, respectively. The application is a 'stress'
microbenchmark that does not produce any last-level cache (LLC) misses.
I account for the performance of the application on QEMU-KVM using the
following procedure:
A) Record the IPS of the QEMU-KVM (using its PID) on perf and perfmon
when idle.
B) Launch the stress benchmark and record IPS.
C) The difference between B and A should give the performance of the
application, since KVM does not inherently provide a technique to
collect this statistic.
However the performance of the application on bare metal is recorded as
is using the per-thread session available in perfmon and perf.
The sampling frequency to collect the statistics is 1 second.
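[Editor's note, not from the original mail: one detail that may explain the low in-VM numbers below is that perf by default excludes guest-mode execution when counting from the host, so attaching to the QEMU PID undercounts the guest's instructions. perf event modifiers include ':G' (count guest mode) and ':H' (count host mode); <qemu-pid> is a placeholder.]

```shell
# Count guest-mode instructions for one second against the QEMU process
perf stat -e instructions:G -p <qemu-pid> sleep 1
```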
The results obtained on bare metal are on the order of billions. The
results from KVM, on the other hand, show no difference between idle
and running an application, and are on the order of hundreds of
thousands.
IPS_KVM_IDLE LLC_KVM_IDLE | IPS_KVM_STRESS LLC_KVM_STRESS | IPS_bare_metal_STRESS LLC_bare_metal_STRESS
47902 2495   | 25157 600    | 8116556011 3805
23437 758    | 48762 1032   | 8166564000 2140
834370 9954  | 543690 2379  | 8326333629 234
25261 662    | 49139 997    | 8057453733 1399
47998 2767   | 23773 600    | 8214761042 494
542039 2299  | 843266 2231  | 8075529487 4593
47982 2781   | 25249 603    | 8327678364 443
23833 794    | 49138 622    | 8058893864 1870
832913 9978  | 544567 2204  | 8210889659 394
I also tried using perf-kvm using the following command:
sudo perf kvm --guest --guestkallsyms=guest-kallsyms
--guestmodules=guest-modules record -a -o perf.data
but it failed with the following error:
"Couldn't record guest kernel [0]'s reference relocation symbol."
and no events were recorded.
I describe details of QEMU and the host machine (bare metal) below.
QEMU emulator version 2.0.0 in conjunction with KVM as the virtual
environment and libvirt 1.2.2
The guest machine is running kernel version 3.19.0-15-generic and the
host machine is running version 3.14.5-031405-generic on a x86_64 machine
I set up the guest machine with an Intel Sandy Bridge processor (model
name: Intel Xeon E312xx) with the following flags:
sockets=2,cores=2,threads=1 and a 4 MB cache.
More details:
cpu family : 6
model : 42
max freq : 2394.560 MHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx rdtscp lm
constant_tsc rep_good nopl eagerfpu pni pclmulqdq vmx ssse3 cx16 pcid
sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes xsave avx hypervisor
lahf_lm vnmi ept xsaveopt
The host machine is an Intel Sandy Bridge processor (Intel(R) Core(TM)
i7-2760QM CPU @ 2.40GHz) with 4 cores and a 6 MB cache.
cpu family : 6
model : 42
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca
cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall
nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology
nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx
smx est tm2 ssse3 cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt
tsc_deadline_timer aes xsave avx lahf_lm ida arat epb xsaveopt pln pts
dtherm tpr_shadow vnmi flexpriority ept vpid
Thanks
Rajiv Nishtala
[libvirt-users] qemu-img snapshots configuration
by sergio
Hello.
What is the difference in external snapshot configuration with and
without <domainsnapshot> ?
What is the difference between vda, vdb and vdc in the following example?
(From https://libvirt.org/formatsnapshot.html)
<domainsnapshot>
...
<memory snapshot='no'/>
<disks>
<disk name='vda' snapshot='external'>
<driver type='qcow2'/>
<source file='/path/vda-delta.qcow2'/>
</disk>
<disk name='vdb' snapshot='no'/>
<disk name='vdc' snapshot='no'/>
</disks>
<domain>
...
<devices>
<disk type='file' device='disk'>
<driver name='qemu' type='raw'/>
<source file='/path/vda-img.qcow2'/>
<target dev='vda' bus='virtio'/>
</disk>
<disk type='file' device='disk' snapshot='external'>
<driver name='qemu' type='raw'/>
<source file='/path/vdb-img.qcow2'/>
<target dev='vdb' bus='virtio'/>
</disk>
<disk type='file' device='disk' snapshot='external'>
<driver name='qemu' type='raw'/>
<source file='/path/vdc-img.qcow2'/>
<backingStore type='file'>
<format type='qcow2'/>
<source file='/path/vdc-delta.qcow2'/>
</backingStore>
<target dev='vdc' bus='virtio'/>
</disk>
...
</devices>
</domain>
</domainsnapshot>
--
sergio.
[libvirt-users] Compiling Libvirt-snmp on VMware Vsphere ESXi
by Aleem Akhtar
How can I compile and run Libvirt-snmp on VMware vSphere ESXi? Can
somebody guide me through a step-by-step procedure?
I tried to follow the steps on the libvirt website
<https://libvirt.org/compiling.html>, but I think they are for Linux
distributions, because I could not execute the ./configure command.
After searching on Google I found a similar question
<https://communities.vmware.com/thread/514078> which says I need to
create a VIB and then install that VIB. I have no idea about creating
a VIB. Can somebody please guide me on this?
Regards,
Aleem Akhtar
Lecturer, Computer Science
Punjab Group of Colleges
P. D. Khan Campus
Email: aleem.akhtar(a)seecs.nust.edu.pk
Website: aleemakhtar.com