Re: [libvirt-users] Virsh snapshots
by Juraj Melo
Thanks for your answer,
I am trying to make a snapshot of the whole virtual machine (disk, CPU
state, memory), and I need to take that snapshot in a matter of seconds.
I have already tried creating a snapshot of the disk, which is not a
problem: I use the qcow2 format and create a new disk image that uses the
original image as its backing file. But I am still not able to associate
the saved state of the VM with the new disk.
I have found some commands for creating snapshots in the libvirt API, so
one way to accomplish my task would be to write a small utility around
those commands and create the snapshot of the VM in RAM - I hope that
would be faster. But I wonder whether virsh already contains similar
functionality, so I won't need to program it again.
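For the record, both halves of this can be sketched from the shell. The
following is only a hedged illustration - the domain name vm1, the disk
target vda and all file paths are placeholders, and which options are
available depends on the libvirt/qemu versions involved:

  # the manual copy-on-write overlay described above
  qemu-img create -f qcow2 -b /var/lib/libvirt/images/vm1.qcow2 \
      /var/lib/libvirt/images/vm1-overlay.qcow2

  # the same idea driven by libvirt: an external, disk-only snapshot;
  # the guest keeps running and new writes go to the overlay file
  virsh snapshot-create-as vm1 disksnap --disk-only --atomic \
      --diskspec vda,snapshot=external,file=/var/lib/libvirt/images/vm1-snap.qcow2

  # a full checkpoint (disk + RAM/CPU state) written to external files
  virsh snapshot-create-as vm1 fullsnap --live \
      --memspec file=/var/lib/libvirt/images/vm1-mem.state,snapshot=external \
      --diskspec vda,snapshot=external

virsh snapshot-list, snapshot-revert and blockcommit/blockpull then manage
the result, although reverting to external snapshots was still quite
limited in libvirt of that era.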
On 2.12.2013 19:32, Eric Blake wrote:
On 12/02/2013 09:04 AM, Juraj Melo wrote:
> Hello,
>
> I am working on my PhD thesis and it would be really helpful if someone
> could advise me whether Virsh can create snapshots of VMs using
> copy-on-write.
What are you trying to copy-on-write? Is it just disk state, or VM
memory state? Do you have corresponding qemu commands that you are
trying to figure out if virsh maps to those same commands? There's a
lot of flexibility in the virsh snapshot XML, but not all of it has been
wired up yet (in part because it's still a moving target for what
upstream qemu supports), so knowing more details about what you are
trying to do will help us better answer whether it can be done now and
if so with which commands.
[libvirt-users] libvirt, lvm thin provisioning
by Michael Mol
I know that lvm supports thin provisioning, and I think I have a pretty
good grasp on how that works. Does libvirt support lvm thin
provisioning and thin snapshots?
I know that in order to set up lvm thin provisioning by hand, I have to
create a thin-provisioning pool within the volume group, and then
thin-provisioned logical volumes and thin-provisioned snapshots can be
created within that thin-provision pool.
I also know that it's possible to have more than one thin-provisioning
pool within the same volume group, which tells me that in order to
properly set up any lvm-aware application to use thin-provisioning, I
may need to tell it which thin-provisioning pool it should create LVs
and snapshots in.
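For reference, the by-hand setup described above looks roughly like this
(the volume group, pool and LV names are invented):

  # create a 100G thin pool inside volume group vg0
  lvcreate -L 100G -T vg0/tpool0

  # create a 40G thin-provisioned LV backed by that pool
  lvcreate -V 40G -T vg0/tpool0 -n guest1

  # thin snapshot of that LV (no size needed; it shares the pool)
  lvcreate -s -n guest1-snap vg0/guest1

(Thin snapshots get the activation-skip flag by default, so activating one
later needs lvchange -ay -K vg0/guest1-snap.)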
When I look at the example here:
http://libvirt.org/storage.html#StorageBackendLogical
I don't see anything in particular that tells me it supports building off
a thin-provisioning pool if I specify one by name.
Checking here:
http://libvirt.org/formatstorage.html#StoragePool
I again don't see anything specific about logical thick vs thin
provisioning. I would expect to see *something*, as there are
performance consequences when making such choices, so I'm again not
sure libvirt recognizes a difference.
So is this something that libvirt can do? And is there good
documentation somewhere on the subject?
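To make the gap concrete: the logical-backend pool definition from those
pages only names a volume group; there is no element that would select a
particular thin pool inside it. A sketch, with an invented VG name vg0:

  # pool.xml - the documented logical-backend definition; note that
  # nothing in it picks a thin pool:
  #
  #   <pool type='logical'>
  #     <name>vg0</name>
  #     <source>
  #       <name>vg0</name>
  #       <format type='lvm2'/>
  #     </source>
  #     <target>
  #       <path>/dev/vg0</path>
  #     </target>
  #   </pool>
  #
  virsh pool-define pool.xml
  virsh pool-start vg0
  virsh vol-list vg0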
[libvirt-users] cputune shares with multiple cpu and pinning
by Edoardo Comar
Hi,
I have found the cpu-time partitioning based on cpu share weights not
very intuitive.
On RHEL64, I deployed two qemu/kvm VMs:
VM1 with 1 vcpu and 512 cpu shares
VM2 with 2 vcpus and 1024 cpu shares
I pinned their vcpus to specific host pcpus:
VM1 vcpu 0 to host pcpu1
VM2 vcpu 0 to host pcpu1, VM2 vcpu 1 to host pcpu2
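For reproducibility, that setup corresponds to roughly the following
(a sketch, assuming the domains are literally named VM1 and VM2):

  # weights: cpu_shares is a relative cgroup weight between running domains
  virsh schedinfo VM1 --live --set cpu_shares=512
  virsh schedinfo VM2 --live --set cpu_shares=1024

  # pinning: VM1 vcpu0 -> pcpu1; VM2 vcpu0 -> pcpu1, vcpu1 -> pcpu2
  virsh vcpupin VM1 0 1 --live
  virsh vcpupin VM2 0 1 --live
  virsh vcpupin VM2 1 2 --live

  # or, equivalently, in the <cputune> section of VM2's domain XML:
  #   <cputune>
  #     <shares>1024</shares>
  #     <vcpupin vcpu='0' cpuset='1'/>
  #     <vcpupin vcpu='1' cpuset='2'/>
  #   </cputune>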
Inside the VMs I executed a simple process that consumes all available
cpu, e.g.
# cat /dev/zero > /dev/null
On the host, using 'top', the reported cpu usage per qemu process is:

with 1 process in VM1 and 1 process on vcpu1 in VM2:
VM1 = 100%, VM2 = 100%
explanation - without contention on pcpu1, the shares are irrelevant
(that's ok!)

with 1 process in VM1 and 1 process on vcpu0 in VM2:
VM1 = 33%, VM2 = 66%
explanation - with contention on pcpu1, host cpu usage is partitioned
according to the shares, 512/1536 vs 1024/1536 (that's ok!)

with 1 process in VM1 and 2 processes in VM2 (one on vcpu0 and one on
vcpu1, launched with taskset):
VM1 = 50%, VM2 = 150%

This result was a bit unexpected to me: adding load on VM2 resulted in
more cpu time being allowed for VM1 - can anyone please explain the logic?

Changing the pinning so that both VMs can use the host pcpuset 1-2, the
CPU usage becomes:
100% / 100% when VM2 is executing one task only (on whichever vcpu)
50% / 150% when VM2 is executing two tasks.

Again, not intuitive just by looking at the share weights, I think.
--------------------------------------------------
Edoardo Comar
WebSphere Application Service Platform for Networks (ASPN)
ecomar(a)uk.ibm.com
+44 (0)1962 81 5576
IBM UK Ltd, Hursley Park, SO21 2JN
[libvirt-users] virsh iface-list fail
by Slash Sda
Hi all, I'm using libvirt 1.1.1 on my Ubuntu 13.10 minimal install.
When I try a virsh iface-list I get this message:
error: Failed to list interfaces
error: internal error: failed to get number of host interfaces
I created a bridge br0 on top of my eth0.
In virt-manager I don't see the host's interfaces either.
What could it be?
Thanks
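A hedged guess: at the time, a common cause of that particular error was a
libvirt build whose host-interface driver had no working backend (netcf),
rather than anything being wrong with br0 itself. Some things worth
checking:

  # does listing inactive interfaces fail the same way?
  virsh iface-list --all

  # is a netcf backend present in this libvirt build at all?
  dpkg -l | grep -i netcf

  # the bridge itself can be verified without libvirt
  ip link show br0
  brctl show br0

Note that guests do not need the interface driver in order to use the
bridge: <interface type='bridge'> with <source bridge='br0'/> in the
domain XML works regardless of what iface-list reports.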
[libvirt-users] error: Failed to start domain
by Stefan G. Weichinger
On a Gentoo server with libvirt-1.1.3 I am having problems starting VMs.
When I do:
# virsh start vm180
error: Failed to start domain vm180
error: Input/output error
This happens with 1.1.3 and 1.1.4 (I rebuilt the packages and restarted
the libvirtd.service).
# journalctl -f -u libvirtd.service
shows:
Dec 03 17:32:56 jupiter libvirtd[24020]: Input/output error
Dec 03 17:33:47 jupiter libvirtd[24020]: Input/output error
The log for the specific VM shows:
2013-12-03 16:33:47.004+0000: starting up
LC_ALL=C
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -name vm180 -S -machine
pc-i440fx-1.5,accel=kvm,usb=off -cpu
Opteron_G5,+bmi1,+perfctr_nb,+perfctr_core,+topoext,+nodeid_msr,+tce,+lwp,+wdt,+skinit,+ibs,+osvw,+cr8legacy,+extapic,+cmp_legacy,+fxsr_opt,+mmxext,+osxsave,+monitor,+ht,+vme
-m 3954 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -uuid
646f757f-2ab7-4fdf-2140-a79396200c6f -no-user-config -nodefaults
-chardev
socket,id=charmonitor,path=/var/lib/libvirt/qemu/vm180.monitor,server,nowait
-mon chardev=charmonitor,id=monitor,mode=control -rtc
base=localtime,clock=vm,driftfix=slew -no-kvm-pit-reinjection -no-hpet
-no-shutdown -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
file=/dev/VG01/vm180_disk0,if=none,id=drive-virtio-disk0,format=raw,cache=none,aio=threads
-device
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x7,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
-drive
file=/mnt/tests/grml64-full_2013.02.iso,if=none,id=drive-ide0-1-0,readonly=on,format=raw
-device ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -netdev
tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:8d:3b:af,bus=pci.0,addr=0x3
-chardev pty,id=charserial0 -device
isa-serial,chardev=charserial0,id=serial0 -vnc 127.0.0.1:5 -vga cirrus
-device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
2013-12-03 16:33:47.013+0000: shutting down
libvirt: error : libvirtd quit during handshake: Input/output error
What can I do about this issue?
Thanks for any pointers, Stefan
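"Input/output error" on its own says very little, so it may help to turn
up libvirtd logging and compare the daemon log with the per-domain log. A
sketch, using the stock paths and settings:

  # /etc/libvirt/libvirtd.conf - temporarily raise verbosity:
  #   log_level = 1
  #   log_outputs = "1:file:/var/log/libvirt/libvirtd.log"
  systemctl restart libvirtd.service

  # reproduce the failure, then inspect both logs
  virsh start vm180
  tail -n 100 /var/log/libvirt/libvirtd.log
  tail -n 50 /var/log/libvirt/qemu/vm180.log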
[libvirt-users] layer2 tunnel between guests across hosts
by sujay g
Hi,
Is it possible to set up layer-2 tunnels between guests across hosts
using vrbr, i.e. with the guests themselves unaware of the tunnel?
On Host 1:
  guest(a) --- vrbr0 ------+
                           |
                           |
On Host 2:                 |
  guest(b) --- vrbr0 ------+
Has there been any work done around it?
TIA,
-Sujay
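libvirt itself does not create inter-host tunnels, but the bridge the
guests sit on can be extended by hand. A rough sketch of one approach
(GRE tap interfaces enslaved to the existing bridge; the addresses and
the bridge name vrbr0 are placeholders):

  # on host 1 (10.0.0.1), tunnelling to host 2 (10.0.0.2)
  ip link add gretap1 type gretap local 10.0.0.1 remote 10.0.0.2
  ip link set gretap1 up
  brctl addif vrbr0 gretap1

  # on host 2, the mirror image
  ip link add gretap1 type gretap local 10.0.0.2 remote 10.0.0.1
  ip link set gretap1 up
  brctl addif vrbr0 gretap1

If the bridge is an Open vSwitch one instead, the same effect comes from
ovs-vsctl add-port <bridge> gre0 -- set interface gre0 type=gre
options:remote_ip=<peer>.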
[libvirt-users] Problem booting guest with more than 8 disks
by Jon
Hello All,
On my host machine, I'm using the following kvm, libvirt, ceph and ubuntu
versions:
>> QEMU emulator version 1.5.0 (Debian 1.5.0+dfsg-3ubuntu5), Copyright (c) 2003-2008 Fabrice Bellard
>> root@kitt:~# virsh --version: 1.1.1
>> ceph version 0.67.4 (ad85b8bfafea6232d64cb7ba76a8b6e8252fa0c7)
>> VERSION="13.10, Saucy Salamander"
>> Linux kitt 3.11.0-13-generic #20-Ubuntu SMP Wed Oct 23 07:38:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
I'm encountering an issue when attempting to boot a VM with eight or more
disks: the boot disk is not found and the VM fails to boot with the error
"No bootable device".
This happens with Arch Linux, Ubuntu, and Fedora images.
I'm working on a tool [1] to build virtual machines. It creates rbd images
named $hostname-os and $hostname-storageX (where X is the "additional disk
number" starting at 0, so a VM with seven disks would have one image named
-os and six named -storage{0..5}); the -os disk is a clone of a protected
COW RBD snapshot. The tool then generates the XML and uses Sys::Virt to
define and boot the virtual machine.
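(For context, that clone step is the standard RBD copy-on-write workflow,
roughly as below; the golden-image name is invented here:)

  # one-time: snapshot the golden image and protect the snapshot
  rbd snap create libvirt-pool/arch-golden@base
  rbd snap protect libvirt-pool/arch-golden@base

  # per-VM: the -os disk is a copy-on-write clone of that snapshot
  rbd clone libvirt-pool/arch-golden@base libvirt-pool/arch_test2-os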
All OS types use grub; Ubuntu and Fedora use an MBR and Arch uses GPT. I'm
not certain that matters, as VMs with fewer disks boot with no problem.
I've included both the generated and the dumped XML for a booting and a
non-booting VM below. The only consistent difference between the two
deployments is the addition of $hostname-storage6.
Is there perhaps a way to specify which disk is the boot device?
I appreciate any assistance.
Thanks,
Jon A
[1] https://github.com/three18ti/Build-VM
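To the direct question: yes - libvirt can mark the boot disk explicitly
with a per-device boot order instead of the machine-wide <boot dev='hd'/>
(the two forms cannot be mixed). A hedged sketch of the edit:

  virsh edit arch_test2

  # 1) remove this line from the <os> section:
  #      <boot dev='hd'/>
  #
  # 2) give the os disk (and only that disk) an explicit boot order:
  #      <disk type='network' device='disk'>
  #        ...
  #        <target dev='vda' bus='virtio'/>
  #        <boot order='1'/>
  #      </disk>

libvirt turns that into a qemu bootindex, so the firmware no longer has to
guess among eight virtio disks - which may well be what tips the eighth
disk over into "No bootable device".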
Non-booting VM:
#########################################################
# Generated:
| <domain type='kvm'>
| <name>arch_test2</name>
| <memory unit='KiB'>4194304</memory>
| <currentMemory unit='KiB'>4194304</currentMemory>
| <vcpu placement='static'>2</vcpu>
| <os>
| <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
| <boot dev='hd'/>
| <bootmenu enable='no'/>
| </os>
| <features>
| <acpi/>
| <apic/>
| <pae/>
| </features>
| <clock offset='utc'/>
| <on_poweroff>destroy</on_poweroff>
| <on_reboot>restart</on_reboot>
| <on_crash>restart</on_crash>
| <devices>
| <emulator>/usr/bin/kvm</emulator>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-os'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vda' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage0'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdb' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage1'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdc' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage2'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdd' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage3'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vde' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage4'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdf' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage5'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdg' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage6'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdh' bus='virtio' />
| </disk>
| <interface type='bridge'>
| <source bridge='ovsbr0'/>
| <virtualport type='openvswitch'>
| </virtualport>
| <model type='virtio'/>
| </interface>
|
| <controller type='usb' index='0'>
| </controller>
| <controller type='ide' index='0'>
| </controller>
| <controller type='virtio-serial' index='0'>
| </controller>
| <serial type='pty'>
| <target port='0'/>
| </serial>
| <console type='pty'>
| <target type='serial' port='0'/>
| </console>
| <input type='mouse' bus='ps2'/>
| <graphics type='vnc' port='-1' autoport='yes'/>
| <sound model='ich6'>
| </sound>
|
| <video>
| <model type='cirrus' vram='9216' heads='1'/>
| </video>
|
| <memballoon model='virtio'>
| </memballoon>
|
| </devices>
| </domain>
#####
-------------------------------------------------------------------------------------------------------------
#####
# Dumped
| <domain type='kvm' id='171'>
| <name>arch_test2</name>
| <uuid>fcff77f4-0a93-43f9-bf3a-e5863b787400</uuid>
| <memory unit='KiB'>4194304</memory>
| <currentMemory unit='KiB'>4194304</currentMemory>
| <vcpu placement='static'>2</vcpu>
| <resource>
| <partition>/machine</partition>
| </resource>
| <os>
| <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
| <boot dev='hd'/>
| <bootmenu enable='no'/>
| </os>
| <features>
| <acpi/>
| <apic/>
| <pae/>
| </features>
| <clock offset='utc'/>
| <on_poweroff>destroy</on_poweroff>
| <on_reboot>restart</on_reboot>
| <on_crash>restart</on_crash>
| <devices>
| <emulator>/usr/bin/kvm</emulator>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-os'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vda' bus='virtio'/>
| <alias name='virtio-disk0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage0'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdb' bus='virtio'/>
| <alias name='virtio-disk1'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage1'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdc' bus='virtio'/>
| <alias name='virtio-disk2'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x08'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage2'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdd' bus='virtio'/>
| <alias name='virtio-disk3'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x09'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage3'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vde' bus='virtio'/>
| <alias name='virtio-disk4'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0a'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage4'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdf' bus='virtio'/>
| <alias name='virtio-disk5'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0b'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage5'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdg' bus='virtio'/>
| <alias name='virtio-disk6'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0c'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test2-storage6'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdh' bus='virtio'/>
| <alias name='virtio-disk7'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0d'
function='0x0'/>
| </disk>
| <controller type='usb' index='0'>
| <alias name='usb0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
| </controller>
| <controller type='ide' index='0'>
| <alias name='ide0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
| </controller>
| <controller type='virtio-serial' index='0'>
| <alias name='virtio-serial0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
| </controller>
| <controller type='pci' index='0' model='pci-root'>
| <alias name='pci0'/>
| </controller>
| <interface type='bridge'>
| <mac address='52:54:00:c2:64:72'/>
| <source bridge='ovsbr0'/>
| <virtualport type='openvswitch'>
| <parameters interfaceid='51324f0c-e98f-419e-aa82-ef9942c27eea'/>
| </virtualport>
| <target dev='vnet4'/>
| <model type='virtio'/>
| <alias name='net0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
| </interface>
| <serial type='pty'>
| <source path='/dev/pts/13'/>
| <target port='0'/>
| <alias name='serial0'/>
| </serial>
| <console type='pty' tty='/dev/pts/13'>
| <source path='/dev/pts/13'/>
| <target type='serial' port='0'/>
| <alias name='serial0'/>
| </console>
| <input type='mouse' bus='ps2'/>
| <graphics type='vnc' port='5904' autoport='yes' listen='127.0.0.1'>
| <listen type='address' address='127.0.0.1'/>
| </graphics>
| <sound model='ich6'>
| <alias name='sound0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
| </sound>
| <video>
| <model type='cirrus' vram='9216' heads='1'/>
| <alias name='video0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
| </video>
| <memballoon model='virtio'>
| <alias name='balloon0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0e'
function='0x0'/>
| </memballoon>
| </devices>
| <seclabel type='dynamic' model='apparmor' relabel='yes'>
| <label>libvirt-fcff77f4-0a93-43f9-bf3a-e5863b787400</label>
| <imagelabel>libvirt-fcff77f4-0a93-43f9-bf3a-e5863b787400</imagelabel>
| </seclabel>
| </domain>
--------------------------------------------------------------------------------------------------------------
###############################################################
###############################################################
# Booting VM:
# Generated:
| <domain type='kvm'>
| <name>arch_test</name>
| <memory unit='KiB'>4194304</memory>
| <currentMemory unit='KiB'>4194304</currentMemory>
| <vcpu placement='static'>2</vcpu>
| <os>
| <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
| <boot dev='hd'/>
| <bootmenu enable='no'/>
| </os>
| <features>
| <acpi/>
| <apic/>
| <pae/>
| </features>
| <clock offset='utc'/>
| <on_poweroff>destroy</on_poweroff>
| <on_reboot>restart</on_reboot>
| <on_crash>restart</on_crash>
| <devices>
| <emulator>/usr/bin/kvm</emulator>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-os'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vda' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage0'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdb' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage1'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdc' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage2'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdd' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage3'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vde' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage4'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdf' bus='virtio' />
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage5'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdg' bus='virtio' />
| </disk>
| <interface type='bridge'>
| <source bridge='ovsbr0'/>
| <virtualport type='openvswitch'>
| </virtualport>
| <model type='virtio'/>
| </interface>
|
| <controller type='usb' index='0'>
| </controller>
| <controller type='ide' index='0'>
| </controller>
| <controller type='virtio-serial' index='0'>
| </controller>
| <serial type='pty'>
| <target port='0'/>
| </serial>
| <console type='pty'>
| <target type='serial' port='0'/>
| </console>
| <input type='mouse' bus='ps2'/>
| <graphics type='vnc' port='-1' autoport='yes'/>
| <sound model='ich6'>
| </sound>
|
| <video>
| <model type='cirrus' vram='9216' heads='1'/>
| </video>
|
| <memballoon model='virtio'>
| </memballoon>
|
| </devices>
| </domain>
#####
----------------------------------------------------------------------------------------------------------------
#####
# Dumped
| <domain type='kvm' id='169'>
| <name>arch_test</name>
| <uuid>7c3b44dc-ff91-413a-b1cf-dfbe2480d44e</uuid>
| <memory unit='KiB'>4194304</memory>
| <currentMemory unit='KiB'>4194304</currentMemory>
| <vcpu placement='static'>2</vcpu>
| <resource>
| <partition>/machine</partition>
| </resource>
| <os>
| <type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
| <boot dev='hd'/>
| <bootmenu enable='no'/>
| </os>
| <features>
| <acpi/>
| <apic/>
| <pae/>
| </features>
| <clock offset='utc'/>
| <on_poweroff>destroy</on_poweroff>
| <on_reboot>restart</on_reboot>
| <on_crash>restart</on_crash>
| <devices>
| <emulator>/usr/bin/kvm</emulator>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-os'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vda' bus='virtio'/>
| <alias name='virtio-disk0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage0'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdb' bus='virtio'/>
| <alias name='virtio-disk1'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x07'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage1'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdc' bus='virtio'/>
| <alias name='virtio-disk2'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x08'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage2'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdd' bus='virtio'/>
| <alias name='virtio-disk3'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x09'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage3'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vde' bus='virtio'/>
| <alias name='virtio-disk4'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0a'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage4'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdf' bus='virtio'/>
| <alias name='virtio-disk5'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0b'
function='0x0'/>
| </disk>
| <disk type='network' device='disk'>
| <driver name='qemu'/>
| <source protocol='rbd' name='libvirt-pool/arch_test-storage5'>
| <host name='192.168.0.35' port='6789'/>
| <host name='192.168.0.2' port='6789'/>
| <host name='192.168.0.40' port='6789'/>
| </source>
| <target dev='vdg' bus='virtio'/>
| <alias name='virtio-disk6'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0c'
function='0x0'/>
| </disk>
| <controller type='usb' index='0'>
| <alias name='usb0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x2'/>
| </controller>
| <controller type='ide' index='0'>
| <alias name='ide0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x1'/>
| </controller>
| <controller type='virtio-serial' index='0'>
| <alias name='virtio-serial0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
| </controller>
| <controller type='pci' index='0' model='pci-root'>
| <alias name='pci0'/>
| </controller>
| <interface type='bridge'>
| <mac address='52:54:00:d0:da:73'/>
| <source bridge='ovsbr0'/>
| <virtualport type='openvswitch'>
| <parameters interfaceid='f1c41c6e-8ca0-4c34-b1df-053c9a7976bb'/>
| </virtualport>
| <target dev='vnet2'/>
| <model type='virtio'/>
| <alias name='net0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
| </interface>
| <serial type='pty'>
| <source path='/dev/pts/4'/>
| <target port='0'/>
| <alias name='serial0'/>
| </serial>
| <console type='pty' tty='/dev/pts/4'>
| <source path='/dev/pts/4'/>
| <target type='serial' port='0'/>
| <alias name='serial0'/>
| </console>
| <input type='mouse' bus='ps2'/>
| <graphics type='vnc' port='5902' autoport='yes' listen='127.0.0.1'>
| <listen type='address' address='127.0.0.1'/>
| </graphics>
| <sound model='ich6'>
| <alias name='sound0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x04'
function='0x0'/>
| </sound>
| <video>
| <model type='cirrus' vram='9216' heads='1'/>
| <alias name='video0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0'/>
| </video>
| <memballoon model='virtio'>
| <alias name='balloon0'/>
| <address type='pci' domain='0x0000' bus='0x00' slot='0x0d'
function='0x0'/>
| </memballoon>
| </devices>
| <seclabel type='dynamic' model='apparmor' relabel='yes'>
| <label>libvirt-7c3b44dc-ff91-413a-b1cf-dfbe2480d44e</label>
| <imagelabel>libvirt-7c3b44dc-ff91-413a-b1cf-dfbe2480d44e</imagelabel>
| </seclabel>
| </domain>
[libvirt-users] virsh detach typo
by Mauricio Tavares
[root@vmhost vms]# virsh detach-disk puppet vdb
error: No found disk whose source path or target is vdb
[root@vmhost vms]# virsh --version
0.10.2
[root@vmhost vms]#
This has probably been fixed already, but if not: "No found disk" would
read better as "No disk found".