[libvirt-users] [bump] Storage pool created on the "wrong" vg
by Alaric Haag
________________________________
From: Alaric Haag
Sent: Tuesday, March 19, 2013 2:39 PM
To: libvirt-users(a)redhat.com
Subject: Storage pool created on the "wrong" vg
Hello all,
(Hopefully, I am posing this question to the correct list!)
I seem to have mis-clicked through the creation of an LVM-based storage pool in virt-manager, and it is using a volume group containing LVs dedicated to the root/swap filesystems. As these are active LVs, there seems to be no way to remove this pool. Is that true?
My reading thus far suggests that, with LVM storage units, you can't delete them from the pool without REALLY deleting them from the VG.
I'm "new enough" to KVM to be missing something obvious, but I'd be pretty shocked if a "structure" like a storage pool can't be deconstructed.
Kind regards,
Alaric
------------------------
Here's a bit more detail in the hope someone might help:
I clicked through the creation of a storage pool, and accidentally used the wrong volume group, so I have:
virsh # vol-list vm_pool
Name                 Path
-----------------------------------------
lv_root              /dev/vg_01/lv_root
lv_swap              /dev/vg_01/lv_swap
These two LVs, in group vg_01, are the / and swap partitions of the host system. I meant for the pool to reside on vg_vm, a different volume group.
Can I safely run:
virsh # vol-delete lv_root --pool=vm_pool
virsh # vol-delete lv_swap --pool=vm_pool
virsh # pool-delete vm_pool
or do the first two steps actually destroy the LVs in vg_01, bringing my system crashing down? The link below strongly suggests I cannot delete these logical volumes from the pool, but it is a "draft" for openSUSE. (I'm running libvirt 0.10.2-18 on RHEL 6.)
http://doc.opensuse.org/products/draft/SLES/SLES-kvm_sd_draft/cha.libvirt...
If that's the case, is there ANY way to undo my mistake and remove this pool?
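For reference, a non-destructive sketch based on the documented virsh semantics (not tested on this exact setup): pool-destroy only deactivates the pool and pool-undefine only removes its definition from libvirt, so neither deletes data the way vol-delete/pool-delete would. Note that stopping a logical pool may attempt "vgchange -an" on vg_01, which should simply fail (harmlessly) while the root/swap LVs are in use:
virsh # pool-destroy vm_pool
virsh # pool-undefine vm_pool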
I am most grateful for any guidance!
Alaric
11 years, 8 months
Re: [libvirt-users] remote connection issue 'virsh -c qemu+ssh:///root@localhost/system list'
by Javi Legido
Hi Olivia.
Another thing you should check is the socat package.
In my documented example the hypervisor running KVM is Debian squeeze,
and I had to proceed this way:
sudo aptitude update; sudo aptitude install netcat socat -R
sudo vim /bin/netcatsocket
#!/bin/bash
# libvirt runs "nc ... -U <socket>" on the remote end; BusyBox nc has no -U,
# so relay stdin/stdout to the UNIX socket path passed as the second argument.
socat - unix-client:$2
sudo chmod +x /bin/netcatsocket
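If you prefer not to replace nc on the hypervisor, libvirt's remote driver also accepts (as far as I know) a "netcat" URI parameter naming the command to run on the remote side; a sketch reusing the wrapper path and the host from this thread:
virsh -c 'qemu+ssh://root@10.193.20.109/system?netcat=/bin/netcatsocket' list --all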
You can try it and see if it helps or not.
P.S.: looping the list back in; I replied only to Olivia before
Cheers
On 21/03/2013 12:47, "Yin Olivia-R63875" <r63875(a)freescale.com> wrote:
> Hi Javier,
>
> Thanks for your help.
> From the pages you provided, I guess what you suggest is as below:
> * Cannot recv data: Host key verification failed. : Connection reset by
> peer
>
> It happens when running:
>
> virt-manager -c qemu+ssh://usuario@hipervisor/system
>
> Solution: start an SSH session so that the hypervisor's keys are stored
> on the client:
>
> ssh usuario@hipervisor
>
> Exactly, I've logged in with ssh and then connected with virsh.
>
> user@x86:~$ ssh root(a)10.193.20.109
> root(a)10.193.20.109's password:
> root@ppc:~# exit
> logout
> Connection to 10.193.20.109 closed.
>
> user@x86:~$ virsh -c qemu+ssh://root@10.193.20.109/system list --all
> root(a)10.193.20.109's password:
> error: failed to connect to the hypervisor
> error: End of file while reading data: nc: invalid option -- 'U'
> BusyBox v1.19.4 (2013-03-08 13:08:18 CST) multi-call binary.
>
> Usage: nc [-iN] [-wN] [-l] [-p PORT] [-f FILE|IPADDR PORT] [-e PROG]:
> Input/output error
>
> It still fails to connect to the hypervisor, so it doesn't appear to be an SSH issue.
>
> Best Regards,
> Olivia
>
> > -----Original Message-----
> > From: javilegido(a)gmail.com [mailto:javilegido@gmail.com] On Behalf Of Javi Legido
> > Sent: Thursday, March 21, 2013 7:35 PM
> > To: Yin Olivia-R63875
> > Subject: Re: [libvirt-users] remote connection issue 'virsh -c
> > qemu+ssh:///root@localhost/system list'
> >
> > Hi Olivia.
> >
> > It's in spanish, but pretty easy to follow, maybe it helps you:
> >
> >
> > http://wiki.legido.com/doku.php?id=informatica:linux:virtualizacion:kvm#administrar_kvm_y_vm_desde_una_gui_en_el_cliente
> >
> > With this guide you should be able to connect from a client machine (for
> > instance your laptop) to the hypervisor (KVM) if both have SSH
> connectivity.
> >
> > Cheers.
> >
> > Javier
> >
> > 2013/3/21 Yin Olivia-R63875 <r63875(a)freescale.com>:
> > > Hi,
> > >
> > > I'm trying remote connection with qemu hypervisor on FSL PPC board.
> > >
> > > The libvirt server is the PPC board.
> > >
> > > root@ppc:~# ifconfig eth0 10.193.20.109
> > > root@ppc:~# libvirtd -d
> > > root@ppc:~# virsh -c qemu:///system define test.xml
> > > root@ppc:~# virsh -c qemu:///system start test
> > > root@ppc:~# virsh -c qemu:///system list --all
> > > Id    Name                           State
> > > ----------------------------------------------------
> > > 2     test                           running
> > >
> > > Connect from an X86 PC (Ubuntu 10.04) to the PPC board.
> > >
> > > user@x86:~$ virsh -c qemu+ssh://root@10.193.20.109/system list --all
> > > The authenticity of host '10.193.20.109 (10.193.20.109)' can't be
> > established.
> > > RSA key fingerprint is 2f:56:07:08:da:7d:ac:41:45:57:d2:12:15:19:67:e0.
> > > Are you sure you want to continue connecting (yes/no)? yes
> > > root(a)10.193.20.109's password:
> > > error: failed to connect to the hypervisor
> > > error: End of file while reading data: Warning: Permanently added
> > '10.193.20.109' (RSA) to the list of known hosts.
> > > nc: invalid option -- 'U'
> > > BusyBox v1.19.4 (2013-03-08 13:08:18 CST) multi-call binary.
> > >
> > > Usage: nc [-iN] [-wN] [-l] [-p PORT] [-f FILE|IPADDR PORT] [-e PROG]:
> > > Input/output error
> > >
> > >
> > >
> > > I tried to verify the remote connection on localhost. But it also
> failed
> > as below:
> > >
> > > root@mpc8572ds:~# virsh -c qemu+ssh:///root@localhost/system list --all
> > > root@localhost's password:
> > > error: failed to connect to the hypervisor
> > > error: End of file while reading data: nc: invalid option -- 'U'
> > > BusyBox v1.19.4 (2013-03-08 13:08:18 CST) multi-call binary.
> > >
> > > Usage: nc [-iN] [-wN] [-l] [-p PORT] [-f FILE|IPADDR PORT] [-e PROG]:
> > > Input/output error
> > >
> > >
> > > Could anyone give suggestion on this issue?
> > >
> > >
> > > Best Regards,
> > > Olivia
> > >
> > >
> > > _______________________________________________
> > > libvirt-users mailing list
> > > libvirt-users(a)redhat.com
> > > https://www.redhat.com/mailman/listinfo/libvirt-users
>
>
>
11 years, 8 months
[libvirt-users] Libvirt dead, pid still exists
by SHREE DUTH AWASTHI
Hi All,
Referring to the link below, I learned that previous versions of
libvirtd had this issue.
https://www.redhat.com/archives/libvirt-users/2012-August/msg00104.html
We are using libvirt-0.10.0 but we are facing the same issue.
Procedure to reproduce :
1. Log in to a domain, say CLA-0, using virsh.
(i) # virsh list --all
Id    Name                           State
----------------------------------------------------
4     CLA-0                          running
(ii) # virsh console 4
Connected to domain CLA-0
Escape character is ^]
2. Whenever we exit from the CLA-0 console by pressing Ctrl-],
we get a segmentation fault. No core file is generated, but we
have debugged it through gdb:
Program received signal SIGSEGV, Segmentation fault.
0x00007ffff5a96c71 in pthread_mutex_lock () from /lib64/libpthread.so.0
(gdb) bt
#0 0x00007ffff5a96c71 in pthread_mutex_lock () from /lib64/libpthread.so.0
#1 0x00007ffff78c8b97 in ?? () from /usr/lib64/libvirt.so.0
#2 0x00007ffff78b6297 in ?? () from /usr/lib64/libvirt.so.0
#3 0x00007ffff7865b65 in virHashRemoveEntry () from /usr/lib64/libvirt.so.0
#4 0x00007ffff78b60d0 in ?? () from /usr/lib64/libvirt.so.0
#5 0x00007ffff78c95f9 in ?? () from /usr/lib64/libvirt.so.0
#6 0x00007ffff78ce711 in virStreamAbort () from /usr/lib64/libvirt.so.0
#7 0x000000000042b546 in ?? ()
#8 0x000000000042b8f3 in ?? ()
#9 0x00007ffff78c983a in ?? () from /usr/lib64/libvirt.so.0
#10 0x00007ffff7843cc5 in virEventPollRunOnce () from
/usr/lib64/libvirt.so.0
#11 0x00007ffff78428f5 in virEventRunDefaultImpl () from
/usr/lib64/libvirt.so.0
#12 0x00007ffff793407d in virNetServerRun () from /usr/lib64/libvirt.so.0
#13 0x000000000040c483 in ?? ()
#14 0x00007ffff534cbbe in __libc_start_main () from /lib64/libc.so.6
---Type <return> to continue, or q <return> to quit---
#15 0x000000000040b139 in ?? ()
#16 0x00007fffffffe7d8 in ?? ()
#17 0x000000000000001c in ?? ()
#18 0x0000000000000001 in ?? ()
#19 0x00007fffffffea73 in ?? ()
#20 0x0000000000000000 in ?? ()
After this point libvirtd is dead.
# service libvirtd status
libvirtd dead but pid file exists
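Until the crash itself is fixed, a minimal recovery sketch (assuming the default pidfile path and a SysV-style init script; not taken from this report):
# rm -f /var/run/libvirtd.pid
# service libvirtd start
# service libvirtd status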
From the mail conversation between you and Andrey we came to know that a
patch has been provided for this problem.
Could you please share the fix with us?
Thanking you in anticipation.
Thanks and Regards,
Shree Duth Awasthi.
11 years, 8 months
[libvirt-users] About live migration with snapshots
by Chiang Hubert
Hello,
I'd like to do a live migration of a VM that has snapshots.
But it doesn't work: it fails with the message "cannot migrate domain with 1
snapshots".
I then traced the code (libvirt 0.9.8 to 1.0.3) and found the check
in src/qemu/qemu_migration.c @ Line 1395 - 1440 (libvirt 1.0.3).
It checks whether the VM has any snapshots.
I'm just curious about this limitation: why can't a VM with snapshots be
live-migrated?
What would happen if I skipped this check?
Is there any suggested way, or virsh command option, to do live
migration with snapshots?
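For reference, one workaround that is sometimes suggested (a sketch only: "test", "snap1" and "desthost" are placeholder names, and it assumes internal qcow2 snapshots on shared storage, so the snapshot data travels with the disk image): export the snapshot metadata, drop it from libvirt's view, migrate, then redefine it on the destination.
virsh snapshot-dumpxml test snap1 > snap1.xml
virsh snapshot-delete test snap1 --metadata
virsh migrate --live test qemu+ssh://desthost/system
virsh -c qemu+ssh://desthost/system snapshot-create test snap1.xml --redefine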
Thanks a lot.
Hubert
11 years, 8 months
[libvirt-users] SSD Trim needed? Physical Block Device Keep Track Of Guest FS Blocks?
by d hee
I am trying to figure out whether an SSD needs to receive TRIM commands originating from the KVM guest filesystem (strictly Linux ext4 in this case).
When a file is deleted inside a KVM guest (ext4 on a raw image), and an SSD is the underlying block device that the raw disk image resides on, does the SSD need to know which block(s) are marked for reuse within the raw image? Or, since the guest's raw image is just a container, does the physical drive have no concern with the filesystem blocks inside the guest? That is, would the physical disk only care about the image itself as a whole, e.g. if the whole image were deleted?
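For reference: without discard passthrough, deletions inside the guest are invisible to the SSD; the host only issues discards for its own filesystem events, such as deleting the whole image file. In libvirt/QEMU versions newer than those discussed in this archive, the relevant knob is (as far as I know) the discard attribute on the disk driver element; a sketch where the file name and SCSI target are placeholders, and the bus must be one that passes discards through (e.g. virtio-scsi), with the guest mounting ext4 with -o discard or running fstrim:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw' discard='unmap'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='sda' bus='scsi'/>
</disk>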
Thanks,
-Darin
11 years, 8 months
[libvirt-users] Storage pool created on the "wrong" vg
by Alaric Haag
Hello all,
(Hopefully, I am posing this question to the correct list!)
I seem to have mis-clicked through the creation of an LVM-based storage pool in virt-manager, and it is using a volume group containing LVs dedicated to the root/swap filesystems. As these are active LVs, there seems to be no way to remove this pool. Is that true?
My reading thus far suggests that, with LVM storage units, you can't delete them from the pool without REALLY deleting them from the VG.
I'm "new enough" to KVM to be missing something obvious, but I'd be pretty shocked if a "structure" like a storage pool can't be deconstructed.
Kind regards,
Alaric
11 years, 8 months
[libvirt-users] network bridge hairpin parameter support
by Zang MingJie
Hi:
We are currently hitting a problem where OpenStack may forget or
misconfigure the bridge hairpin setting, which causes some unexpected
behavior [1]. Although that bug is marked fixed, manually restarting
the VM with virsh still triggers the problem. Because the
bridge interface is created and managed by libvirt, we are considering
moving the hairpin configuration into libvirt.
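For context, the setting in question can be toggled by hand on a bridge port; a stopgap sketch in which br0 and vnet0 are placeholder names for the bridge and the VM's tap device:
echo 1 > /sys/class/net/br0/brif/vnet0/hairpin_mode
# or, with the iproute2 bridge tool:
bridge link set dev vnet0 hairpin on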
I have now started writing a patch to achieve this, but after digging into
the source code I found that only the kvm/qemu and uml backends could support
this parameter. Other backends manage their own bridges, and there seems to
be no way to modify them.
Is there any solution already in progress, or should we add
the new hairpin parameter to the libvirt configuration?
[1] https://bugs.launchpad.net/nova/+bug/933640
11 years, 8 months
[libvirt-users] libvirt rpm src
by Drew Morris
Hi Guys,
Do you know where the source RPMs for libvirt are? I can't find them in the
official downloads section.
--
Regards,
Drew Morris
11 years, 8 months
[libvirt-users] Unable to connect to console to recently cloned VM's
by Javi Legido
Hi all.
I was using libvirt under Debian squeeze (I guess it was
0.8.3-5+squeeze4 or something similar). I was able to clone machines
with the command below:
sudo virt-clone --connect=qemu:///system -o template -n
template_clone -f /var/lib/libvirt/images/template_clone.qcow2
After that I was able to start the VM and connect to the console:
sudo virsh console template_clone
I recently installed libvirt on Debian wheezy (libvirt-bin
0.9.12-11). I brought back the old "template" VM, and I successfully
created a new one.
What I CAN do:
1. Start both VM's (the template imported from the old installation
and the new one)
2. Connect to the console of both VM's
3. Clone them
What I CAN'T do:
1. Access the console of the cloned VMs; they stall at the prompt.
Just in case, here is the config of the VM:
<domain type='kvm'>
  <name>template-1.dev.jj.com</name>
  <uuid>f75dcbdc-8fc9-b50c-6612-c34dfebea16f</uuid>
  <memory unit='KiB'>1048576</memory>
  <currentMemory unit='KiB'>524288</currentMemory>
  <vcpu placement='static'>2</vcpu>
  <os>
    <type arch='x86_64' machine='pc-1.1'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
  <clock offset='utc'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none'/>
      <source file='/var/lib/libvirt/images/template-1.dev.jj.com.qcow2'/>
      <target dev='vda' bus='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:c0:c6:24'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <memballoon model='virtio'>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </memballoon>
  </devices>
</domain>
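For what it's worth, the XML above already exposes a pty-backed serial console; "virsh console" only shows something if the guest itself prints to that port. A sketch for a Debian guest (the usual default paths, not taken from this thread):
# in the guest's /etc/default/grub, then run update-grub:
GRUB_CMDLINE_LINUX="console=tty0 console=ttyS0,115200"
# in the guest's /etc/inittab (sysvinit), enable a login on ttyS0:
T0:23:respawn:/sbin/getty -L ttyS0 115200 vt100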
Is anybody else experiencing the same problem? I'm not even sure that
the cloned VM is working, since I'm able to start it up but cannot connect
to it.
Thanks.
Javier
11 years, 8 months