[libvirt-users] image transfer incomplete in live migration
by yli@cloudiya.com
Hi all,
I am using libvirt version 0.10.1 and qemu-kvm version 1.2.0 on RHEL 6.3.
When I use the libvirt API to do a block migration of a VM, it fails
frequently. I checked the migrated VM and found that the filesystem was
damaged. But sometimes it works fine; I don't know why, it seems to be random.
All failed tests share the same symptom: the image transfer is incomplete
during live migration. The image I used is 10GB in size; after a failed
migration only about 400MB to 3GB of it is present on the remote host.
Here is the code:
import libvirt
from xml.dom import minidom

name = "vm-01"
host = libvirt.open("qemu:///system")
vm = host.lookupByName(name)
vm_xml_file = "/etc/libvirt/qemu/%s.xml" % name
vm_xml = minidom.parse(vm_xml_file).documentElement.toxml()
remote_host = "qemu+ssh://host02/system"
remote = libvirt.open(remote_host)
try:
    vm.migrate2(remote, vm_xml, 89, name, None, 0)
except libvirt.libvirtError as e:
    print(str(e))
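For reference, the magic number 89 passed to migrate2() decodes into these virDomainMigrateFlags values (a sketch using the constants' documented values, not the poster's code):

```python
# virDomainMigrateFlags values (from libvirt's public API)
VIR_MIGRATE_LIVE            = 1   # keep the guest running while migrating
VIR_MIGRATE_PERSIST_DEST    = 8   # persist the domain on the destination
VIR_MIGRATE_UNDEFINE_SOURCE = 16  # undefine the domain on the source
VIR_MIGRATE_NON_SHARED_DISK = 64  # full copy of non-shared storage (block migration)

flags = (VIR_MIGRATE_LIVE
         | VIR_MIGRATE_PERSIST_DEST
         | VIR_MIGRATE_UNDEFINE_SOURCE
         | VIR_MIGRATE_NON_SHARED_DISK)
print(flags)  # prints 89, matching the flags argument above
```

So the call requests a live block migration that also persists the domain on the destination and undefines it on the source, which matches the virsh command tried later in this post.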
Before running the code I had already created an image for the migration on
the remote host 'host02'. Path, size, and permissions are all OK.
Here is the related log:
in source host 'host01':
2012-11-24 05:14:40.001+0000: 24888: warning : qemuDomainObjBeginJobInternal:838 : Cannot start job (query, none) for domain wrksp-776; current job is (async nested, migration out) owned by (24890, 24890)
2012-11-24 05:16:16.398+0000: 24888: error : qemuDomainObjBeginJobInternal:842 : Timed out during operation: cannot acquire state change lock
2012-11-24 05:16:21.317+0000: 24887: error : virNetSocketReadWire:1184 : End of file while reading data: Input/output error
in remote host 'host02':
2012-11-24 05:14:00.776+0000: 11065: warning : qemuDomainObjEnterMonitorInternal:993 : This thread seems to be the async job owner; entering monitor without asking for a nested job is dangerous
(the same warning repeats at 05:14:01.196, 05:14:01.198, 05:14:01.208 and 05:14:01.211)
I also tried the command "virsh migrate vm-01 --live --copy-storage-all
--persistent --undefinesource qemu+ssh://host02/system --verbose", but the
results are the same: failures are common but not consistent.
Even after a successful test, a second migration of the same VM is certain
to fail.
How can I solve this problem? Any help is appreciated.
Thanks.
11 years, 11 months
[libvirt-users] how can I get rid of the password for accessing the console in virt-manager?
by Lentes, Bernd
Hi,
I'm using virt-manager to manage several guests on a SLES 11 host. For one guest, I configured a password for entering the console via virt-manager (not via VNC). How can I get rid of it?
Thanks in advance.
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
We should not fear death, but rather a bad life.
[libvirt-users] add disk problems for domain
by zhijun liu
Hi all,
I use the Java API `domain.attachDeviceFlags(xml, 0);` to add a disk to a
domain. The XML looks like this:
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/dev/sp1353486595267/v1353488096262'/>
  <target dev='vdb' bus='virtio'/>
</disk>
Then I use the command `virsh detach-disk $domain vdb` to remove the disk.
Neither step reports any error, but the problem is that when I restart
the VM the disk still exists; I can see it in the output of `virsh
dumpxml $domain`. How can I remove the disk from the domain persistently?
Thanks in advance.
>zhijun
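A likely explanation (an assumption, since only fragments of the code are shown): libvirt keeps a live configuration and a persistent one, and a flags value of 0 only modifies the current state. A sketch of the relevant virDomainDeviceModifyFlags values, showing what to pass so that attach and detach both touch the persistent definition of a running domain:

```python
# virDomainDeviceModifyFlags values (from libvirt's public API)
VIR_DOMAIN_AFFECT_CURRENT = 0  # only the current state (live, if running)
VIR_DOMAIN_AFFECT_LIVE    = 1  # the running domain
VIR_DOMAIN_AFFECT_CONFIG  = 2  # the persistent definition

# For a change that survives a restart of a running domain, pass both,
# e.g. domain.attachDeviceFlags(xml, flags) in Java, and use
# `virsh detach-disk $domain vdb --persistent` on the virsh side.
flags = VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG
print(flags)  # prints 3
```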
[libvirt-users] Disk/volume hot-add question
by Harish Patil
Hello
Why does the disk hot-add operation use PCI hotplug?
Below is the hot-plugged device as shown by the `info pci` monitor command.
> Bus 192, device 10, function 0:
> SCSI controller: PCI device 1af4:1001
> IRQ 0.
> BAR0: I/O at 0xc400 [0xc43f].
> BAR1: 32 bit memory at 0xf0101000 [0xf0101fff].
> id "mydisk"
Could someone please provide all the steps involved, up to the point where
the disk appears in the guest kernel?
Thx
>harish
[libvirt-users] Fwd: unable to ping from guests in virbr0 to guests in virbr1 network
by Marwan Tanager
---------- Forwarded message ----------
From: mallapadi niranjan <niranjan.ashok(a)gmail.com>
Date: Mon, Nov 19, 2012 at 7:30 PM
Subject: Re: [libvirt-users] unable to ping from guests in virbr0 to guests
in virbr1 network
To: Marwan Tanager <marwan.tngr(a)gmail.com>
On Mon, Nov 19, 2012 at 7:51 PM, Marwan Tanager <marwan.tngr(a)gmail.com>wrote:
> On Mon, Nov 19, 2012 at 12:09:53PM +0530, mallapadi niranjan wrote:
> > Hi all,
> >
> > I have 3 guests (2 RHEL4 and 1 RHEL6) and have some issues with the
> > networking between them. The 2 RHEL4 systems use the default bridge virbr0
> > and get IPs in the range 192.168.122.0/24 (192.168.122.207, 192.168.122.167).
> >
> > I created another bridge (virbr1) with NAT forwarding (no DHCP). The
> > network I chose was 192.168.100.0/24, and the third system (RHEL6) was
> > assigned the static IP address 192.168.100.101.
> >
> > From the RHEL6 system, which uses virbr1, I am able to ping systems in
> > the 192.168.122.0/24 range, but guest systems in 192.168.122.0/24 are not
> > able to ping the RHEL6 system on the virbr1 network.
> >
> > From the RHEL4 guests I am able to ping the gateway IPs (192.168.122.1,
> > 192.168.100.1), but not the RHEL6 system.
> >
> >
> > Versions:
> > Fedora release 16 (Verne)
> > libvirt-0.9.6.3-1.fc16.x86_64
> > qemu-kvm-0.15.1-8.fc16.x86_64
> >
> > Any hints on what could be the problem?
>
> The problem is caused by the relative order of the iptables rules for
> those two
> networks.
>
> When libvirt created virbr1 for the network 192.168.100.0/24 it inserted a
> couple of iptables rules on the FORWARDING chain for this interface, but it
> added them before the rules of virbr0 on the same chain. Those rules
> basically
> are ordered as follows (at least on my system which I suspect is different
> from
> yours since I also had the same problem):
>
> 1. Forward packets destined for the interface that are part of an
> established
> connection.
> 2. Forward packets coming from the interface.
> 3. Forward packets coming from and destined to the same interface
> (loopback).
> 4. Reject forwarding anything else to the interface.
> 5. Reject forwarding anything else from the interface.
>
Okay, that seems to be the behaviour:
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
ACCEPT     all  --  0.0.0.0/0            192.168.100.0/24     state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.100.0/24     0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
ACCEPT     all  --  0.0.0.0/0            192.168.200.0/24     state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.200.0/24     0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
ACCEPT     all  --  0.0.0.0/0            192.168.122.0/24     state RELATED,ESTABLISHED
ACCEPT     all  --  192.168.122.0/24     0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-port-unreachable
REJECT     all  --  0.0.0.0/0            0.0.0.0/0            reject-with icmp-host-prohibited
I can ping from 192.168.100.0/24 -> 192.168.122.0/24 (but not vice versa).
>
> Since those rules are inserted for every interface libvirt adds in this
> order,
> you can only ping from one interface to another if the set of rules for the
> interface you are pinging from comes before that of the one you're pinging
> to.
> Otherwise, rule number 4 of the destination interface will get in the way
> since
> iptables matches the packets against the rules in a chain according to
> their
> order on it.
>
> A possible workaround:
>
> iptables-save >/path/to/iptables/dump/file
>
> then, edit the file to move all relevant rules that REJECT things on the
> FORWARDING chain to the end of the chain.
>
> then put this command in your rc.local script:
>
> cat /path/to/iptables/dump/file | iptables-restore
>
> This solution is not reliable though because you will need to update the
> file
> containing the rules every time you add a new virtual network or update the
> iptables rules for some other reason.
>
Right,
>
> Also, I hinted before at this problem on the list and someone pointed me
> that
> it's a bug, but nobody confirmed. Read my message on this topic with the
> subject: Inconsistent iptables forwarding rules for virtual networks, to
> get a
> clearer picture.
>
Okay, will look into that mail.
>
> Marwan
>
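The manual reordering described above (moving the FORWARD chain's REJECT rules after the ACCEPT rules) can be sketched as a small Python helper. This is a hypothetical script, assuming a filter-table-only `iptables-save` dump, not a tested tool:

```python
def move_forward_rejects_last(dump_text):
    """Return an iptables-save dump with the FORWARD chain's REJECT
    rules moved after all other FORWARD rules (filter table only)."""
    others, keep, reject = [], [], []
    for line in dump_text.splitlines():
        if line.startswith("-A FORWARD"):
            # Partition FORWARD rules: REJECT targets go last
            (reject if "-j REJECT" in line else keep).append(line)
        else:
            others.append(line)
    out = []
    for line in others:
        if line.strip() == "COMMIT":
            out.extend(keep + reject)  # re-emit the whole chain before COMMIT
        out.append(line)
    return "\n".join(out)

# A toy dump with the problematic interleaving (two virtual networks):
dump = """*filter
:FORWARD ACCEPT [0:0]
-A FORWARD -d 192.168.100.0/24 -o virbr1 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o virbr1 -j REJECT --reject-with icmp-port-unreachable
-A FORWARD -d 192.168.122.0/24 -o virbr0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o virbr0 -j REJECT --reject-with icmp-port-unreachable
COMMIT"""
print(move_forward_rejects_last(dump))
```

The reordered dump would then be loaded back with `iptables-restore`, as in the rc.local command above; the same caveat applies: it must be regenerated whenever libvirt rewrites the rules.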
[libvirt-users] unable to ping from guests in virbr0 to guests in virbr1 network
by mallapadi niranjan
Hi all,
I have 3 guests (2 RHEL4 and 1 RHEL6) and have some issues with the
networking between them. The 2 RHEL4 systems use the default bridge virbr0 and
get IPs in the range 192.168.122.0/24 (192.168.122.207, 192.168.122.167).
I created another bridge (virbr1) with NAT forwarding (no DHCP). The
network I chose was 192.168.100.0/24, and the third system (RHEL6) was
assigned the static IP address 192.168.100.101.
From the RHEL6 system, which uses virbr1, I am able to ping systems in
the 192.168.122.0/24 range, but guest systems in 192.168.122.0/24 are not
able to ping the RHEL6 system on the virbr1 network.
From the RHEL4 guests I am able to ping the gateway IPs (192.168.122.1,
192.168.100.1), but not the RHEL6 system.
Versions:
Fedora release 16 (Verne)
libvirt-0.9.6.3-1.fc16.x86_64
qemu-kvm-0.15.1-8.fc16.x86_64
Any hints on what could be the problem?
Regards
Niranjan
[libvirt-users] how to make the volume's format qcow2 when creating a volume
by zhijun liu
Hi all,
Below are the pool and volume definition files. The storage pool is
logical (LVM), backed by iSCSI, and I create a volume with its format
specified as "qcow2".
*pool.xml*
<pool type='logical'>
  <name>pool_190</name>
  <source>
    <device path='/dev/disk/by-path/ip-192.168.0.190:3260-iscsi-iqn.2012-11.com.cloudking:server.target1-lun-1'/>
  </source>
  <target>
    <path>/dev/pool_190</path>
  </target>
</pool>
*volume.xml*
<?xml version="1.0" encoding="UTF-8" ?>
<volume>
  <name>volume1</name>
  <capacity unit="GB">1</capacity>
  <target>
    <format type="qcow2"/>
  </target>
</volume>
*The problem is: the volume is created successfully, but its format is 'raw'*,
like this:
image: /dev/pool_190/volume1
*file format: raw*
virtual size: 1.0G (1073741824 bytes)
disk size: 0
How can I resolve this?
libvirt :0.9.8
os:ubuntu12.04
Many thanks for any reply.
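A note on the cause (hedged: based on how libvirt's logical pool backend works): volumes in a logical (LVM) pool are plain logical volumes, i.e. raw block devices, so the <format type="qcow2"/> element is ignored for that pool type. qcow2-formatted volumes need a pool type whose volumes are files, such as a directory pool. A sketch with assumed names and paths:

```xml
<!-- hypothetical directory-backed pool; name and path are assumptions -->
<pool type='dir'>
  <name>pool_dir</name>
  <target>
    <path>/var/lib/libvirt/images/pool_dir</path>
  </target>
</pool>
```

The same volume.xml above, created in a pool like this one, would come out as an actual qcow2 file.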