
On 2015-11-20 at 09:13, Peter Krempa wrote:
On Thu, Nov 19, 2015 at 21:13:56 +0100, Piotr Rybicki wrote:
On 2015-11-19 at 17:31, Michal Privoznik wrote:
...
==2650== 7,692,288 bytes in 2 blocks are still reachable in loss record 1,444 of 1,452
==2650==    at 0x4C2BFC8: calloc (vg_replace_malloc.c:711)
==2650==    by 0x1061335C: __gf_default_calloc (mem-pool.h:75)
==2650==    by 0x106137D2: __gf_calloc (mem-pool.c:104)
==2650==    by 0x1061419D: mem_pool_new_fn (mem-pool.c:316)
==2650==    by 0xFD69DDA: glusterfs_ctx_defaults_init (glfs.c:110)
==2650==    by 0xFD6AC31: glfs_new@@GFAPI_3.4.0 (glfs.c:558)
==2650==    by 0xF90321E: virStorageFileBackendGlusterInit (storage_backend_gluster.c:611)
==2650==    by 0xF8F43AF: virStorageFileInitAs (storage_driver.c:2736)
==2650==    by 0x115AE41A: qemuDomainStorageFileInit (qemu_domain.c:2929)
==2650==    by 0x1163DE5A: qemuDomainSnapshotCreateSingleDiskActive (qemu_driver.c:14201)
==2650==    by 0x1163E604: qemuDomainSnapshotCreateDiskActive (qemu_driver.c:14371)
==2650==    by 0x1163ED27: qemuDomainSnapshotCreateActiveExternal (qemu_driver.c:14559)
==2650==
==2650== 7,692,288 bytes in 2 blocks are still reachable in loss record 1,445 of 1,452
==2650==    at 0x4C2BFC8: calloc (vg_replace_malloc.c:711)
==2650==    by 0x1061335C: __gf_default_calloc (mem-pool.h:75)
==2650==    by 0x106137D2: __gf_calloc (mem-pool.c:104)
==2650==    by 0x1061419D: mem_pool_new_fn (mem-pool.c:316)
==2650==    by 0xFD69DDA: glusterfs_ctx_defaults_init (glfs.c:110)
==2650==    by 0xFD6AC31: glfs_new@@GFAPI_3.4.0 (glfs.c:558)
==2650==    by 0xF90321E: virStorageFileBackendGlusterInit (storage_backend_gluster.c:611)
==2650==    by 0xF8F43AF: virStorageFileInitAs (storage_driver.c:2736)
==2650==    by 0xF8F4B0A: virStorageFileGetMetadataRecurse (storage_driver.c:2996)
==2650==    by 0xF8F4F66: virStorageFileGetMetadata (storage_driver.c:3119)
==2650==    by 0x115AE629: qemuDomainDetermineDiskChain (qemu_domain.c:2980)
==2650==    by 0x1163E843: qemuDomainSnapshotCreateDiskActive (qemu_driver.c:14421)
==2650==
==2650== 7,692,288 bytes in 2 blocks are still reachable in loss record 1,446 of 1,452
==2650==    at 0x4C2BFC8: calloc (vg_replace_malloc.c:711)
==2650==    by 0x1061335C: __gf_default_calloc (mem-pool.h:75)
==2650==    by 0x106137D2: __gf_calloc (mem-pool.c:104)
==2650==    by 0x1061419D: mem_pool_new_fn (mem-pool.c:316)
==2650==    by 0xFD69DDA: glusterfs_ctx_defaults_init (glfs.c:110)
==2650==    by 0xFD6AC31: glfs_new@@GFAPI_3.4.0 (glfs.c:558)
==2650==    by 0xF90321E: virStorageFileBackendGlusterInit (storage_backend_gluster.c:611)
==2650==    by 0xF8F43AF: virStorageFileInitAs (storage_driver.c:2736)
==2650==    by 0xF8F4B0A: virStorageFileGetMetadataRecurse (storage_driver.c:2996)
==2650==    by 0xF8F4DC5: virStorageFileGetMetadataRecurse (storage_driver.c:3054)
==2650==    by 0xF8F4F66: virStorageFileGetMetadata (storage_driver.c:3119)
==2650==    by 0x115AE629: qemuDomainDetermineDiskChain (qemu_domain.c:2980)
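All three still-reachable records above bottom out in the same gluster memory-pool frames (mem_pool_new_fn reached via glfs_new). Until a fixed libgfapi is available, the noise can at least be silenced on the valgrind side with a suppression entry; a minimal sketch, in which the file name and suppression name are my own invention:

```shell
# Write a valgrind suppression matching the gluster mem-pool frames
# seen in the traces above; "..." is valgrind's frame wildcard.
cat > gfapi-mempool.supp <<'EOF'
{
   gfapi-mem-pool-still-reachable
   Memcheck:Leak
   match-leak-kinds: reachable
   fun:calloc
   fun:__gf_default_calloc
   fun:__gf_calloc
   fun:mem_pool_new_fn
   ...
}
EOF

# Pass it to the valgrind invocation (echoed only here, since
# running libvirtd needs root):
echo "valgrind --leak-check=full --show-reachable=yes \
--suppressions=$PWD/gfapi-mempool.supp /usr/sbin/libvirtd --listen"
```

Note this only hides the reports; the memory is still allocated, so it is a triage aid, not a fix.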
I've seen some of these already. The bug is actually not in libvirt but in gluster's libgfapi library, so any change in libvirt won't help.
This was tracked in gluster as:
https://bugzilla.redhat.com/show_bug.cgi?id=1093594
I suggest you update the gluster library to resolve this issue.
Hello.

Unfortunately, the memleak still exists (although it is smaller with glusterfs 3.7.6).

Latest versions:
libvirt 1.2.21
qemu 2.4.1
glusterfs 3.7.6

Steps to reproduce:
* virsh domblklist KVM
* qemu-img create -f qcow2 -o backing_file=gluster(...) - pre-create the backing file
* virsh snapshot-create KVM SNAP.xml (...) - create a snapshot from the pre-created XML snapshot file
* cp the main img file
* virsh blockcommit KVM disk (...)

valgrind --leak-check=full --show-reachable=yes --child-silent-after-fork=yes /usr/sbin/libvirtd --listen 2> valgrind.log

valgrind.log: http://www.filedropper.com/valgrind (sorry for the crappy pastebin server; wikisend seems to have a problem).

Best regards
Piotr Rybicki
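The reproduction steps above can be sketched as a script. The disk target, image paths and gluster URL are placeholder assumptions, and the flags elided as (...) in the report are left out; RUN=echo keeps it a dry run that only prints the commands:

```shell
#!/bin/sh
# Dry-run sketch of the reproduction steps. DISK, BASE and the
# gluster URL are assumed placeholders, not values from the report.
DOM=KVM
DISK=vda
BASE=/var/lib/libvirt/images/base.img
RUN=echo   # remove the "echo" to actually execute the commands

$RUN virsh domblklist "$DOM"
# Pre-create the backing file on gluster (volume/path assumed):
$RUN qemu-img create -f qcow2 \
    -o backing_file="gluster://host/volume/base.img" overlay.qcow2
# Create the snapshot from a pre-created XML description
# (further flags were elided in the report):
$RUN virsh snapshot-create "$DOM" SNAP.xml
# Copy the main image file, then commit the overlay back:
$RUN cp "$BASE" "$BASE.copy"
$RUN virsh blockcommit "$DOM" "$DISK"
```

Each snapshot/blockcommit round trip opens the gluster backend again, which is what makes the still-reachable allocations above accumulate across iterations.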