On Tue, Feb 01, 2022 at 02:20:15PM +0100, Tim Wiederhake wrote:
> Signed-off-by: Tim Wiederhake <twiederh@redhat.com>
> ---
>  src/conf/virchrdev.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/src/conf/virchrdev.c b/src/conf/virchrdev.c
> index c9b2134e3b..8610f0ac5c 100644
> --- a/src/conf/virchrdev.c
> +++ b/src/conf/virchrdev.c
> @@ -291,10 +291,10 @@ void virChrdevFree(virChrdevs *devs)
>      if (!devs)
>          return;
> -    virMutexLock(&devs->lock);
> -    virHashForEachSafe(devs->hash, virChrdevFreeClearCallbacks, NULL);
> -    g_clear_pointer(&devs->hash, g_hash_table_unref);
> -    virMutexUnlock(&devs->lock);
> +    VIR_WITH_MUTEX_LOCK_GUARD(&devs->lock) {
> +        virHashForEachSafe(devs->hash, virChrdevFreeClearCallbacks, NULL);
> +        g_clear_pointer(&devs->hash, g_hash_table_unref);
> +    }
Your code change is perfectly correct given the starting state, so
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
but I seriously question the sanity of the original code.
If virMutexLock here passes without waiting, that's good, because
it means nothing else is using the virChrdevs pointer.
If virMutexLock hits contention and has to wait, though, that feels
very bad, because it implies there is a second thread that holds
a copy of the virChrdevs pointer and is still actively using it
concurrently with our call to virChrdevFree.
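To spell the worry out with a made-up example (none of this is the
actual libvirt code, just the shape of the problem, sketched with
plain GLib types):

  #include <glib.h>

  typedef struct {
      GMutex lock;
      GHashTable *hash;
  } Demo;

  /* Thread A: a destructor of the same general shape. */
  static void demoFree(Demo *d)
  {
      g_mutex_lock(&d->lock);
      g_clear_pointer(&d->hash, g_hash_table_unref);
      g_mutex_unlock(&d->lock);
      g_mutex_clear(&d->lock);
      g_free(d);
  }

  /* Thread B: still holds its own copy of the pointer. */
  static void demoUse(Demo *d)
  {
      g_mutex_lock(&d->lock);   /* may block while A is inside demoFree() */
      if (d->hash)
          (void) g_hash_table_size(d->hash);
      g_mutex_unlock(&d->lock);
      /* If A got there first, everything above touched freed memory;
       * holding the lock does nothing to extend the lifetime of 'd'. */
  }

The lock only serializes access to the contents; it cannot protect
the struct that contains the lock itself.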
This feels very risky as a design pattern. Normally, if we expect
multiple threads to access the same struct, we would want it
reference counted, with each thread holding its own reference, so
that the destructor never runs until the last reference is dropped
and we know no other thread is still using the object. At that point
taking the mutex in the destructor would be unnecessary.
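Something along these lines is what I have in mind (purely a sketch
using plain GLib, not the actual virObject / virChrdev API):

  #include <glib.h>

  typedef struct {
      gint refs;          /* touched only via atomic ops */
      GMutex lock;        /* protects the hash contents, not the lifetime */
      GHashTable *hash;
  } DemoChrdevs;

  static DemoChrdevs *demoChrdevsRef(DemoChrdevs *devs)
  {
      g_atomic_int_inc(&devs->refs);
      return devs;
  }

  static void demoChrdevsUnref(DemoChrdevs *devs)
  {
      if (!devs)
          return;
      if (g_atomic_int_dec_and_test(&devs->refs)) {
          /* Last reference gone, so no other thread can still be using
           * 'devs'; tearing down without taking the lock is safe. */
          g_clear_pointer(&devs->hash, g_hash_table_unref);
          g_mutex_clear(&devs->lock);
          g_free(devs);
      }
  }

Each thread that stashes the pointer takes its own reference and
simply unrefs when done; whoever drops the last reference runs the
teardown, and the destructor never has to guess whether someone else
is mid-access.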
Maybe we get lucky in our usage of virChrdev, but it feels like it
deserves future attention.
Regards,
Daniel
--
|:
https://berrange.com -o-
https://www.flickr.com/photos/dberrange :|
|:
https://libvirt.org -o-
https://fstop138.berrange.com :|
|:
https://entangle-photo.org -o-
https://www.instagram.com/dberrange :|