Re: Question about managed/unmanaged persistent reservation disks

On 6/10/25 19:15, Simon Coter wrote:
Adding users DL to possibly reach a wider audience.
Simon

Dropping devel list as this is users list material.
On Jun 9, 2025, at 7:28 PM, Annie Li <annie.li@oracle.com> wrote:
Hello,
I've been looking at the source code related to persistent reservation and got a little confused about managed persistent reservation disks. For a disk configured with 'managed=yes' as follows,

    <reservations managed='yes'>
      <source type='unix' path='/var/lib/libvirt/qemu/domain-7-brml10g19-iscsi-rese/pr-helper0.sock' mode='client'/>
    </reservations>
libvirt is responsible for starting a pr-helper program with a specific associated socket file. The following source code shows that there is only one pr-helper and socket file associated with the managed disks of one VM.

    const char *
    qemuDomainGetManagedPRAlias(void)
    {
        return "pr-helper0";
    }

    char *
    qemuDomainGetManagedPRSocketPath(qemuDomainObjPrivate *priv)
    {
        return g_strdup_printf("%s/%s.sock", priv->libDir,
                               qemuDomainGetManagedPRAlias());
    }
So if the VM is booted with multiple disks configured with 'managed=yes' for reservation, I suppose these multiple disks share this managed pr-helper and socket file. However, per the qemu document, https://www.qemu.org/docs/master/interop/pr-helper.html:

"It is invalid to send multiple commands concurrently on the same socket. It is however possible to connect multiple sockets to the helper and send multiple commands to the helper for one or more file descriptors."

This certainly did not use to be the case. IIRC this was discussed in this very old thread:
https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/UUL3B7ZLAW4WPVUBX2R76GZTOS24Z2SD/

Maybe QEMU has some internal lock that does the right thing and serializes requests?

Michal
Due to the limitation above, only one persistent reservation disk is allowed as managed in theory. However, libvirt doesn't throw any error or warning when the VM is booted with multiple managed persistent reservation disks. I am wondering if I've missed something here?

For unmanaged persistent reservation disks, libvirt doesn't start the pr-helper program for them. It is the user's responsibility to start this program with a customized socket file per disk, but the complexity increases with the number of persistent reservation disks, especially in the case of hotplug/hotunplug. Is there any plan to support multiple managed persistent reservation disks with separate pr-helper/socket files?
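To make the unmanaged case concrete, here is a minimal sketch, with a hypothetical socket path: the admin starts the helper by hand with '-k', and the disk's XML points at that socket with managed='no'.

    # started outside of libvirt; one helper/socket per disk in this scheme
    qemu-pr-helper -k /run/pr-helper-disk1.sock

    <reservations managed='no'>
      <source type='unix' path='/run/pr-helper-disk1.sock' mode='client'/>
    </reservations>

Repeating this per disk, and tearing it down again on hotunplug, is exactly the bookkeeping burden described above.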
Any suggestions/clarifications are greatly appreciated.
Thanks
Annie

Hello Michal,

On 6/12/2025 9:18 AM, Michal Prívozník wrote:
> This certainly did not use to be the case. IIRC this was discussed in this very old thread:
> https://lists.libvirt.org/archives/list/devel@lists.libvirt.org/thread/UUL3B7ZLAW4WPVUBX2R76GZTOS24Z2SD/

Thanks for the info. This thread talks about the socket connection/access, but doesn't touch on the topic of multiple sockets. My understanding of the qemu document is that it's OK to run one helper per QEMU or even per host, but multiple disks shouldn't share the same socket, since it is possible that multiple commands may be sent concurrently.

> Maybe QEMU has some internal lock that does the right thing and serializes requests?

I'll dig into the qemu-pr-helper source code. Any thoughts are welcome :)

Thanks
Annie

On 6/12/2025 11:01 AM, Annie Li wrote:
> I'll dig into the qemu-pr-helper source code. Any thoughts are welcome :)

In libvirt, the socket parameter is configured with the '-k' option in qemuProcessStartManagedPRDaemon,

    if (!(cmd = virCommandNewArgList(prHelperPath, "-k", socketPath, NULL)))

and qemu-pr-helper creates the socket as follows:

    saddr = (SocketAddress){
        .type = SOCKET_ADDRESS_TYPE_UNIX,
        .u.q_unix.path = socket_path,
    };
    server_ioc = qio_channel_socket_new();

The 'socket_path' is a global pointer and points to the socketPath parameter configured with '-k' (see the libvirt code above). Later, qemu-pr-helper reads requests out of the socket channel. However, I don't see the helper do anything specific to handle PR commands sent concurrently by multiple disks. If multiple disks share the same socket, there is certainly an issue like the one described in the qemu document. I'm wondering if I've missed something here?

Thanks
Annie

On Mon, Jun 16, 2025 at 21:00 Annie Li <annie.li@oracle.com> wrote:
> My understanding of the qemu document is that it's OK to run one helper per QEMU or even per host, but multiple disks shouldn't share the same socket since it is possible that multiple commands may be sent concurrently.
>> Maybe QEMU has some internal lock that does the right thing and serializes requests?

Multiple disks can share the socket; the serialization of requests is handled with a mutex in scsi/pr-manager-helper.c.
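As a minimal sketch of that pattern (an illustration only, not the actual QEMU code; the real implementation in scsi/pr-manager-helper.c uses QEMU's own locking and coroutine primitives): a single shared connection plus a mutex held across the whole request/response exchange, so commands from different disks never interleave on the socket.

    #include <pthread.h>
    #include <unistd.h>

    typedef struct PRHelperConn {
        int fd;                 /* one connected UNIX socket to the helper */
        pthread_mutex_t lock;   /* serializes request/response pairs */
    } PRHelperConn;

    /* Illustrative sketch, not QEMU code; error and partial-I/O
     * handling elided for brevity. */
    static ssize_t pr_helper_run(PRHelperConn *conn,
                                 const void *req, size_t req_len,
                                 void *resp, size_t resp_len)
    {
        ssize_t ret;

        pthread_mutex_lock(&conn->lock);
        ret = write(conn->fd, req, req_len);       /* send one PR command */
        if (ret == (ssize_t)req_len)
            ret = read(conn->fd, resp, resp_len);  /* read its response */
        pthread_mutex_unlock(&conn->lock);
        return ret;
    }

Because the lock covers both the write and the read, a second disk's command cannot hit the socket until the first command's response has been consumed, which is what keeps a shared socket within the documented rule.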

Hello Paolo,

On 6/16/2025 5:11 PM, Paolo Bonzini wrote:
> Multiple disks can share the socket; the serialization of requests is handled with a mutex in scsi/pr-manager-helper.c.

Thanks a lot for the clarification. I was only focusing on the qemu-pr-helper source code and haven't checked pr-manager-helper yet; I will definitely take a look.

It looks like the following document is misleading:
https://www.qemu.org/docs/master/interop/pr-helper.html
Since there is a mutex handling the requests from multiple disks over one socket, I suppose the statement "It is invalid to send multiple commands concurrently on the same socket." can be removed?

Thanks
Annie

On Tue, Jun 17, 2025 at 18:02 Annie Li <annie.li@oracle.com> wrote:
> I was only focusing on the qemu-pr-helper source code and haven't checked pr-manager-helper yet; I will definitely take a look.

Hi Annie, that's the part of QEMU that talks to the helper process.

> It looks like the following document is misleading:
> https://www.qemu.org/docs/master/interop/pr-helper.html
> Since there is a mutex handling the requests from multiple disks over one socket, I suppose the statement "It is invalid to send multiple commands concurrently on the same socket." can be removed?

No, I don't think it should be removed: the mutex is exactly what makes QEMU obey that statement. Remember that the documentation is written for everyone who needs to implement a qemu-pr-helper replacement (say, one that does persistent reservations using a shared database), or a client that may not be QEMU (OK, in practice it will be).

Paolo

On 6/12/25 17:01, Annie Li wrote:
> My understanding of the qemu document is that it's OK to run one helper per QEMU or even per host, but multiple disks shouldn't share the same socket since it is possible that multiple commands may be sent concurrently.

Are you actually seeing any problems? Or are you just researching the topic?

What is happening is: libvirt starts one pr-helper per guest, and all disks within the guest share the same pr-helper process. QEMU needs just one connection for that, and per Paolo's reply later in this thread it has an internal mutex that serializes multiple accesses onto the socket.
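Schematically, that shows up on the QEMU command line as a single pr-manager-helper object that every reserving disk references by id; the blockdev node names and device paths below are illustrative only, with the socket path taken from the XML earlier in the thread:

    -object pr-manager-helper,id=pr-helper0,path=/var/lib/libvirt/qemu/domain-7-brml10g19-iscsi-rese/pr-helper0.sock \
    -blockdev driver=host_device,node-name=prdisk0,filename=/dev/sdb,pr-manager=pr-helper0 \
    -blockdev driver=host_device,node-name=prdisk1,filename=/dev/sdc,pr-manager=pr-helper0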
> I'll dig into the qemu-pr-helper source code. Any thoughts are welcome :)

Again, are you experiencing any bug? If so, please do file an issue so it can be properly investigated!

https://libvirt.org/bugs.html

Michal

Hello Michal,

On 6/17/2025 3:44 AM, Michal Prívozník wrote:
> Are you actually seeing any problems?

Nope, we are just researching before deploying MSFC on top of libvirt. The libvirt source code shows there is only one pr-helper (with one socket) running for all the managed disks. However, this seems to conflict with the qemu documentation (https://www.qemu.org/docs/master/interop/pr-helper.html), which is why I brought this topic up here for clarification.

> Or are you just researching the topic?

Researching.

> What is happening is: libvirt starts one pr-helper per guest, and all disks within the guest share the same pr-helper process. QEMU needs just one connection for that, and per Paolo's reply later in this thread it has an internal mutex that serializes multiple accesses onto the socket.

So far I haven't seen the internal mutex in qemu-pr-helper itself (maybe I've missed something in the socket layer beyond the helper?). Given what the qemu-pr-helper document says, I suppose it is better to get clarification on this.

> Again, are you experiencing any bug?

Nope, just confused about the inconsistency between the qemu documentation and the libvirt implementation.

Thanks
Annie
participants (3)
- Annie Li
- Michal Prívozník
- Paolo Bonzini