
On Fri, May 05, 2017 at 01:25:34PM +0200, Michal Privoznik wrote:
On 05/04/2017 11:29 PM, John Ferlan wrote:
On 04/20/2017 06:01 AM, Michal Privoznik wrote:
This API can be used to tell the other side of the stream to skip
s/can be/is (unless it can be used for something else ;-))
some bytes in the stream. This can be used to create a sparse file on the receiving side of a stream.
It takes just one argument @length, which says how big the hole is. Since our streams are not rewindable like regular files, we don't need a @whence argument like lseek(2) has.
lseek is an implementation detail... However, it could be stated that the skip moves forward from the current point in the stream by some number of bytes. It's expected to be used in conjunction with code that copies over the real (non-zero) data, and should be considered an optimization over sending zero-filled data segments.
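For illustration, here's a minimal sketch (mine, not from the patch) of how a client might combine the proposed virStreamSkip() with virStreamSend() to upload a sparse file: data extents are read and sent, holes are skipped instead of transmitting zeroes. It assumes the two-argument signature from this patch (a @flags argument may still be added per the review below) and a Linux lseek() that supports SEEK_DATA/SEEK_HOLE:

  #define _GNU_SOURCE
  #include <libvirt/libvirt.h>
  #include <unistd.h>

  static int
  sendSparseFile(virStreamPtr st, int fd, off_t end)
  {
      char buf[64 * 1024];
      off_t pos = 0;

      while (pos < end) {
          off_t data = lseek(fd, pos, SEEK_DATA);
          if (data < 0)
              data = end;           /* only a trailing hole remains */

          /* Tell the receiver to create a hole rather than sending zeroes. */
          if (data > pos &&
              virStreamSkip(st, (unsigned long long)(data - pos)) < 0)
              return -1;

          off_t hole = lseek(fd, data, SEEK_HOLE);
          if (hole < 0)
              hole = end;

          for (pos = data; pos < hole; ) {
              size_t want = (size_t)(hole - pos) < sizeof(buf) ?
                            (size_t)(hole - pos) : sizeof(buf);
              ssize_t got = pread(fd, buf, want, pos);
              if (got <= 0 || virStreamSend(st, buf, got) < 0)
                  return -1;
              pos += got;
          }
      }
      return 0;
  }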
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
 include/libvirt/libvirt-stream.h |  3 +++
 src/driver-stream.h              |  5 ++++
 src/libvirt-stream.c             | 57 ++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  1 +
 4 files changed, 66 insertions(+)
While it would be unused for now, should @flags be added? Who knows what use it could have, but it would avoid needing a new *Flags API later, though it does cause a few other wording changes here.
Ah sure. We should have @flags there. Good point.
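Something like this, presumably (just a sketch; the final prototype isn't spelled out in this thread):

  int virStreamSkip(virStreamPtr st,
                    unsigned long long length,
                    unsigned int flags);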
Perhaps it's just me - but "Skip" and "HoleSize" just seem awkward. Would "virStreamSetSkip" and "virStreamGetSkip" be more apropos? (or s/Skip/HoleSize/ - ewww). Names would then follow our more recent function naming guidelines. I think I dislike the HoleSize much more than the Skip.
SetSkip and GetSkip sound wrong to me instead :D
diff --git a/include/libvirt/libvirt-stream.h b/include/libvirt/libvirt-stream.h
index bee2516..4e0a599 100644
--- a/include/libvirt/libvirt-stream.h
+++ b/include/libvirt/libvirt-stream.h
@@ -50,6 +50,9 @@ int virStreamRecvFlags(virStreamPtr st,
                        size_t nbytes,
                        unsigned int flags);

+int virStreamSkip(virStreamPtr st,
+                  unsigned long long length);
Was there consideration for using 'off_t' instead of ULL? I know it's an implementation detail of virFDStreamData and lseek() usage, but it does hide things... IDC either way.
The problem with off_t is that it is a signed type, while ULL is unsigned. There's not much point in sending a negative offset, is there? Moreover, we already use ULL for arguments like offset elsewhere, though frankly I don't really know why. Perhaps some types don't exist everywhere?
If anything, we would use size_t, for consistency with the Send/Recv methods. Ultimately, though, we have a fixed-size data type on the wire (64-bit), so mapping to unsigned long long makes sense from that POV, as off_t can in theory be 32-bit on old platforms - similarly to how we aim to avoid 'long int' (except for historical mistakes).

Regards,
Daniel

--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|