On 04/30/2017 02:43 AM, Michal Privoznik wrote:
On 04/29/2017 06:46 PM, John Ferlan wrote:
>
>
> On 04/20/2017 06:01 AM, Michal Privoznik wrote:
>> This function takes an FD and determines whether the current
>> position is in a data section or in a hole. In addition, it
>> also determines how many bytes remain until the current
>> section ends.
>>
>> Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
>> ---
>> src/libvirt_private.syms | 1 +
>> src/util/virfile.c | 81 +++++++++++++++++++
>> src/util/virfile.h | 3 +
>> tests/virfiletest.c | 203 +++++++++++++++++++++++++++++++++++++++++++++++
>> 4 files changed, 288 insertions(+)
>>
>
> FWIW: This feels like an "inflection point" around which the remainder
> of the algorithms are built. I've forgotten everything from the
> discussion for the original set of patches, but since I had a few cycles
> to think... Apologies in advance for the length.
>
> I guess I was under the impression that the purpose of sparse streams
> is to find holes in a file|stream and ensure those don't get copied,
> instead generating the hole on the target. If so, a bunch of different
> small APIs come to mind after reading this patch and trying to
> internalize what's taking place:
>
> vir{File|Stream}SeekHoleSupported
> -> If SEEK_{DATA|HOLE} returns -1 and EINVAL, then either don't
> use sparse streams and force usage of the non-sparse option, or just fail.
We shouldn't fail. All the tools I've seen that do support sparse files
accept fully allocated files gracefully, e.g. cp --sparse=always or
rsync -S. I mean, even if this function you're suggesting here reported
no support, the whole stream transmission should just carry on.
My "default" thought was, if sparse wasn't supported, then fallback to
full copy; however, someone else may say no I don't want that. If I had
a 100G file I knew was sparse, I'm not sure I would want that fallback
to full copy!
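Something like this rough probe is what I had in mind for the "supported"
check (untested sketch, modulo includes; the name and the caller-side
handling are hypothetical):

/* Hypothetical helper: returns true if @fd's filesystem understands
 * SEEK_DATA/SEEK_HOLE, false otherwise; restores the original
 * position before returning. */
static bool
virFileSeekHoleSupported(int fd)
{
    off_t cur;

    if ((cur = lseek(fd, 0, SEEK_CUR)) == (off_t) -1)
        return false;

    /* EINVAL means the filesystem doesn't know SEEK_DATA at all;
     * ENXIO (trailing hole / EOF) still counts as supported. */
    if (lseek(fd, cur, SEEK_DATA) == (off_t) -1 && errno == EINVAL)
        return false;

    ignore_value(lseek(fd, cur, SEEK_SET));
    return true;
}

The fall-back-vs-fail decision would then stay with the caller.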
>
> vir{File|Stream}HasHole
> -> From beginning of file, find first HOLE < EOF
> -> If none, then sparse streams won't matter, right?
No, it will not. But it doesn't matter whether the first hole is at EOF
or before it, if we know how to transfer holes effectively.
>
> vir{File|Stream}IsInHole
> -> From @cur if SEEK_DATA > SEEK_HOLE
>
> vir{File|Stream}IsInData
> -> From @cur if SEEK_HOLE > SEEK_DATA
>
> vir{File|Stream}GetDataEnd
> -> From @cur of DATA return @length
>
> vir{File|Stream}GetHoleEnd
> -> From @cur of HOLE return @length
>
> We should know where we are and be able to keep track, right? Having to
> determine @cur each time just seems to be overhead.
Firstly, this is merely what I'm implementing later in the series (see
patch 33/38 [and I just realized that due to a mistake patch 34/38 has
the wrong commit message]).
Yes, I need to get further in the series, but I'm also easily distracted
by external forces.
Secondly, getting the current position in a file should be as quick as
this (imagine me snapping my fingers :-)). The VFS has to keep track of
the current position in a file for the sake of all file operations. /me
goes and looks into kernel sources... Yep, it looks like so:
generic_file_llseek() from fs/read_write.c. But the main point is ...
> For the purpose of
> what's being done - I would think we start at 0 and know we have a
> findable @end - then it would be "just" (hah!) a matter of ensuring we
> have holes and then a sequence of finding hole/data from whence we start
> (since theoretically we could start in a hole, right?).
>
> Maybe the following is too simplistic...
>
> if (supportsSparse)
> if ((cur = lseek(fd, 0, SEEK_SET)) != 0)
> goto error;
>
> if (!hasHoles)
> return fullCopy? (although I think the rest would still work)
>
> if ((end = lseek(fd, 0, SEEK_END)) < 0)
> goto error;
>
> inHole = virFile|StreamIsInHole(fd, cur)
> while (cur < end)
> if (inHole)
> get size of hole
> if ret < 0, then size would be end-cur, right?
> update remote to write hole
> update cur to include size of hole
> else /* assume? in data */
> get size of data
> if ret < 0, then size would be end-cur, right?
> update remote to include data
> update cur to include size of data
... that here, in this branch, we should send the data from the data
section to the client, right? But how can we do that if we lseek()-ed
away (to the end of the section)? I mean, virFileStreamIsInHole() would
need to seek back to the original current position in the file anyway,
just like virFileInData() does. Otherwise, if a file is in a data
section we would not read it.
OK, fair enough. To some degree the concern was over too many possible
failure paths/reasons in one function, which may not give some consumer
the flexibility to "handle" them...
Still, the premise in this code is that the 'fd' keeps track of where it
is/was. Hopefully nothing can cause that to change... The first open
would start at 0 (theoretically), then the reads and "hole mgmt" would
update the position for that fd. The model being: I know where I am, you
take what I give you.
Still does this play well if you ever wanted to support some sort of
check-pointing feature for a transfer that got interrupted? Then you'd
need to pass a current spot and support the option of - I'll tell you
where to go/start and you then send from that spot. A 100G sparse file
xfr could benefit from that. Of course that leaves data integrity
questions...
>
> I guess I just became concerned about the various decision/exit points
> in virFileInData below and started wondering if they were really needed.
>
>
>> diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
>> index 181e178..7132b3a 100644
>> --- a/src/libvirt_private.syms
>> +++ b/src/libvirt_private.syms
>> @@ -1620,6 +1620,7 @@ virFileGetHugepageSize;
>> virFileGetMountReverseSubtree;
>> virFileGetMountSubtree;
>> virFileHasSuffix;
>> +virFileInData;
>> virFileIsAbsPath;
>> virFileIsDir;
>> virFileIsExecutable;
>> diff --git a/src/util/virfile.c b/src/util/virfile.c
>> index cbfa384..093125c 100644
>> --- a/src/util/virfile.c
>> +++ b/src/util/virfile.c
>> @@ -3793,6 +3793,87 @@ virFileComparePaths(const char *p1, const char *p2)
>> cleanup:
>> VIR_FREE(res1);
>> VIR_FREE(res2);
>> +
>> + return ret;
>> +}
>> +
>> +
>
> s/b
> int
> virFileInData(...)
>
> Lacking any sort of description of inputs/returns. Lots of variables
> here too - whether fd is valid/invalid could cause an error... Setting
> *inData based on where we are... Setting *length based on not returning
> a -1... and of course returning -1 based on some lseek call failing for
> various reasons.
Okay, I'll add some description.
>
>> +int virFileInData(int fd,
>> + int *inData,
>
> Usage seems to be only boolean...
Yeah, it can be a bool.
>
>> + unsigned long long *length)
>> +{
>> + int ret = -1;
>> + off_t cur, data, hole, end;
>> +
>> + /* Get current position */
>> + cur = lseek(fd, 0, SEEK_CUR);
>> + if (cur == (off_t) -1) {
>> + virReportSystemError(errno, "%s",
>> + _("Unable to get current position in
file"));
>> + goto cleanup;
>> + }
>
> One would hope this doesn't fail; still, could @cur be an input? I guess
> I wonder why we're looking for the current position if the purpose is to
> copy from start to end ensuring that holes aren't copied, but rather
> just created on the target. I am assuming this function is for the
> "source" side.
It's for both. So the whole point of sparse streams is to preserve
sparseness as volume images are copied from daemon to client (virsh
vol-download) or from client to daemon (virsh vol-upload). Currently
what we have is something along the following lines:
while (true) {
    got = read(file, buf, sizeof(buf));
    if (got < 0 || got == 0)
        break;
    if (virStreamSend(stream, buf, got) < 0)
        break;
}
(Simplified very much, look at virStreamSendAll + virStreamRecvAll).
As you can see, we don't care about the holes. We don't know about holes
at all. So the point of this whole feature is to still copy the data AND
preserve sparseness at the same time:
while (true) {
    virFileInData(fd, &inData, &offset);
    if (inData) {
        while (offset) {
            got = read(fd, buf, sizeof(buf));
            virStreamSend(stream, buf, got);
            offset -= got;
        }
    } else {
        virStreamSendHole(stream, offset);
        lseek(fd, offset, SEEK_CUR);
    }
}
Here we can clearly see that virFileInData() must leave the position in
@fd unchanged upon its return.
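And the receiving side would then do roughly the opposite (again very
much simplified; nextIsHole() is just a placeholder for however the
stream signals a hole, not a real API):

while (true) {
    if (nextIsHole(stream, &offset)) {
        /* Recreate the hole: skip over it and extend the file so
         * that a trailing hole still produces the right size. */
        end = lseek(fd, offset, SEEK_CUR);
        ftruncate(fd, end);
    } else {
        got = virStreamRecv(stream, buf, sizeof(buf));
        if (got <= 0)
            break;
        safewrite(fd, buf, got);
    }
}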
I know I missed the lseek in virFileInData on my initial pass through...
Regardless of up/down-load it'd still be a copy from 0 to EOF. I was
trying to avoid the terms client/server and think source/target. Getting
@cur each time and needing to check status each time just seemed
unnecessary. Given this logic, getting a @cur that differs from what
you expect would be far more tragic I would think!
Still, in both cases 'offset' could be used in an expected-spot
calculation; even though (as you put it) it's only a snap of the fingers
time-wise, it's still a snap and a trip to get data that could be saved.
Having a failure possibility "just because" we decided not to save @cur
is what I was remarking on. So as written things work. I guess I just
want to be sure that we won't be down in this code again when some other
adjustment is requested.
One such adjustment that comes to mind is check-pointing, which I would
think would utilize an input location...
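Purely hypothetical, but I could see that adjustment ending up as a
variant where the caller supplies the starting point instead of relying
on SEEK_CUR, e.g.:

int virFileInDataAt(int fd,
                    off_t from,                  /* caller-tracked position */
                    int *inData,
                    unsigned long long *length);

Not something for this series, just thinking out loud.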
>
>> +
>> + /* Now try to get data and hole offsets */
>> + data = lseek(fd, cur, SEEK_DATA);
>> +
>> + /* There are four options:
>> + * 1) data == cur; @cur is in data
>> + * 2) data > cur; @cur is in a hole, next data at @data
>> + * 3) data < 0, errno = ENXIO; either @cur is in trailing hole, or @cur is beyond EOF.
>> + * 4) data < 0, errno != ENXIO; we learned nothing
>
> if data < 0 and errno = EINVAL, seeking data/holes isn't supported,
> true? If so, then it might be worth a separate function to check
> whether data/hole seeking is supported.
>
>> + */
>> +
>> + if (data == (off_t) -1) {
>> + /* cases 3 and 4 */
>> + if (errno != ENXIO) {
>> + virReportSystemError(errno, "%s",
>> + _("Unable to seek to data"));
>> + goto cleanup;
>> + }
>> +
>> + *inData = 0;
>> + /* There are two situations now. There is always an
>> + * implicit hole at EOF. However, there might be a
>> + * trailing hole just before EOF too. If that's the case
>> + * report it. */
>> + if ((end = lseek(fd, 0, SEEK_END)) == (off_t) -1) {
>> + virReportSystemError(errno, "%s",
>> + _("Unable to seek to EOF"));
>> + goto cleanup;
>
> So for the purpose of what you're trying to do, should we care? I
> suppose if you're in a hole and cannot find the end that might be odd.
> Still, if we know up front what @cur and @end are, then does some crazy
> trailing hole/end/eof need to cause an error?
Because of file size. Imagine a file with the following structure:
|- data 1MB -||- hole 1MB -||- EOF -|
The file is clearly 2MB in size, right? Now, my function would report 2
holes: the first one of 1MB size and the second, implicit one at EOF. If
we were to merge those two holes into one, the file would shrink in size
and basically become a different file. The problem is we don't transfer
any kind of metadata throughout the copying process. The other side
doesn't know how big the file we're copying is, what its data/hole
structure is, nothing.
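To spell it out for that 2MB example, successive calls would report:

/* cur = 0: SEEK_DATA returns 0, SEEK_HOLE returns 1M */
virFileInData(fd, &inData, &len);    /* inData = 1, len = 1M */

/* after reading 1M of data, cur = 1M: SEEK_DATA fails with ENXIO */
virFileInData(fd, &inData, &len);    /* inData = 0, len = end - cur = 1M */

so the trailing hole is reported with its real length and the other side
can recreate a file that is 2MB long again.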
Similar to the starting point or @cur feedback... Can EOF "change"
whilst you're copying? If you know at the start you have a 2MB file to
xfr, and you xfr 1MB of data but then don't find another data section,
then you know there's a 1MB hole at the end. Having an error here causes
me to wonder why and what if...
Perhaps the wording of the comment has thrown me off a bit. AIUI, the
only way to get this far is if ENXIO has been returned. So what would
cause a return of -1 on a SEEK_END at this point in time? Is it:
3a: start ... data ... hole ... cur ... eof
3b: start ... data ... hole ... eof ... cur
How would @cur > @end? If we keep track of @cur and @end when we start,
then one would hope that only by our actions do they change. Trying to
account for a case where something else changes @cur whilst we're
reading/copying data would imply that we cannot control what we copy.
>
>> + }
>> + *length = end - cur;
>> + } else if (data > cur) {
>> + /* case 2 */
>> + *inData = 0;
>> + *length = data - cur;
>> + } else {
>> + /* case 1 */
>> + *inData = 1;
>> +
>> + /* We don't know where the next hole starts. Let's
>> + * find out. Here we get the same 4 possibilities as
>> + * described above. */
>> + hole = lseek(fd, data, SEEK_HOLE);
>> + if (hole == (off_t) -1 || hole == data) {
>
> Now I'm wondering about the implicit hole @ EOF described above and the
> error checks from that which aren't done here.
What do you mean? What error checks? This code can be simpler because
when entering this branch we know that we are in a data section. And we
just want to know its length. So we call SEEK_HOLE.
Well, cases 3&4 had an error scenario that was called out - checks
against ENXIO which aren't done here. My question was more about the
subsequent comment [1] and the checks done above related to the errno
value and seeking to the end.
>
> Still, does this matter in the "case" you want to use this for? That is,
> if there's no HOLE from @data to @end, then for the purpose of stream
> copying you'd just be copying everything from data -> end (e.g.
> @length = (end - data))? IOW, does it really matter?
Well, that's the point. The phrasing of the question is very important.
How can we know there's no hole from @cur to EOF if we don't go through
this branch? And after we've gone through it (and after 33/38) we will
no longer call this function at all.
>
>> + /* cases 1, 3 and 4 */
^^^^ [1]
>> + /* Wait a second. The reason why we are here is
>> + * because we are in data. But at the same time we
>> + * are in a trailing hole? Wut!? Do the best we
>> + * can do here. */
>> + virReportSystemError(errno, "%s",
>> + _("unable to seek to hole"));
>
> Would the "best" we can do is just copy everything from this point to
> the @end regardless of whether that's copying a hole? Again it's just
> another error that perhaps we could avoid.
Not sure. If the kernel is giving me spurious data, how can I believe
any subsequent read()? Probably the other condition hole == data is
never going to happen, but I thought better safe than sorry.
So if we don't find a hole while we're in data, then something is
seriously wrong, I see - gotcha. Especially since the definition of EOF
is an implicit hole. Would "hole == data" even matter/occur?
For cases 2, 3, 4 => @*length is the size of the hole?
For case 1 => @*length is the size of the data
>
>> + goto cleanup;
>> + } else {
>> + /* case 2 */
>> + *length = (hole - data);
>> + }
>> + }
>> +
>> + ret = 0;
>> + cleanup:
>> + /* At any rate, reposition back to where we started. */
>> + if (cur != (off_t) -1)
>> + ignore_value(lseek(fd, cur, SEEK_SET));
oohhh, something else I conveniently wasn't thinking about at first
read. Too distracted!
>> return ret;
>> }
>>
>> diff --git a/src/util/virfile.h b/src/util/virfile.h
>> index 41c25a2..32a1d3c 100644
>> --- a/src/util/virfile.h
>> +++ b/src/util/virfile.h
>> @@ -340,4 +340,7 @@ int virFileReadValueInt(const char *path, int *value);
>> int virFileReadValueUint(const char *path, unsigned int *value);
>> int virFileReadValueBitmap(const char *path, int maxlen, virBitmapPtr *value);
>>
>> +int virFileInData(int fd,
>> + int *inData,
>> + unsigned long long *length);
>
> You could also go with Pavel's preferred format to follow the .c file
> format.
>
>> #endif /* __VIR_FILE_H */
>> diff --git a/tests/virfiletest.c b/tests/virfiletest.c
>> index 702a76a..3e298dc 100644
>> --- a/tests/virfiletest.c
>> +++ b/tests/virfiletest.c
>> @@ -21,6 +21,7 @@
>> #include <config.h>
>>
>> #include <stdlib.h>
>> +#include <fcntl.h>
>>
>> #include "testutils.h"
>> #include "virfile.h"
>> @@ -119,6 +120,190 @@ testFileSanitizePath(const void *opaque)
>>
>>
>> static int
>> +makeSparseFile(const off_t offsets[],
>> + const bool startData);
>> +
>> +#ifdef __linux__
>> +/* Create a sparse file. @offsets in KiB. */
>> +static int
>> +makeSparseFile(const off_t offsets[],
>> + const bool startData)
>> +{
>> + int fd = -1;
>> + char path[] = abs_builddir "/fileInData.XXXXXX";
>> + off_t len = 0;
>> + size_t i;
>> +
>> + if ((fd = mkostemp(path, O_CLOEXEC|O_RDWR)) < 0)
>> + goto error;
>> +
>> + if (unlink(path) < 0)
>> + goto error;
>> +
>> + for (i = 0; offsets[i] != (off_t) -1; i++)
>> + len += offsets[i] * 1024;
>> +
>> + while (len) {
>> + const char buf[] = "abcdefghijklmnopqrstuvwxyz";
>> + off_t toWrite = sizeof(buf);
>> +
>> + if (toWrite > len)
>> + toWrite = len;
>> +
>> + if (safewrite(fd, buf, toWrite) < 0) {
>> + fprintf(stderr, "unable to write to %s (errno=%d)\n",
path, errno);
>> + goto error;
>> + }
>> +
>> + len -= toWrite;
>> + }
>> +
>> + len = 0;
>> + for (i = 0; offsets[i] != (off_t) -1; i++) {
>> + bool inData = startData;
>> +
>> + if (i % 2)
>> + inData = !inData;
>> +
>> + if (!inData &&
>> + fallocate(fd,
>> + FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
>> + len, offsets[i] * 1024) < 0) {
>> + fprintf(stderr, "unable to punch a hole at offset %lld length
%lld\n",
>> + (long long) len, (long long) offsets[i]);
>> + goto error;
>> + }
>> +
>> + len += offsets[i] * 1024;
>> + }
>> +
>> + if (lseek(fd, 0, SEEK_SET) == (off_t) -1) {
>> + fprintf(stderr, "unable to lseek (errno=%d)\n", errno);
>> + goto error;
>> + }
>> +
>> + return fd;
>> + error:
>> + VIR_FORCE_CLOSE(fd);
>> + return -1;
>> +}
>> +
>
> I'm too lazy to think... Are we making sure of the following cases:
>
> start-data-end
> start-hole-data-end
> start-data-hole-end
> start-hole-data-hole-data-end
> start-data-hole-data-hole-end
> start-hole-data-hole-data-hole-end
> start-data-hole-data-hole-data-end
>
> Trying to think if the following would be valid:
>
> start-hole-end
>
> but my brain keeps telling me to stop thinking.
Yep, that one is valid too. I can add it to the set of tests below.
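Roughly like this (sketch only, reusing EXTENT and the struct from the
patch below):

off_t offsetsHoleOnly[] = {EXTENT, (off_t) -1};

struct testFileInData holeOnly = {
    .startData = false,          /* the file starts (and ends) with a hole */
    .offsets = offsetsHoleOnly,  /* single 4K hole, then EOF */
};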
>
>> +#else /* !__linux__ */
>> +
>> +static int
>> +makeSparseFile(const off_t offsets[] ATTRIBUTE_UNUSED,
>> + const bool startData ATTRIBUTE_UNUSED)
>> +{
>> + return -1;
>> +}
>> +
>> +#endif /* !__linux__ */
>> +
>> +
>> +#define EXTENT 4
>> +static bool
>> +holesSupported(void)
>> +{
>> + off_t offsets[] = {EXTENT, EXTENT, EXTENT, -1};
>> + off_t tmp;
>> + int fd;
>> + bool ret = false;
>> +
>> + if ((fd = makeSparseFile(offsets, true)) < 0)
>> + goto cleanup;
>> +
>> + /* The way this works is: there are 4K of data followed by a 4K hole
>> + * followed by 4K of data again. Check if the filesystem we are running
>> + * the test suite on supports holes. */
>> + if ((tmp = lseek(fd, 0, SEEK_DATA)) == (off_t) -1)
>> + goto cleanup;
>> +
>> + if (tmp != 0)
>> + goto cleanup;
>> +
>> + if ((tmp = lseek(fd, tmp, SEEK_HOLE)) == (off_t) -1)
>> + goto cleanup;
>> +
>> + if (tmp != EXTENT * 1024)
>> + goto cleanup;
>> +
>> + if ((tmp = lseek(fd, tmp, SEEK_DATA)) == (off_t) -1)
>> + goto cleanup;
>> +
>> + if (tmp != 2 * EXTENT * 1024)
>> + goto cleanup;
>> +
>> + if ((tmp = lseek(fd, tmp, SEEK_HOLE)) == (off_t) -1)
>> + goto cleanup;
>> +
>> + if (tmp != 3 * EXTENT * 1024)
>> + goto cleanup;
>> +
>> + ret = true;
>> + cleanup:
>> + VIR_FORCE_CLOSE(fd);
>> + return ret;
>> +}
>> +
>> +
>> +struct testFileInData {
>> + bool startData; /* whether the list of offsets starts with data section */
>> + off_t *offsets;
>> +};
>> +
>> +
>> +static int
>> +testFileInData(const void *opaque)
>> +{
>> + const struct testFileInData *data = opaque;
>> + int fd = -1;
>> + int ret = -1;
>> + size_t i;
>> +
>> + if ((fd = makeSparseFile(data->offsets, data->startData)) < 0)
>> + goto cleanup;
>> +
>> + for (i = 0; data->offsets[i] != (off_t) -1; i++) {
>> + bool shouldInData = data->startData;
>> + int realInData;
>> + unsigned long long shouldLen;
>> + unsigned long long realLen;
>> +
>> + if (i % 2)
>> + shouldInData = !shouldInData;
>> +
>> + if (virFileInData(fd, &realInData, &realLen) < 0)
>> + goto cleanup;
>> +
>> + if (realInData != shouldInData) {
>> + fprintf(stderr, "Unexpected data/hole. Expected %s got
%s\n",
>> + shouldInData ? "data" : "hole",
>> + realInData ? "data" : "hole");
>> + goto cleanup;
>> + }
>> +
>> + shouldLen = data->offsets[i] * 1024;
>> + if (realLen != shouldLen) {
>> + fprintf(stderr, "Unexpected section length. Expected %lld got
%lld\n",
>
> lld on an ull?
Ha! my compiler doesn't complain. Fixed.
Mine didn't either... nor did the other tool that shall remain nameless
>
>
> John
>
> I'll be OOO on Mon/Tue - just to "set" expectations of any chance at me
> reading a response!
>
Fair enough. This is not a trivial feature. It touches parts of libvirt
that perhaps we all are scared to touch. So I expect some misconceptions
will need to be cleared up.
Michal
Still, it's also easy enough to "not" use it by not requesting a sparse
stream, so while it does touch some common areas we can mitigate the
risk to existing algorithms.
I'm all for this - I just don't want to see us have an "oh snap" moment.
I don't want to derail the process, but I also want to be sure we don't
have to revisit it because of something forgotten. Maybe something here
triggers an "oh snap" moment.
Perhaps some more things to think about, or what has been jiggling
around the back of my head the last couple of days.
... The primary consumer I assume would be 10G+ xfrs of sparse files.
After all, 1G xfrs are fairly "snappy" with today's technology...
... Is there a desire to eventually know how far along in the copy you
might be? For a 100G file I might like to know whether it's still
copying or somehow stuck. For a 100G file that has a break in
communication it sure would be nice to "restart" from the last 100M
rather than restart from point 0.
... Is there a desire to know stats such as # of data segments, # of
sparse segments? Or even more details regarding each segment? Not that
we necessarily want to become that kind of provider, but we have the
data and could pass it back-n-forth. I just copied 10M of data, I just
skipped 100M of sparse space (details); or out of 10G there was 1G of
data and 9G of sparse space.
Would it be better to have an API that has an extendable data structure
that allows for future flexibility? Or do we just create a new API to
handle that, one that is a superset of those being added via this series?
John