On 08/03/2017 04:50 AM, Peter Krempa wrote:
On Thu, Aug 03, 2017 at 10:28:59 +0200, Bjoern Walk wrote:
> Peter Krempa <pkrempa(a)redhat.com> [2017-08-03, 09:47AM +0200]:
>> On Thu, Aug 03, 2017 at 07:24:35 +0200, Bjoern Walk wrote:
>>> Peter Krempa <pkrempa(a)redhat.com> [2017-08-02, 05:39PM +0200]:
>>>> It turns out that our implementation of the hashing function is
>>>> endian-dependent and thus if used on various architectures the testsuite
>>>> may have different results. Work this around by mocking virHashCodeGen
>>>> to something which does not use bit operations instead of just setting a
>>>> deterministic seed.
>>>
>>> This does fix the issue on S390. Anyway, I'd also like to see the tests
>>> fixed that rely on the ordering of the hash table. And code that uses
>>
>> The tests are fixed. They are ordered correctly for the newly mocked
>> function. I don't quite get what more you'd like to see fixed.
>>
>
> No, the test is still dependent on the ordering. Just now the mocked
> hash table has deterministic ordering. This is not the same. Although it
> is improbable that this will bite us somewhere, I'd still prefer a clean
> solution.
I don't think it's worth doing this in the tests. The hash table, at
least in the case of the node name detection code, is just an intermediate
step to fill the data into the domain definition, and it's a convenient
place to test the detection code.
Testing it in other places would remove the dependency on the deterministic
ordering of the hash table but would not really add much value. The cost
would be much more complexity in the test case, which I don't think is
worth it.
If you feel bothered by it, please post patches. I think the testsuite
currently tests the complex portion of the code without the bloat
necessary for the so-called "clean" solution.
NB: I didn't dig into the qemumonitorjsontest algorithm, but...
While going through the common object changes, I ended up looking at and
thinking about the hash table algorithms. Initially it was just the
"grow" algorithm, which I think is a bit "aggressive" (perhaps wasteful):
as soon as one bucket chain exceeds 8 elements, the table size is
multiplied by 8, unless the new size would exceed 8 * 2048 - at which
point we no longer resize the table no matter how full a bucket is.
The next thing I considered was using a prime number as the table bucket
size, and it seems that doing just that will cause qemumonitorjsontest to
fail. It's easy enough to reproduce by changing the virHashCreate call in
qemuBlockNodeNameGetBackingChain to use 53 instead of 50 (even 49 causes
a failure). It doesn't matter whether the test source also adjusts the
initial hash size.
Of course this makes sense given that virHashCodeGen is the backend of
the virHashComputeKey algorithm, which uses the return value %
table->size. So if "size" changes, then obviously the ordering changes.
I get the code bloat conundrum, but if the order doesn't matter, then I
wonder what other variables could change that would affect this test,
and whether we really want to be constantly altering the mocks "just"
to get the right answer for this test.
John