On Tue, Aug 15, 2017 at 16:07:16 -0400, John Ferlan wrote:
> On 08/15/2017 11:47 AM, Peter Krempa wrote:
>> On Tue, Aug 15, 2017 at 10:02:38 -0400, John Ferlan wrote:
>>> On 08/15/2017 08:01 AM, Peter Krempa wrote:
>>>> On Mon, Aug 14, 2017 at 20:46:10 -0400, John Ferlan wrote:
>>>>> On 08/03/2017 04:50 AM, Peter Krempa wrote:
[...]

> Fair enough. I certainly don't want to spend my days characterizing the
> performance of various algorithms to alter the size of a hash table that
> has a hash algorithm that is distributing the elements rather evenly. I
> still ask myself: is having 1 chain longer than 8 too few to cause a
> grow? And at what inflection point should a grow be done?

If you are going to try to optimize this, you certainly will need to
characterize the function and also determine the split between additions
and lookups and the probability of a grow, which is computationally
intensive.
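
Just to make the kind of trigger being discussed concrete, below is a
rough, self-contained sketch of a chained table that grows once the chain
it just extended goes past a threshold. It is not the virhash code; the
names, the GROW_THRESHOLD constant and the doubling policy are all made
up for illustration.

#include <stdlib.h>
#include <string.h>

#define GROW_THRESHOLD 8        /* grow once a chain gets longer than this */

struct entry {
    char *key;
    void *payload;
    struct entry *next;
};

struct table {
    struct entry **buckets;
    size_t nbuckets;
};

static size_t
hash_key(const char *key, size_t nbuckets)
{
    size_t h = 5381;            /* djb2-style string hash */

    while (*key)
        h = h * 33 + (unsigned char) *key++;
    return h % nbuckets;
}

/* Double the bucket array and rehash every entry (the expensive step). */
static int
table_grow(struct table *t)
{
    size_t newsize = t->nbuckets * 2;
    struct entry **newbuckets = calloc(newsize, sizeof(*newbuckets));
    size_t i;

    if (!newbuckets)
        return -1;

    for (i = 0; i < t->nbuckets; i++) {
        struct entry *e = t->buckets[i];

        while (e) {
            struct entry *next = e->next;
            size_t b = hash_key(e->key, newsize);

            e->next = newbuckets[b];
            newbuckets[b] = e;
            e = next;
        }
    }

    free(t->buckets);
    t->buckets = newbuckets;
    t->nbuckets = newsize;
    return 0;
}

/* Insert at the head of the bucket; grow only when that bucket's chain
 * has exceeded GROW_THRESHOLD. */
int
table_add(struct table *t, const char *key, void *payload)
{
    size_t b = hash_key(key, t->nbuckets);
    size_t chainlen = 0;
    struct entry *e;

    for (e = t->buckets[b]; e; e = e->next)
        chainlen++;

    if (!(e = malloc(sizeof(*e))) || !(e->key = strdup(key))) {
        free(e);
        return -1;
    }
    e->payload = payload;
    e->next = t->buckets[b];
    t->buckets[b] = e;

    return chainlen + 1 > GROW_THRESHOLD ? table_grow(t) : 0;
}

Whether the threshold is 8 or 16 only changes how late the rehash (the
expensive part) kicks in, which is exactly the inflection-point question
above.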

[...]

> That made me wonder if we shouldn't grow unless there was more than
> perhaps 16 elems in a bucket... Taking that approach grows the table
> beyond 4096 buckets only when at greater than 26K elements. Space vs.
> time... We're not even close to needing 10's of thousands yet, but it
> was to a degree an interesting exercise.

For most of our uses a linear list would be more than enough. The hash
table does have a convenient API, though, when you need to access
elements by text keys. I don't think libvirt will ever reach the point
where the hash table would be a problem for any operation.
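
(To make the "convenient API" point concrete: typical use of the internal
helpers looks roughly like the snippet below. This is written from memory
of src/util/virhash.h, so treat the exact signatures as approximate and
check the header; virhash is a libvirt-internal util API, not something
you can build against from outside the tree.)

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#include "virhash.h"    /* libvirt-internal util header */

/* Called for each payload when its entry (or the whole table) is freed. */
static void
payloadFree(void *payload, const void *name)
{
    (void) name;        /* unused */
    free(payload);
}

int
main(void)
{
    virHashTablePtr table = virHashCreate(32, payloadFree);
    char *payload = strdup("per-domain state");

    if (!table || !payload)
        return EXIT_FAILURE;

    /* With the default helpers the key is a plain string. */
    if (virHashAddEntry(table, "vm-foo", payload) < 0)
        return EXIT_FAILURE;

    printf("%s\n", (const char *) virHashLookup(table, "vm-foo"));

    virHashFree(table); /* frees the remaining payloads via payloadFree */
    return EXIT_SUCCESS;
}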