On Tue, Mar 27 2018, Herbert Xu wrote:

> On Tue, Mar 27, 2018 at 10:33:04AM +1100, NeilBrown wrote:
>> The current rhashtable will fail an insertion if the hashtable
>> it "too full", one of:
>>  - table already has 2^31 elements (-E2BIG)
>>  - a max_size was specified and table already has that
>>    many elements (rounded up to power of 2) (-E2BIG)
>>  - a single chain has more than 16 elements (-EBUSY)
>>  - table has more elements than the current table size,
>>    and allocating a new table fails (-ENOMEM)
>>  - a new page needed to be allocated for a nested table,
>>    and the memory allocation failed (-ENOMEM).
>> 
>> A traditional hash table does not have a concept of "too full", and
>> insertion only fails if the key already exists.  Many users of hash
>> tables have separate means of limiting the total number of entries,
>> and are not susceptible to an attack which could cause unusually large
>> hash chains.  For those users, the need to check for errors when
>> inserting objects into an rhashtable is an unnecessary burden and hence
>> a potential source of bugs (as these failures are likely to be rare).
>
> Did you actually encounter an insertion failure? The current code
> should never fail an insertion until you actually run out of memory.
> That is unless you're using rhashtable when you should be using
> rhlist instead.

It is easy to get an -EBUSY insertion failure when .disable_count is
enabled, and I did get that.  Blindly propagating that up caused lustre
to get terribly confused - not too surprising really.

Even if I didn't see errors in practice, if the interface can return an
error, then I need to check for that error, and I really should test
that the handling of each error works correctly.  It is much easier to
write reliable code when errors cannot happen, so I'd rather have that
option.
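
Hypothetically, if a table could opt out of all the "too full" limits
so that insertion never failed, the wrapper above would collapse to a
void function and there would be nothing to check, propagate, or test:

/* Hypothetical variant: with failure impossible, callers need
 * no error handling at all. */
static void obj_insert(struct rhashtable *ht, struct obj *o)
{
	rhashtable_insert_fast(ht, &o->node, params);
}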

Thanks,
NeilBrown
