Re: Question on rhashtable in worst-case scenario.

2016-04-02 Thread Johannes Berg
On Sat, 2016-04-02 at 09:46 +0800, Herbert Xu wrote:
> On Fri, Apr 01, 2016 at 11:34:10PM +0200, Johannes Berg wrote:
> > 
> > I was thinking about that one - it's not obvious to me from the code
> > how this "explicitly checking for dups" would be done or let's say how
> > rhashtable differentiates. But since it seems to work for Ben until
> > hitting a certain number of identical keys, surely that's just me not
> > understanding the code rather than anything else :)
> It's really simple: rhashtable_insert_fast does not check for dups,
> while the rhashtable_lookup_insert_* variants do.

Oh, ok, thanks :)

johannes


Re: Question on rhashtable in worst-case scenario.

2016-04-01 Thread Herbert Xu
On Fri, Apr 01, 2016 at 11:34:10PM +0200, Johannes Berg wrote:
>
> I was thinking about that one - it's not obvious to me from the code
> how this "explicitly checking for dups" would be done or let's say how
> rhashtable differentiates. But since it seems to work for Ben until
> hitting a certain number of identical keys, surely that's just me not
> understanding the code rather than anything else :)

It's really simple: rhashtable_insert_fast does not check for dups,
while the rhashtable_lookup_insert_* variants do.
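
For illustration, a minimal sketch of the difference (a hypothetical
object type and table, not mac80211 code; only the two insert calls
matter here):

	#include <linux/rhashtable.h>
	#include <linux/etherdevice.h>

	struct test_obj {
		u8 addr[ETH_ALEN];
		struct rhash_head node;
	};

	static const struct rhashtable_params test_params = {
		.head_offset = offsetof(struct test_obj, node),
		.key_offset  = offsetof(struct test_obj, addr),
		.key_len     = ETH_ALEN,
	};

	static int insert_allow_dup(struct rhashtable *ht, struct test_obj *obj)
	{
		/* Chains the entry even if an identical key is already present. */
		return rhashtable_insert_fast(ht, &obj->node, test_params);
	}

	static int insert_no_dup(struct rhashtable *ht, struct test_obj *obj)
	{
		/* Looks up first: returns -EEXIST if the key already exists. */
		return rhashtable_lookup_insert_fast(ht, &obj->node, test_params);
	}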

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Question on rhashtable in worst-case scenario.

2016-04-01 Thread Johannes Berg
On Fri, 2016-04-01 at 08:46 +0800, Herbert Xu wrote:
> On Thu, Mar 31, 2016 at 05:29:59PM +0200, Johannes Berg wrote:
> > 
> > Does removing this completely disable the "-EEXIST" error? I can't say
> > I fully understand the elasticity stuff in __rhashtable_insert_fast().
> What EEXIST error are you talking about? The only one that can be
> returned on insertion is if you're explicitly checking for dups
> which clearly can't be the case for you.

I was thinking about that one - it's not obvious to me from the code
how this "explicitly checking for dups" would be done or let's say how
rhashtable differentiates. But since it seems to work for Ben until
hitting a certain number of identical keys, surely that's just me not
understanding the code rather than anything else :)

johannes


Re: Question on rhashtable in worst-case scenario.

2016-04-01 Thread Ben Greear

On 03/31/2016 05:46 PM, Herbert Xu wrote:

> On Thu, Mar 31, 2016 at 05:29:59PM +0200, Johannes Berg wrote:
> > 
> > Does removing this completely disable the "-EEXIST" error? I can't say
> > I fully understand the elasticity stuff in __rhashtable_insert_fast().
> 
> What EEXIST error are you talking about? The only one that can be
> returned on insertion is if you're explicitly checking for dups
> which clearly can't be the case for you.
> 
> If you're talking about the EEXIST error due to a rehash then it is
> completely hidden from you by rhashtable_insert_rehash.
> 
> If you actually meant EBUSY then yes this should prevent it from
> occurring, unless your chain-length exceeds 2^32.


EEXIST was on removal, and was a symptom of the failure to insert, not
really a problem itself.

I reverted my revert (ie, back to rhashtable), added Johannes' patch
to check insertion (and added my own pr_err there).

I also added this:

diff --git a/net/mac80211/sta_info.c b/net/mac80211/sta_info.c
index 38ef0be..c25b945 100644
--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -66,6 +66,7 @@

 static const struct rhashtable_params sta_rht_params = {
 	.nelem_hint = 3, /* start small */
+	.insecure_elasticity = true, /* Disable chain-length checks. */
 	.automatic_shrinking = true,
 	.head_offset = offsetof(struct sta_info, hash_node),
 	.key_offset = offsetof(struct sta_info, addr),


Now, my test case seems to pass, though I did have one strange issue
before I put in the pr_err.  I'm not sure if it was a hashtable issue
or something else... but I have lots of something-else going on in this
system, so I'd say that likely the patch above fixes rhashtable for my
use case.

I will of course let you know if I run into more issues that appear
to be hashtable related!

Thanks,
Ben

--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



Re: Question on rhashtable in worst-case scenario.

2016-03-31 Thread Herbert Xu
On Thu, Mar 31, 2016 at 05:29:59PM +0200, Johannes Berg wrote:
> 
> Does removing this completely disable the "-EEXIST" error? I can't say
> I fully understand the elasticity stuff in __rhashtable_insert_fast().

What EEXIST error are you talking about? The only one that can be
returned on insertion is if you're explicitly checking for dups
which clearly can't be the case for you.

If you're talking about the EEXIST error due to a rehash then it is
completely hidden from you by rhashtable_insert_rehash.

If you actually meant EBUSY then yes this should prevent it from
occurring, unless your chain-length exceeds 2^32.
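
As a rough sketch, a caller distinguishing these outcomes might look
like this (hypothetical table and object; the -EBUSY and -EEXIST
semantics are as described above, -ENOMEM is assumed from the
table-allocation paths):

	int err = rhashtable_insert_fast(&ht, &obj->node, params);

	switch (err) {
	case 0:
		break;			/* inserted */
	case -EBUSY:
		/* chain-length/elasticity limit was hit */
		pr_err("rhashtable insert: table busy\n");
		break;
	case -ENOMEM:
		pr_err("rhashtable insert: allocation failure\n");
		break;
	default:
		pr_err("rhashtable insert failed: %d\n", err);
	}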

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Question on rhashtable in worst-case scenario.

2016-03-31 Thread Johannes Berg
On Thu, 2016-03-31 at 15:50 +0800, Herbert Xu wrote:
> On Thu, Mar 31, 2016 at 09:46:45AM +0200, Johannes Berg wrote:
> > 
> > In this case, I think perhaps you can just patch your local system with
> > the many interfaces connecting to the same AP to add the parameter
> > Herbert suggested (.insecure_elasticity = true in sta_rht_params). This
> > is, after all, very much a case that "normal" operation doesn't even
> > get close to.
> I think you should just turn it on everywhere for mac80211.  Chain
> length checks simply don't make sense when you allow duplicate
> keys in the hash table.

Yes, that's a good point, and we can - in certain corner cases - end up
with duplicate keys even in normal operation.

Does removing this completely disable the "-EEXIST" error? I can't say
I fully understand the elasticity stuff in __rhashtable_insert_fast().

johannes


Re: Question on rhashtable in worst-case scenario.

2016-03-31 Thread Johannes Berg
On Thu, 2016-03-31 at 08:13 -0700, Ben Greear wrote:
> 
> I see insertion failure, and then later, it of course fails to delete
> as well since it was never inserted to begin with.  There is no good
> way to deal with insertion error, so just need to fix the hashtable.

Oh, that's an oversight in mac80211 - it should be dealing with
insertion failures properly. This isn't really a problem either,
although it will lead to errors in your particular case.

https://p.sipsolutions.net/cf77c78d69a231d4.txt
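
(The patch itself is behind the link; as a rough sketch of the idea,
assuming the 4.5-era mac80211 code, the station-allocation path would
check the hash insertion instead of ignoring it - the unwind label
below is hypothetical:)

	/* sta_info_hash_add() is a thin wrapper in net/mac80211/sta_info.c: */
	static int sta_info_hash_add(struct ieee80211_local *local,
				     struct sta_info *sta)
	{
		return rhashtable_insert_fast(&local->sta_hash,
					      &sta->hash_node, sta_rht_params);
	}

	/* ...so the fix is for its caller to honour the return value: */
	err = sta_info_hash_add(local, sta);
	if (err)
		goto out_drop_sta;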

johannes


Re: Question on rhashtable in worst-case scenario.

2016-03-31 Thread Ben Greear



On 03/31/2016 12:46 AM, Johannes Berg wrote:

> On Wed, 2016-03-30 at 09:52 -0700, Ben Greear wrote:
> > 
> > If someone can fix rhashtable, then great.
> > I read some earlier comments [1] back when someone else reported
> > similar problems, and the comments seemed to indicate that rhashtable
> > was broken in this manner on purpose to protect against hashing
> > attacks.
> > 
> > If you are baking in this type of policy to what should be a basic
> > data-type, then it is not useful for how it is being used in
> > the mac80211 stack.
> > 
> > [1]  http://lkml.iu.edu/hypermail/linux/kernel/1512.2/01681.html
> 
> That's not really saying it's purposely broken, that's more saying that
> Herbert didn't see a point in fixing a case that has awful behaviour
> already.
> 
> However, I'm confused now - we can much more easily live with
> *insertion* failures, as the linked email indicates, than *deletion*
> failures, which I think you had indicated originally. Are you really
> seeing *deletion* failures?
> 
> If there are in fact *deletion* failures then I think we really need to
> address those in rhashtable, no matter the worst-case behaviour of the
> hashing or keys, since we should be able to delete entries in order to
> get back to something reasonable. Looking at the code though, I don't
> actually see that happening.
> 
> If you're seeing only *insertion* failures, which you indicated in the
> root of this thread, then I think for the general case in mac80211 we
> can live with that - we use a seeded jhash for the hash function, and
> since in the general case we cannot accept entries with identical MAC
> addresses to start with, it shouldn't be possible to run into this
> problem under normal use cases.

I see insertion failure, and then later, it of course fails to delete
as well since it was never inserted to begin with.  There is no good
way to deal with insertion error, so just need to fix the hashtable.

> In this case, I think perhaps you can just patch your local system with
> the many interfaces connecting to the same AP to add the parameter
> Herbert suggested (.insecure_elasticity = true in sta_rht_params). This
> is, after all, very much a case that "normal" operation doesn't even
> get close to.

Old code, even stock kernels, could deal with this properly, so I think
it should be fixed by default.  I'll put rhash back in my tree and try
that insecure option and see if it works.

Thanks,
Ben

> johannes



--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com


Re: Question on rhashtable in worst-case scenario.

2016-03-31 Thread Herbert Xu
On Thu, Mar 31, 2016 at 09:46:45AM +0200, Johannes Berg wrote:
>
> In this case, I think perhaps you can just patch your local system with
> the many interfaces connecting to the same AP to add the parameter
> Herbert suggested (.insecure_elasticity = true in sta_rht_params). This
> is, after all, very much a case that "normal" operation doesn't even
> get close to.

I think you should just turn it on everywhere for mac80211.  Chain
length checks simply don't make sense when you allow duplicate
keys in the hash table.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Question on rhashtable in worst-case scenario.

2016-03-31 Thread Johannes Berg
On Wed, 2016-03-30 at 09:52 -0700, Ben Greear wrote:

> If someone can fix rhashtable, then great.
> I read some earlier comments [1] back when someone else reported
> similar problems, and the comments seemed to indicate that rhashtable
> was broken in this manner on purpose to protect against hashing
> attacks.
> 
> If you are baking in this type of policy to what should be a basic
> data-type, then it is not useful for how it is being used in
> the mac80211 stack.
> 
> [1]  http://lkml.iu.edu/hypermail/linux/kernel/1512.2/01681.html
> 

That's not really saying it's purposely broken, that's more saying that
Herbert didn't see a point in fixing a case that has awful behaviour
already.

However, I'm confused now - we can much more easily live with
*insertion* failures, as the linked email indicates, than *deletion*
failures, which I think you had indicated originally. Are you really
seeing *deletion* failures?

If there are in fact *deletion* failures then I think we really need to
address those in rhashtable, no matter the worst-case behaviour of the
hashing or keys, since we should be able to delete entries in order to
get back to something reasonable. Looking at the code though, I don't
actually see that happening.

If you're seeing only *insertion* failures, which you indicated in the
root of this thread, then I think for the general case in mac80211 we
can live with that - we use a seeded jhash for the hash function, and
since in the general case we cannot accept entries with identical MAC
addresses to start with, it shouldn't be possible to run into this
problem under normal use cases.

In this case, I think perhaps you can just patch your local system with
the many interfaces connecting to the same AP to add the parameter
Herbert suggested (.insecure_elasticity = true in sta_rht_params). This
is, after all, very much a case that "normal" operation doesn't even
get close to.

johannes


Re: Question on rhashtable in worst-case scenario.

2016-03-30 Thread Ben Greear

On 03/30/2016 09:38 AM, David Miller wrote:

> From: Johannes Berg 
> Date: Wed, 30 Mar 2016 11:14:12 +0200
> 
> > On Tue, 2016-03-29 at 09:16 -0700, Ben Greear wrote:
> > > Looks like rhashtable has too much policy in it to properly deal with
> > > cases where there are too many hash collisions, so I am going to work
> > > on reverting its use in mac80211.
> > 
> > I'm not really all that happy with that approach - can't we fix the
> > rhashtable? It's a pretty rare corner case that many keys really are
> > identical and no kind of hash algorithm can help with that, but it
> > seems much better to still deal with it than to remove the rhashtable
> > usage and go back to hand-rolling something.
> 
> Yeah reverting seems like a really idiotic way to deal with the issue.



If someone can fix rhashtable, then great.
I read some earlier comments [1] back when someone else reported
similar problems, and the comments seemed to indicate that rhashtable was
broken in this manner on purpose to protect against hashing attacks.

If you are baking in this type of policy to what should be a basic
data-type, then it is not useful for how it is being used in
the mac80211 stack.

[1]  http://lkml.iu.edu/hypermail/linux/kernel/1512.2/01681.html

Thanks,
Ben

--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com



Re: Question on rhashtable in worst-case scenario.

2016-03-30 Thread David Miller
From: Johannes Berg 
Date: Wed, 30 Mar 2016 11:14:12 +0200

> On Tue, 2016-03-29 at 09:16 -0700, Ben Greear wrote:
>> Looks like rhashtable has too much policy in it to properly deal with
>> cases where there are too many hash collisions, so I am going to work
>> on reverting its use in mac80211.
> 
> I'm not really all that happy with that approach - can't we fix the
> rhashtable? It's a pretty rare corner case that many keys really are
> identical and no kind of hash algorithm can help with that, but it
> seems much better to still deal with it than to remove the rhashtable
> usage and go back to hand-rolling something.

Yeah reverting seems like a really idiotic way to deal with the issue.


Re: Question on rhashtable in worst-case scenario.

2016-03-30 Thread Herbert Xu
On Wed, Mar 30, 2016 at 04:03:08PM +0200, Johannes Berg wrote:
> 
> But we really don't want that either - in the normal case where you
> don't create all these virtual interfaces for testing, you have a
> certain number of peers that can vary a lot (zero to hundreds, in
> theory thousands) where you *don't* have the same key, so we still want
> to have the rehashing if the chains get longer in that case.

insecure_elasticity only disables rehashing without growing; it
does not inhibit table expansion, which is driven by the number of
objects in the whole table.

> It's really just the degenerate case that Ben is creating locally
> that's causing a problem, afaict, though it's a bit disconcerting that
> rhashtable in general can cause strange failures at delete time.

The failure is simple: rhashtable will rehash the table if a given
chain becomes too long.  This simply doesn't work if you hash many
objects with the same key, since the chain will never get shorter,
even after a rehash (or expansion).

Therefore if your hashtable has to do this then you have to disable
the rehash logic using the insecure_elasticity flag.

Alternatively you can construct your own linked list for objects
with the same key outside of rhashtable and hash the list instead.
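
A minimal sketch of that alternative (hypothetical types; locking and
the lookup-vs-insert race are omitted): one hashed head per key, with
duplicates chained on a private list that rhashtable never walks.

	struct sta_key_head {
		u8 addr[ETH_ALEN];	/* the shared key */
		struct hlist_head stas;	/* every object with this key */
		struct rhash_head node;	/* the single hashed entry */
	};

	struct sta_key_head *head;
	int err;

	head = rhashtable_lookup_fast(&ht, addr, params);
	if (!head) {
		head = kzalloc(sizeof(*head), GFP_KERNEL);
		if (!head)
			return -ENOMEM;
		memcpy(head->addr, addr, ETH_ALEN);
		INIT_HLIST_HEAD(&head->stas);
		err = rhashtable_insert_fast(&ht, &head->node, params);
		if (err) {
			kfree(head);
			return err;
		}
	}
	hlist_add_head(&sta->list, &head->stas);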

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Question on rhashtable in worst-case scenario.

2016-03-30 Thread Johannes Berg
On Wed, 2016-03-30 at 21:55 +0800, Herbert Xu wrote:

> Well to start with you should assess whether you really want to
> hash multiple objects with the same key.  In particular, can an
> adversary generate a large number of such objects?

No, the only reason this happens is local - if you take the single
hardware and connect it to the same AP many times. This is what Ben is
doing - he's creating virtual interfaces on top of the same physical
hardware, and then connecting all of these to the same AP, mostly for
testing the AP.

> If your conclusion is that yes you really want to do this, then
> we have the parameter insecure_elasticity that you can use to
> disable the rehashing based on chain length.

But we really don't want that either - in the normal case where you
don't create all these virtual interfaces for testing, you have a
certain number of peers that can vary a lot (zero to hundreds, in
theory thousands) where you *don't* have the same key, so we still want
to have the rehashing if the chains get longer in that case.

It's really just the degenerate case that Ben is creating locally
that's causing a problem, afaict, though it's a bit disconcerting that
rhashtable in general can cause strange failures at delete time.

johannes


Re: Question on rhashtable in worst-case scenario.

2016-03-30 Thread Herbert Xu
On Wed, Mar 30, 2016 at 11:14:12AM +0200, Johannes Berg wrote:
> On Tue, 2016-03-29 at 09:16 -0700, Ben Greear wrote:
> > Looks like rhashtable has too much policy in it to properly deal with
> > cases where there are too many hash collisions, so I am going to work
> > on reverting its use in mac80211.
> 
> I'm not really all that happy with that approach - can't we fix the
> rhashtable? It's a pretty rare corner case that many keys really are
> identical and no kind of hash algorithm can help with that, but it
> seems much better to still deal with it than to remove the rhashtable
> usage and go back to hand-rolling something.

Well to start with you should assess whether you really want to
hash multiple objects with the same key.  In particular, can an
adversary generate a large number of such objects?

If your conclusion is that yes you really want to do this, then
we have the parameter insecure_elasticity that you can use to
disable the rehashing based on chain length.

Cheers,
-- 
Email: Herbert Xu 
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt


Re: Question on rhashtable in worst-case scenario.

2016-03-30 Thread Johannes Berg
On Tue, 2016-03-29 at 09:16 -0700, Ben Greear wrote:
> Looks like rhashtable has too much policy in it to properly deal with
> cases where there are too many hash collisions, so I am going to work
> on reverting its use in mac80211.

I'm not really all that happy with that approach - can't we fix the
rhashtable? It's a pretty rare corner case that many keys really are
identical and no kind of hash algorithm can help with that, but it
seems much better to still deal with it than to remove the rhashtable
usage and go back to hand-rolling something.

johannes



Re: Question on rhashtable in worst-case scenario.

2016-03-29 Thread Ben Greear

Looks like rhashtable has too much policy in it to properly deal with
cases where there are too many hash collisions, so I am going to work on
reverting its use in mac80211.

Thanks,
Ben

On 03/28/2016 01:29 PM, Ben Greear wrote:
> Hello!
> 
> I have a use case for mac80211 where I create multiple stations to
> the same remote peer MAC address.
> 
> I'm seeing cases where the rhashtable logic is returning -16 (EBUSY)
> on insert (see sta_info_hash_add).
> This is with the 4.4.6+ (plus local patches) kernel, and it has the
> patch mentioned here:
> 
> https://lkml.org/lkml/2015/12/3/307
> 
> If I understand the code properly, my use case is going to be the
> worst-case scenario, where all of my items in the hash have the same
> key (peer mac addr).
> 
> I have my own secondary hash to handle most of my hot-path lookups,
> but I still need the main hash to at least function in a linear-search
> manner.
> 
> Any idea what I can do to get rid of the EBUSY return code problem, or
> how to debug it further?
> 
> Thanks,
> Ben
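
A minimal sketch of the degenerate case described above (hypothetical
object array; every insert uses the same key, so all entries land in
one chain and eventually trip the chain-length check):

	int i, err;

	for (i = 0; i < n; i++) {
		memcpy(objs[i].addr, peer_mac, ETH_ALEN);	/* identical key */
		err = rhashtable_insert_fast(&ht, &objs[i].node, params);
		if (err)	/* -EBUSY once the chain grows too long */
			pr_err("insert %d failed: %d\n", i, err);
	}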




--
Ben Greear 
Candela Technologies Inc  http://www.candelatech.com


