On Tue, Jan 22, 2008 at 06:42:07AM -0500, jamal wrote:
...
> Jarek,
> 
> That looks different from the suggestion from Dave.

Hmm..., I'm not sure whether you mean my suggestion or yours here, but
you are right anyway...

> May I throw in another bone? Theoretically I can see why it would be a
> really bad idea to walk 50K estimators every time you delete one - which
> is horrible if you are trying to destroy, say, 50K of them, and gets
> worse as the number of schedulers with 50K classes goes up.
> 
> But I am wondering why a simpler list couldn't be walked, meaning:
> 
> In gen_kill_estimator(), instead of:
> 
> for (idx=0; idx <= EST_MAX_INTERVAL; idx++) {
> 
> Would deriving a better initial index be a big improvement?
> for (e = elist[est->interval].list; e; e = e->next) {

Maybe I'm missing something, but there could still be a lot of this
walking, and IMHO any such long wait with BHs disabled is hard to accept
now that memory is cheap and low latency is at a premium. And time seems
to be even more precious here at the moment: RCU can't free any
gen_estimator memory while such a large qdisc with its classes is being
deleted.

Thanks,
Jarek P.