On Sat, 2006-08-12 at 18:51 +0200, Indan Zupancic wrote:
> On Sat, August 12, 2006 16:14, Peter Zijlstra said:
> > Hi,
> >
> > here the latest effort, it includes a whole new trivial allocator with a
> > horrid name and an almost full rewrite of the deadlock prevention core.
> > This version does not do anything per device and hence does not depend
> > on the new netdev_alloc_skb() API.
> >
> > The reason to add a second allocator to the receive side is twofold:
> > 1) it allows easy detection of the memory pressure / OOM situation;
> > 2) it allows the receive path to be unbounded and go at full speed when
> >    resources permit.
> >
> > The choice of using the global memalloc reserve as a mempool means that
> > the new allocator has to release pages as soon as possible; if we were
> > to hoard pages in the allocator, the memalloc reserve would not get
> > replenished readily.
> 
> Version 2 had about 250 new lines of code, while v3 has close to 600, when
> including the SROG code. And that while things should have become simpler.
> So why use SROG instead of the old alloc_pages() based code? And why couldn't
> you use a slightly modified SLOB instead of writing a new allocator?
> It looks like overkill to me.

Of the 611 new lines about 150 are new comments.

Simpler, yes, but also more complete; the old patches had serious issues
with the alternative allocation scheme.

As for why SROG: trying to stick all the semantics needed for all the skb
operations into the old approach was nasty. I had it almost complete, but
it was a horror (and more code than the SROG approach).
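
To make the idea concrete, here is a rough standalone userspace sketch of
that scheme. It is not the actual SROG code; the names (page_hdr,
srog_alloc, srog_free), the per-page layout and the fixed 4K page size are
invented for illustration. Objects are bump-allocated out of a page and
the whole page is handed back the moment its last object is freed, which
is what keeps the reserve from being hoarded:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define PAGE_SIZE 4096UL	/* assumed system page size for this toy */

struct page_hdr {
	size_t objects;		/* live objects on this page */
	size_t offset;		/* bump pointer for the next object */
};

/* Carve an object out of the current page, mapping a fresh one on demand. */
static void *srog_alloc(struct page_hdr **pagep, size_t size)
{
	struct page_hdr *page = *pagep;

	if (size > PAGE_SIZE - sizeof(struct page_hdr))
		return NULL;		/* too big for this toy allocator */

	if (!page || page->offset + size > PAGE_SIZE) {
		page = mmap(NULL, PAGE_SIZE, PROT_READ | PROT_WRITE,
			    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
		if (page == MAP_FAILED)
			return NULL;	/* out of memory */
		page->objects = 0;
		page->offset = sizeof(*page);
		*pagep = page;
	}
	page->objects++;
	void *obj = (char *)page + page->offset;
	page->offset += size;
	return obj;
}

/* Free an object; release the page as soon as it is empty. */
static void srog_free(void *obj)
{
	struct page_hdr *page = (void *)((uintptr_t)obj & ~(PAGE_SIZE - 1));

	if (--page->objects == 0)
		munmap(page, PAGE_SIZE);  /* returned immediately, not hoarded */
}

int main(void)
{
	struct page_hdr *page = NULL;
	char *a = srog_alloc(&page, 128);
	char *b = srog_alloc(&page, 256);

	strcpy(a, "rx buffer one");
	strcpy(b, "rx buffer two");
	printf("%s / %s\n", a, b);

	srog_free(a);
	srog_free(b);		/* last object gone: page is unmapped */
	return 0;
}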

Why not SLOB? Well, I mistakenly assumed it was just a simpler SLAB
allocator, my bad. Having now had a quick peek at it: while it seems
similar in intent, it does not provide the things I'm looking for. Making
it do so while keeping its old semantics and getting it to compile
alongside SLAB would take quite some effort.
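
As for point (1) quoted above, a minimal standalone sketch of how that
detection could look; again this is not the actual patch, and rx_alloc,
reserve_alloc and the static reserve array are invented for illustration.
The normal allocation failing is itself the pressure signal, and only
critical traffic is then served from the reserve:

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>

#define RESERVE_SIZE (64 * 1024)

static char reserve[RESERVE_SIZE];	/* stand-in for the memalloc reserve */
static size_t reserve_used;

/* Hand out bytes from the emergency pool; never hoards, never grows. */
static void *reserve_alloc(size_t size)
{
	if (reserve_used + size > RESERVE_SIZE)
		return NULL;
	void *p = reserve + reserve_used;
	reserve_used += size;
	return p;
}

/* Receive-side allocation: unbounded while memory lasts, throttled after. */
static void *rx_alloc(size_t size, bool critical, bool *pressure)
{
	void *buf = malloc(size);	/* the normal, full-speed path */

	if (buf) {
		*pressure = false;	/* resources permit */
		return buf;
	}
	*pressure = true;		/* allocation failure is the OOM signal */
	return critical ? reserve_alloc(size) : NULL;	/* shed the rest */
}

int main(void)
{
	bool pressure;
	void *buf = rx_alloc(1500, true, &pressure);

	printf("buf=%p pressure=%d\n", buf, (int)pressure);
	if (!pressure)
		free(buf);
	return 0;
}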

