[Sorry to revive this, slowly catching up with a backlog of mail.] The number of entries in the slabs table is typically fewer than 48. A large size->slab array would probably chew up the L2 cache, and, well, it doesn't feel right.
I think it would be interesting to apply a function to the size input and determine a minimum index at which to start searching the array; that could save some cycles. Either that, or use a binary tree to speed up the search.

On Thu, Aug 23, 2007 at 02:17:00PM -0700, Tony Di Croce wrote:
> I think you could gain a little speed in your slab allocator by re-writing
> slabs_clsid() to use a lookup table.
>
> Basically, if you had an array with 1mb buckets, where each bucket was a
> pointer (so that the total size of the array would be 4mb), you could locate
> which slab to use simply by using the requested size as an index... It would
> save walking that array of up to POWER_LARGEST buckets.
>
> Since this function is called by do_slab_alloc(), it could potentially be a
> big win.
>
>    td

-- 
Paul Lindner ||||| | | | | | | | | | [EMAIL PROTECTED]
