On Tue, Apr 26, 2011 at 03:09:57PM -0500, Amit Kulkarni wrote:
> > > > This diff implements a tradeoff to gain speed at the cost of reducing
> > > > the randomness of chunk allocation in malloc slightly.
> > > >
> > > > The idea is only to randomize the first half of chunks in a page. The
> > > > second half of chunks will fill in the gaps in-order. The
> > > > effectiveness of the current randomization decreases already with the
> > > > number of free slots diminishing in a page.
> > > >
> > > > In one test case, this diff reduced the runtime from 31s to 25s. I'm
> > > > not completely sure if the reduced randomness is acceptable. But if
> > >
> > > Perhaps a quarter? We want to prevent adjacent consecutive
> > > allocations, which is still very likely at the halfway point, but
> > > diminishes after that.
> >
> > Yes, that might be better, though some of the performance gain is
> > lost because you are scanning a lot of bits: i free bits plus all
> > the bits in between that are not free. If a chunk page is pretty
> > full, that's a lot of bits before you find the i'th free chunk.
> >
> > Originally I thought most of the time was lost getting the random
> > bits, but now it seems the scanning of the bits is to blame. Unless
> > I'm misinterpreting my data....
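[Editor's note: to make the scanning cost concrete, here is a minimal,
hypothetical sketch, not the actual malloc.c code. It shows a linear
scan for the i'th free chunk in a per-page bitmap, plus the proposed
tradeoff of only randomizing while more than half the chunks are free.
The names, the fixed 32-chunk page, and passing the random value r as a
parameter are all assumptions made for illustration; real code would
draw r from arc4random(3) and use the page's own chunk count.]

```c
#include <stdint.h>

/* Hypothetical: chunks in a page tracked by a 32-bit map, set bit = free. */
#define CHUNKS 32

/*
 * Return the index of the i'th (0-based) free chunk.  Note the cost:
 * every bit up to the answer is examined, free or not, which is the
 * linear scan described above.  On a nearly full page, a random i can
 * force a walk over most of the bitmap.
 */
static int
ith_free(uint32_t bits, int i)
{
	int idx;

	for (idx = 0; idx < CHUNKS; idx++) {
		if (bits & (1U << idx)) {
			if (i == 0)
				return idx;
			i--;
		}
	}
	return -1;		/* fewer than i + 1 free chunks */
}

/*
 * The proposed tradeoff: randomize only while more than half the
 * chunks are free; once the page is half full, just take the first
 * free chunk in order, keeping the scan short.
 */
static int
pick_chunk(uint32_t bits, int nfree, uint32_t r)
{
	if (nfree > CHUNKS / 2)
		return ith_free(bits, (int)(r % (uint32_t)nfree));
	return ith_free(bits, 0);
}
```

With a quarter threshold instead, the `CHUNKS / 2` test would become
`CHUNKS / 4`, shortening the worst-case scan further at the cost of
giving up randomization earlier.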
>
>
> Hi Otto,
>
> Now that OpenBSD defaults to using bigmem, it will suffer from a
> small page size on certain platforms like amd64 and sparc64.
How, why? Note that bigmem is/was only relevant on amd64.
>
> What do you guys think of dynamically adjusting the page size to the
> data size of FFS1? I.e. when I fire up disklabel it is 16KB by
> default on FFS1 on amd64, and higher on FFS2-only systems. Or you
> could base it on the RAM size detected on the system. I don't know
> what's best; you guys know.
What page size are you talking about? Page size in the sense of
allocation unit for mmap(2) is closely tied to the machine's hardware.
You just cannot change that.
The malloc page size could be made bigger, but that has drawbacks: not
as many out-of-bounds accesses will be caught.
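[Editor's note: to underline the distinction, the mmap(2) allocation
granularity is fixed by the MMU; a program can only query it, for
example with the standard sysconf(3) call shown in this small sketch.
malloc's internal page size is a separate, library-level choice layered
on top of that hardware unit.]

```c
#include <unistd.h>

/*
 * The hardware page size, the unit mmap(2) works in, cannot be chosen
 * by userland; it can only be queried.  sysconf(3) is the portable way.
 */
static long
hw_pagesize(void)
{
	return sysconf(_SC_PAGESIZE);	/* typically 4096 on amd64 */
}
```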
>
> If you then combine that with your original/tedu@ suggestion of
> consecutive allocations nearing the end of the page, it would be
> best. Then you have a bigger page size which packs in more
> allocations and also gives more room to randomize. Of course, there
> is the problem of guard pages wasting memory if the guard page size
> == normal page size. The guard page size could stay the current page
> size, which is 4096 on amd64.
>
> This would take care of almost all scenarios, including supporting old
> drives, current disks with massive cache, and upcoming world of SSDs.
> And the RAM sizes are exploding too, 8GB stick is becoming mainstream.
OpenBSD runs on more machines than your newest toys. malloc(3) is
designed to be a general purpose allocator usable from a vax to big
64-bit machines. That brings some drawbacks, yes, but that's the way
it is.
> IMHO, increasing the page size will lead to bigger sustainable gains.
I'm not convinced.
-Otto
>
> Thanks,
> amit