Thanks for your reply. I finally succeeded in my lobbying efforts to
increase the RAM on our custom board from 16 MB to 32 MB; this piece of
software didn't run at all on the 16 MB board.
With the download-uncompress feature integrated, our current software
leaves around 12-15 MB free after startup, I think (of which I need 2x4 MB).

Hm, you are saying something that gets me thinking. I assumed, perhaps
incorrectly, that the 'chunks' of memory in the power-of-two scheme were
pre-allocated at kernel start, i.e. n * 128k, m * 256k, y * 512k, etc., and
that the big 2 MB and 4 MB chunks were also pre-allocated at startup and
never used for small allocations.
I thought that this was the point of the SLAB allocator, i.e. avoiding
fragmentation with fixed-size, pre-allocated blocks.
But if you are saying that earlier-allocated blocks of memory can be
chopped up into smaller chunks later on during execution, I will follow
your advice and claim them at startup. Actually, I already do so in one
instance. With the 32 MB board revision, this should be fine.

So there is no 'guaranteed' way of making sure the system can reliably
deliver a 4 MB chunk of memory throughout execution, right?

Thanks,
Harry

On 11/1/07, Gavin Lambert <[EMAIL PROTECTED]> wrote:
>
> Quoth Harry Gunnarsson:
> > So here's the problem, if I run this download-uncompress routine
> > back-to-back, it always works fine the first time, but it could
> > bail on the second/third/fourth time due to allocation failure
> > on one of the big buffers I need. I never figured out why this
> > is and I haven't figured out how to circumvent this. Since I
> > have returned the buffers with free(), the system should be
> > able to hand them out again in the next malloc() call, right?
>
> How much "spare" RAM do you have?  Since the 5272 doesn't have an MMU, any
> large memory requests must be satisfied by a single contiguous block of
> memory.  If the block that you got last time has been broken up into
> smaller
> blocks (for smaller allocations) in the meantime then it will no longer be
> available for your big allocation (and due to memory fragmentation, may
> never be again).
>
> If you're trying to allocate a significant fraction of your total RAM in a
> large chunk then your best bet is to allocate it once at program startup
> and
> keep it around the whole time.  That way you're guaranteed that it'll
> always
> be available.  (Assuming, of course, that you know in advance how much
> room
> you'll need and that other processes won't start having RAM starvation
> from
> this.)
>
>
>
_______________________________________________
uClinux-dev mailing list
uClinux-dev@uclinux.org
http://mailman.uclinux.org/mailman/listinfo/uclinux-dev
This message was resent by uclinux-dev@uclinux.org
To unsubscribe see:
http://mailman.uclinux.org/mailman/options/uclinux-dev