On 03/12/2014 17:42, Thiago Macieira wrote:
On Wednesday 03 December 2014 10:36:26 Dominig ar Foll wrote:
The model that I have used in previous projects was to have the
resource manager try to allocate the amount of memory considered to be
the healthy minimum, hold it for a very short time, and then release it.
That model also allows checking not only that memory is available but
that a decent amount of contiguous memory can be allocated.
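Roughly like this, as a sketch only (the function name and the 128 MiB
figure are made up for the example, not taken from any real product code):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Sketch of the probe described above: allocate the configured healthy
   * minimum in one block, touch every page so the kernel really backs it,
   * then release it straight away. */
  static int memory_health_probe(size_t healthy_min)
  {
      unsigned char *p = malloc(healthy_min);  /* one virtually contiguous block */
      if (p == NULL)
          return -1;                           /* refused outright: unhealthy */
      memset(p, 0, healthy_min);               /* fault the pages in */
      free(p);                                 /* held only for a very short time */
      return 0;
  }

  int main(void)
  {
      if (memory_health_probe(128UL * 1024 * 1024) != 0)
          fprintf(stderr, "memory health probe failed\n");
      return 0;
  }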
Contiguous memory does not make sense in userspace. One contiguous block of
virtual memory can be backed by a series of discontiguous pages. And there are
very, very few legitimate uses of contiguous physical blocks of memory and
they're all related to hardware.
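For illustration only (not part of the original mail), a small program
reading /proc/self/pagemap shows exactly that: the block is one run of
virtual addresses, but the physical frame numbers behind it are scattered.
Since Linux 4.0 the PFN field reads as zero without CAP_SYS_ADMIN, so it
needs to run as root to show real frame numbers:

  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <unistd.h>

  int main(void)
  {
      long page = sysconf(_SC_PAGESIZE);
      size_t npages = 8;
      unsigned char *buf = malloc(npages * page);
      if (buf == NULL)
          return 1;
      memset(buf, 0xAA, npages * page);        /* fault the pages in first */

      int fd = open("/proc/self/pagemap", O_RDONLY);
      if (fd < 0)
          return 1;
      for (size_t i = 0; i < npages; i++) {
          uint64_t entry;
          off_t off = ((uintptr_t)(buf + i * page) / page) * sizeof(entry);
          if (pread(fd, &entry, sizeof(entry), off) != (ssize_t)sizeof(entry))
              break;
          /* bits 0-54 of a pagemap entry hold the physical frame number */
          printf("virtual page %zu -> pfn 0x%llx\n",
                 i, (unsigned long long)(entry & ((1ULL << 55) - 1)));
      }
      close(fd);
      free(buf);
      return 0;
  }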
Yes, and that is why the check was done that way. It also indicates the
level of RAM fragmentation in the system, which can be used as an early
hint that things are starting to go bad.
I don't recommend trying to allocate memory to check if you can allocate
memory, as you may cause the very problem you're trying to prevent (suppose
something else tries to allocate a healthy amount of memory at the same time).
Besides, you have to fault in all of those pages to make sure you actually got
them, which will imply CPU usage...
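To make that cost concrete, a rough sketch (the 256 MiB size is
arbitrary): under Linux's default overcommit policy malloc() itself
returns almost immediately, and the real work happens only when each
page is first written:

  #define _POSIX_C_SOURCE 200809L
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>
  #include <time.h>

  static double elapsed_ms(struct timespec a, struct timespec b)
  {
      return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
  }

  int main(void)
  {
      size_t len = 256UL * 1024 * 1024;        /* arbitrary probe size */
      struct timespec t0, t1, t2;

      clock_gettime(CLOCK_MONOTONIC, &t0);
      unsigned char *p = malloc(len);          /* usually "succeeds" instantly */
      clock_gettime(CLOCK_MONOTONIC, &t1);
      if (p == NULL)
          return 1;
      memset(p, 0, len);                       /* faulting every page is the real cost */
      clock_gettime(CLOCK_MONOTONIC, &t2);

      printf("malloc: %.3f ms, fault-in: %.3f ms\n",
             elapsed_ms(t0, t1), elapsed_ms(t1, t2));
      free(p);
      return 0;
  }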
All the telco products that I worked on before joining Intel used that
model. Checking has a cost.
The trick is to ask for enough memory in one go to be able to take a
decision from the kernel's refusal, without forcing the kernel to take
action by itself.
Not that nice, but it works pretty well.
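One way to read that trick, sketched under the assumption that the box
runs with strict overcommit accounting (vm.overcommit_memory=2): request
the whole healthy-minimum amount with a single mmap() and take the
decision from the return value alone, releasing it immediately on
success. With the default heuristic overcommit the kernel may accept the
request anyway, so this depends on how the system is tuned:

  #include <errno.h>
  #include <stddef.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/mman.h>

  /* Ask for the whole healthy minimum in a single request and decide from
   * the kernel's answer alone; nothing is ever written to the block. */
  static int probe_one_shot(size_t healthy_min)
  {
      void *p = mmap(NULL, healthy_min, PROT_READ | PROT_WRITE,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
      if (p == MAP_FAILED)
          return -1;                           /* kernel refused: take the decision here */
      munmap(p, healthy_min);                  /* release immediately */
      return 0;
  }

  int main(void)
  {
      if (probe_one_shot(512UL * 1024 * 1024) != 0)
          fprintf(stderr, "one-shot probe refused: %s\n", strerror(errno));
      return 0;
  }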
Dominig
_______________________________________________
Dev mailing list
[email protected]
https://lists.tizen.org/listinfo/dev