Thank you for your reply. See below for comments.
Erik de Castro Lopo wrote:
Carlo Sogono wrote:
Does RHEL or Linux in general limit the amount of memory being used by a
single process?
All Linux systems limit the amount of memory to be less than
the total virtual memory of the system :-).
I understand that, but my question was about a *single* process:
is there a per-process limit, or does each process just share
whatever memory is left?
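For the record, per-process limits on Linux are set with ulimit/setrlimit:
RLIMIT_AS caps a process's total address space and RLIMIT_DATA its data
segment, and both commonly default to unlimited, in which case each process
simply competes for whatever virtual memory is left. A minimal sketch for
checking the current limits from C, using only the POSIX getrlimit call:

    #include <stdio.h>
    #include <sys/resource.h>

    static void show(const char *name, int resource)
    {
        struct rlimit rl;
        /* -1 (RLIM_INFINITY) means no limit is set */
        if (getrlimit(resource, &rl) == 0)
            printf("%-12s soft=%ld hard=%ld\n", name,
                   (long) rl.rlim_cur, (long) rl.rlim_max);
    }

    int main(void)
    {
        show("RLIMIT_AS",   RLIMIT_AS);    /* total address space */
        show("RLIMIT_DATA", RLIMIT_DATA);  /* data segment (heap) */
        return 0;
    }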
For maybe the first 5 million mallocs it can do about 100,000
mallocs per second; after more than 1 GB worth, however, it
slows down to just a few thousand per second.
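For reference, a microbenchmark in the spirit of that test is just a loop
that mallocs 160-byte chunks forever and prints the allocation rate every
million calls. A minimal sketch, using wall-clock time via gettimeofday
since a swap-bound slowdown would not show up in CPU time:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/time.h>

    #define CHUNK    160        /* chunk size from the test above */
    #define INTERVAL 1000000    /* report every million mallocs   */

    static double now(void)
    {
        struct timeval tv;
        gettimeofday(&tv, NULL);
        return tv.tv_sec + tv.tv_usec / 1e6;
    }

    int main(void)
    {
        double last = now();
        for (long i = 1; ; i++) {
            /* never freed: simulates keeping everything in memory */
            if (malloc(CHUNK) == NULL) {
                fprintf(stderr, "malloc failed after %ld calls\n", i);
                return 1;
            }
            if (i % INTERVAL == 0) {
                double t = now();
                printf("%ld allocs: %.0f mallocs/sec\n",
                       i, INTERVAL / (t - last));
                last = t;
            }
        }
    }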
How much virtual and real memory do you actually have?
We have 9GB of physical memory. At the moment, my application has to be
able to comfortably handle 4GB of memory in just *one* process.
Are these mallocs being freed or do you just keep on mallocing?
Our application will be mallocing and freeing simultaneously, but I am
trying to simulate our worst-case scenario, in which the process will
have to keep 4GB of data (160-byte chunks) in memory.
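It is worth noting what that worst case costs: 4GB of 160-byte chunks is
about 27 million separate allocations (4GB / 160 bytes), and glibc's malloc
adds bookkeeping to every chunk, typically 8-16 bytes of header plus
alignment padding, so the real footprint runs a few hundred megabytes past
the 4GB of payload.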
Is there something I can do on Linux or RHEL, or maybe something else I
should do in my coding?
Before doing anything you really need to figure out what the
problem is :-).
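One cheap way to do that from inside the process on Linux is to sample
/proc/self/status as the test runs: VmSize shows how far the address space
has grown, VmRSS how much of it is still resident, so a growing VmSize with
a flat or shrinking VmRSS points at swapping rather than at malloc itself.
A sketch:

    #include <stdio.h>
    #include <string.h>

    /* Print the kernel's view of this process's memory (Linux-specific). */
    void print_mem_status(void)
    {
        char line[256];
        FILE *f = fopen("/proc/self/status", "r");
        if (f == NULL)
            return;
        while (fgets(line, sizeof line, f))
            if (strncmp(line, "VmSize:", 7) == 0 ||
                strncmp(line, "VmRSS:", 6) == 0)
                fputs(line, stdout);
        fclose(f);
    }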
The application we're building is a relatively simple server /
application gateway for a telco. It just has to be able to handle a
large amount of data in memory. I am 101% sure the logic in my code is
not flawed, as it is a very simple application for now...
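If the slowdown does turn out to be malloc overhead rather than swap, one
standard coding-side option (not something settled in this thread) is a
simple pool allocator: grab memory from malloc in large slabs and carve the
160-byte records out of them yourself, which costs one malloc per slab
instead of one per record. A minimal sketch, assuming records live until
the process exits (supporting free would need a slab list or a free list,
omitted here):

    #include <stdlib.h>

    #define RECORD_SIZE  160
    #define SLAB_RECORDS 65536              /* 10 MB per slab     */

    static char  *slab;                     /* current slab       */
    static size_t used = SLAB_RECORDS;      /* records handed out */

    /* Hand out the next 160-byte record, touching malloc only
     * once per slab instead of once per record. */
    void *record_alloc(void)
    {
        if (used == SLAB_RECORDS) {
            slab = malloc((size_t) SLAB_RECORDS * RECORD_SIZE);
            if (slab == NULL)
                return NULL;
            used = 0;
        }
        return slab + used++ * RECORD_SIZE;
    }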
Thanks,
Carlo
Some stats...
You might also want to try the dstat program as recommended to me
by Martin Vissier (sp?):
http://dag.wieers.com/home-made/dstat/
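Running it with its memory and swap columns while the malloc test is going
(something like 'dstat -m -s 5'; those flags are from memory, so check
dstat's help output) will show at a glance whether the slowdown lines up
with the box starting to swap.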
Cheers,
Erik