Rich Freeman <rich0 <at> gentoo.org> writes:

> A big problem with Linux along these fronts is that we don't really
> have good mechanisms for prioritizing memory use.  You can set hard
> limits of course, which aren't flexible, but otherwise software is
> trusted to just guess how much RAM it should use.

Exactamundo!
Besides fine-grained controls, I want it all in a full-featured,
controllable GUI! Clustering is where it's at. Right now, much of the
fuss I read in the clustering groups, particularly around Spark and
other "in-memory" tools, is about monitoring and managing all types
of memory and the issues that come with it. [1]
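
To Rich's point about hard limits: the memory cgroup is the mechanism
we have today, and a quick sketch shows why it is a blunt instrument.
This is only an illustration in Python, assuming the cgroup v1 memory
controller is mounted at /sys/fs/cgroup/memory and we can run as root;
the group name "demo" is made up:

# Illustrative only: put the current process under a hard memory cap
# using the cgroup v1 memory controller.  Assumes the controller is
# mounted at /sys/fs/cgroup/memory and we run as root; the group
# name "demo" is hypothetical.
import os

CG = "/sys/fs/cgroup/memory/demo"
os.makedirs(CG, exist_ok=True)

# Hard ceiling: beyond this the kernel swaps, then OOM-kills the group.
with open(os.path.join(CG, "memory.limit_in_bytes"), "w") as f:
    f.write(str(512 * 1024 * 1024))          # 512 MiB

# Move ourselves into the group; all future allocations are counted.
with open(os.path.join(CG, "tasks"), "w") as f:
    f.write(str(os.getpid()))

Past memory.limit_in_bytes the kernel swaps and then OOM-kills the
group; there is no way to say "reclaim my cache first," which is
exactly the gap Rich describes.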


> It would be nice if processes could allocate cache RAM, which could be
> preferentially freed if the kernel deems necessary.  If some pages are
> easier to regenerate than to swap, this could also be flagged (I have
> a 50Mbps connection - I'd rather see my browser re-fetch pages than go
> to disk when the disk is already busy).  There are probably a lot of
> other ways that memory use could be optimized with hinting.
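
(Aside: madvise(2) already gets partway to this. A minimal sketch,
assuming Linux 4.5+ for MADV_FREE and Python 3.8+ for mmap.madvise;
the buffer here stands in for any re-fetchable cache:)

# A minimal sketch of "cache RAM the kernel may reclaim first" using
# madvise(2).  Assumes Linux 4.5+ (MADV_FREE) and Python 3.8+
# (mmap.madvise); on older kernels MADV_DONTNEED is the blunter cousin.
import mmap

SIZE = 64 * 1024 * 1024                  # 64 MiB anonymous mapping

cache = mmap.mmap(-1, SIZE)              # stand-in for re-fetchable data
cache[:] = b"x" * SIZE                   # e.g. decoded pages from the net

# Hint: these pages are cheap to regenerate.  Under memory pressure the
# kernel may discard them instead of swapping them out; a later read
# sees zeros, so the application must notice and re-fetch.
cache.madvise(mmap.MADV_FREE)

It is only a hint, not a priority system, but it is the closest
existing knob to what Rich is asking for.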

I think you need to look into Apache Spark. It is exploding in
popularity. The technology to run certain workloads 100% in memory
looks to be a revolution, driven by Mesos/Spark clusters. [2] The
weapons of choice on top of Mesos/Spark are Python, Java, and Scala
(all in portage).
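
For a taste of what "100% in memory" means in Spark terms: persist()
with MEMORY_ONLY keeps the working set in executor RAM and recomputes
evicted partitions instead of spilling them to disk. A throwaway
PySpark sketch (the job itself is made up):

# Toy PySpark job showing the in-memory knob: persist() with
# MEMORY_ONLY caches partitions in executor RAM and recomputes,
# rather than spills, anything that gets evicted.
from pyspark import SparkContext, StorageLevel

sc = SparkContext("local[*]", "in-memory-demo")

data = sc.parallelize(range(10_000_000))
squares = data.map(lambda x: x * x)

squares.persist(StorageLevel.MEMORY_ONLY)   # cache in RAM, no disk spill

print(squares.sum())    # first action materializes the cache
print(squares.count())  # second action reuses it from memory

sc.stop()

Swap MEMORY_ONLY for MEMORY_AND_DISK and Spark spills instead of
recomputing; picking between those trade-offs is exactly the
memory-management fuss I mentioned above.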


hth,
James

[1] https://issues.apache.org/jira/browse/SPARK-3535

[2] https://amplab.cs.berkeley.edu/
    http://radar.oreilly.com/2014/06/a-growing-number-of-applications-are-being-built-with-spark.html

