Andrei Alexandrescu wrote:
Benji Smith wrote:
Actually, memory allocated in the JVM is very cache-friendly, since
two consecutive allocations will typically be adjacent to one another
in memory. And, since the JVM uses a moving GC, long-lived objects
get compacted closer and closer together.
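[For context, the adjacency claim comes from bump-pointer allocation: inside a thread-local allocation buffer, allocating is just incrementing a pointer, so back-to-back allocations occupy back-to-back addresses. A toy sketch of the idea; `BumpArena` is a hypothetical illustration, not the JVM's actual allocator:]

```java
// Hypothetical sketch of a bump-pointer allocator, the scheme behind
// JVM thread-local allocation buffers (TLABs). Allocation is a single
// pointer increment, so consecutive allocations are adjacent by construction.
final class BumpArena {
    private final byte[] heap;
    private int top = 0;

    BumpArena(int size) { heap = new byte[size]; }

    /** Returns the offset of the new block; no free-list search, just a bump. */
    int allocate(int size) {
        int offset = top;
        top += size; // bump the pointer
        return offset;
    }
}

public class BumpDemo {
    public static void main(String[] args) {
        BumpArena arena = new BumpArena(1024);
        int a = arena.allocate(32);
        int b = arena.allocate(32);
        System.out.println(b - a); // adjacent blocks: prints 32
    }
}
```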
Well, the problem is that the allocated area grows quickly. Allocate and
dispose of one object per loop iteration, and pages get eaten fast:
for (...) {
    JavaClassWithAReallyLongNameAsTheyUsuallyAre o = factory.giveMeOne();
    o.method();
}
The escape analyzer could detect that the variable doesn't survive a
pass through the loop, but the call to method() makes things rather
tricky (it's virtual, its source may be unavailable...). So we're facing
a quickly growing allocation block, and consequently less cache
friendliness and more frequent collections.
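[To illustrate the distinction being drawn here, a hedged sketch; `Point` and `nonEscaping` are my own names, not from the thread. HotSpot's escape analysis can often scalar-replace an allocation whose reference provably never leaves the method, but whether it actually fires depends on the JIT inlining the calls involved; a virtual call into code it can't see through, like `o.method()` above, can block the proof:]

```java
// A loop allocation that escape analysis can typically eliminate:
// the Point reference never leaves nonEscaping(), and sum() is
// trivially inlinable, so the JIT may replace the object with
// its scalar fields and allocate nothing on the heap.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    int sum() { return x + y; }
}

public class EscapeDemo {
    static int nonEscaping(int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            Point p = new Point(i, i + 1); // does not escape this method
            total += p.sum();
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(nonEscaping(10)); // sum of (2i + 1) for i in 0..9: prints 100
    }
}
```

By contrast, if the loop body called a virtual method on an interface type loaded at runtime, the JIT could not prove non-escape without inlining it, which is exactly the "virtual, source unavailable" obstacle above.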
Andrei
Good point. I remember five years ago when people were buzzing about the
possible implementation of escape analysis in the next Java version, and
how it'd move a boatload of intermediate object allocations from the
heap to the stack. Personally, I don't think it'll ever happen. They
can't even agree on how to get *closures* into the language.
I personally think the JVM and the HotSpot compiler are two of the
greatest accomplishments in computer science. But the Java community has
long since jumped the shark, and I don't expect much innovation from
that neighborhood anymore.
--benji