If the object allocation numbers mentioned earlier in this thread are real,
then yes, vm heap management can be a bottleneck. There has to be some
locking done somewhere, otherwise the heap would get corrupted :)
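
(Fwiw, hotspot mitigates that locking with thread-local allocation buffers,
so most small allocations should not contend. If you want to see what the
tlabs are doing, the flags below are standard hotspot options; the classpath
and script name are placeholders, adjust them to your setup:

  java -XX:+UseTLAB -XX:+PrintTLAB -cp clojure.jar clojure.main yourapp.clj

UseTLAB is normally on by default, it is only there to be explicit.)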

The other bottleneck can come from garbage collection, which has to freeze
object allocation completely or partially.

This internal process has to reclaim unreferenced objects, otherwise you may
end up exhausting the heap. It can even suspend your app while the gc is
running, depending on the strategy used.
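
If you want to see those pauses for yourself, turn on gc logging when you
launch the vm. Again these are standard hotspot flags; the classpath and
script name are placeholders:

  java -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps \
       -cp clojure.jar clojure.main bench.clj

Long or frequent "Full GC" lines in that output would point at collection
rather than at your own code.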

In java 6, the vm is supposed to adjust the gc strategy by watching your app's
behavior. That profiling may take some time however. If your app's heavy
memory requirement appears abruptly, the vm may not have enough data to adapt.

Have a look here; you will find how to set the gc behavior explicitly:

http://www.oracle.com/technetwork/java/javase/gc-tuning-6-140523.html
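
For example, to bypass the adaptive behavior and pin down the throughput
collector with a fixed heap, something along these lines can be a starting
point (the heap sizes and thread count are placeholders, not a
recommendation):

  java -XX:+UseParallelGC -XX:+UseParallelOldGC -XX:ParallelGCThreads=8 \
       -Xms2g -Xmx2g \
       -cp clojure.jar clojure.main bench.clj

Setting -Xms equal to -Xmx also avoids heap resizing in the middle of a run.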

This is specific to java 6; java 5 is a different beast, as is java 7.

Maybe this can help. I emphasize *can help*, not having looked deeply into
your problem.

Luc P.


> 
> On Dec 8, 2012, at 10:19 PM, meteorfox wrote:
> > 
> > Now if you run vmstat 1 while running your benchmark, you'll notice that the 
> > run queue will be at 8 most of the time, meaning that 8 "processes" are 
> > waiting for CPU. In this case that waiting is due to memory accesses, though 
> > that is not true for all applications.
> > 
> > So, I feel your benchmark may be artificial and may not truly represent 
> > your real application. Make sure to profile your real application, and 
> > optimize according to the bottlenecks. There are really useful tools out 
> > there for profiling, such as VisualVM and perf.
> 
> Thanks Carlos.
> 
> I don't actually think that the benchmark is particularly artificial. It's 
> very difficult to profile my actual application because it uses random 
> numbers all over the place and is highly and nonlinearly variable in lots of 
> ways. But I think that the benchmark I'm running really is pretty 
> representative.
> 
> In any event, WHY would all of that waiting be happening? Logically, nothing 
> should have to be waiting for anything else. We know from the fact that we 
> get good speedups from multiple simultaneous JVM runs, each just running one 
> call to my burn function, that the hardware is capable of performing well 
> with multiple instances of this benchmark running concurrently. It MUST be 
> able to handle all of the memory allocation and everything else. It's just 
> when we try to launch them all in parallel from the same Clojure process, 
> using pmap OR agents OR reducers, that we fail to get the concurrency 
> speedups. Why should this be? And what can be done about it?
> 
> I know nearly nothing about the internals of the JVM, but is there perhaps a 
> bottleneck on memory allocation because there's a single serial allocator? 
> Perhaps when we run multiple JVMs each has its own allocator so we don't have 
> the bottleneck? If all of this makes sense and is true, then perhaps (wishful 
> thinking) there's a JVM option like "use parallel memory allocators" that 
> will fix it?!
> 
> Thanks,
> 
>  -Lee
>  
> 
--
Softaddicts<lprefonta...@softaddicts.ca> sent by ibisMail from my ipad!
