On 29/03/13 19:45, Joshua Greben wrote:
Yes, there are a lot of large literals.

The general slowness is probably this then - though I still don't really see why it gets slower later in the load.

If the data is available, I'll take a look. I don't have any hidden plan for speeding up large literals, but if there is something that can be done, I'd like to do it.

(I'm tempted to run two separate caches: one for large literals, one for small ones.)
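Purely to sketch the idea (this is not TDB's actual node cache; the threshold and capacities are invented), something like two LRU maps split by literal size, so a handful of big literals can't evict lots of small, hot ones:

    import java.util.LinkedHashMap;
    import java.util.Map;

    /** Sketch only: split the cache by literal size so a few big
     *  literals can't evict lots of small, frequently used ones.
     *  Threshold and capacities are made up, not TDB's. */
    public class SplitLiteralCache<K> {
        private static final int SIZE_THRESHOLD = 1024;     // chars, arbitrary cut-off

        private final Map<K, String> small = lru(100_000);  // many small entries
        private final Map<K, String> large = lru(1_000);    // few large entries

        private static <K> Map<K, String> lru(final int maxEntries) {
            // An access-ordered LinkedHashMap gives a simple LRU eviction policy.
            return new LinkedHashMap<K, String>(16, 0.75f, true) {
                @Override protected boolean removeEldestEntry(Map.Entry<K, String> eldest) {
                    return size() > maxEntries;
                }
            };
        }

        public void put(K key, String literal) {
            (literal.length() > SIZE_THRESHOLD ? large : small).put(key, literal);
        }

        public String get(K key) {
            String v = small.get(key);
            return (v != null) ? v : large.get(key);
        }
    }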


I am using tdbloader.

tdbloader2 may do better, but large literals have an impact on it as well (sometimes less of one).


The OS is Red Hat 6.3. The hardware is definitely shared, but I am
not sure what the other applications are. I will have to ask the Sys
Admin. If memory serves, the hardware has something like 90-ish GB
of RAM, so I currently have the lion's share. All the RAM was being
used by tdbloader at the time. I will also have to inquire about
a possible ulimit on memory-mapped files.

Good to know - if the Java process looks huge, that is expected: the process size includes the memory-mapped file areas, so it can appear larger than the machine's RAM.

I don't think that I mentioned this, but I gave 3200M each as the
JVM_ARGS (Xms and Xmx). Before that I tried giving it 60G, but it
started swapping on GC. Maybe there is a happy medium?

3G, or large enough to hold your large literals in the node cache (not easy to estimate, I'm afraid).
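For example, with the standard Jena scripts, which pick JVM_ARGS up from the environment (the database directory and data file here are just placeholders):

    JVM_ARGS="-Xms3G -Xmx3G" tdbloader --loc=/path/to/DB data.nt.gz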

The memory-mapped files aren't in the heap, so a very large heap is actually bad: it reduces the space available for OS file caching.
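A quick way to see that for yourself (a standalone sketch, not TDB code - it just maps a file read-only and reports the sizes):

    import java.io.IOException;
    import java.nio.MappedByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class MappedVsHeap {
        public static void main(String[] args) throws IOException {
            long mb = 1024 * 1024;
            System.out.println("Max heap (-Xmx): "
                    + Runtime.getRuntime().maxMemory() / mb + " MB");
            try (FileChannel ch = FileChannel.open(Paths.get(args[0]),
                                                   StandardOpenOption.READ)) {
                // A single mapping is limited to 2G, so cap the demo there.
                long len = Math.min(ch.size(), Integer.MAX_VALUE);
                // The mapping lives outside the Java heap: the OS pages it
                // through the filesystem cache, so -Xmx doesn't limit it -
                // and a huge -Xmx takes RAM away from that same cache.
                MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_ONLY, 0, len);
                System.out.println("Mapped " + buf.capacity() / mb + " MB outside the heap");
            }
        }
    }

Run it with a small -Xmx against a file bigger than the heap: the mapping still succeeds, because the OS, not the JVM heap, backs those pages.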


Thanks again for the pointers and advice.

- Josh
