Hello Shawn,

Primary assumption:  You have a 64-bit OS and a 64-bit JVM.

>>>>>>>>>>>>>Yep, it's running 64-bit Linux with a 64-bit JVM

It sounds to me like you're I/O bound, because your machine cannot
keep enough of your index in RAM.  Relative to your 100GB index, you
only have a maximum of 14GB of RAM available to the OS disk cache,
since Java's heap size is 10GB.

>>>>>>>>>>>>>>>>>>>>The load test seems to be more CPU-bound than I/O-bound.
>>>>>>>>>>>>>>>>>>>>All cores are fully busy and iostat says that there isn't
>>>>>>>>>>>>>>>>>>>>much more disk I/O going on than without the load test. The
>>>>>>>>>>>>>>>>>>>>index is on a RAID10 array with four disks.
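
For what it's worth, "all cores busy" can sometimes include time the kernel
spends waiting on disk, depending on which tool is doing the reporting.  If
you want to separate the two (a rough sketch, assuming the sysstat package is
installed so you have iostat), something like this usually makes it clear:

# vmstat 5
# iostat -x 5

In vmstat, "us"/"sy" is real CPU work while "wa" is time stuck waiting on I/O;
in iostat -x, high "await" and "%util" on the devices holding the index would
point back at the disks.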

How much disk space do all of the index files that end in "x" take up?
I would venture a guess that it's significantly more than 14GB.  On
Linux, you could tally it quickly with this command:

# du -hc *x

>>>>>>>>>>>>>>>>>>>>>>>27G total

# du -hc `ls | egrep -v "tvf|fdt"`

>>>>>>>>>>>>>>>>>>>>>>>51G total
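
That second number (everything except the stored fields in fdt and the term
vector data in tvf) is roughly what you'd ideally want the OS to be able to
cache, and 51GB is a long way above the ~14GB you have available.  A quick
way to see how much memory the kernel actually has for caching right now
(assuming the stock procps free; older versions show it on the
"-/+ buffers/cache" line rather than a "buff/cache" column) is:

# free -g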

If you installed enough RAM so the disk cache can be much larger than
the total size of those files ending in "x", you'd probably stop
having these performance issues.  Realizing that this is a lot of RAM
to add, you could alternatively take steps to reduce the size of your
index, or perhaps add more machines and go distributed.

>>>>>>>>>>>>>>>>>>>>>>>>>>Unfortunately, this doesn't seem to be the problem. 
>>>>>>>>>>>>>>>>>>>>>>>>>>The queries themselves are running fine. The problem 
>>>>>>>>>>>>>>>>>>>>>>>>>>is that replication is crawling when there are 
>>>>>>>>>>>>>>>>>>>>>>>>>>many queries going on and that the replication speed 
>>>>>>>>>>>>>>>>>>>>>>>>>>stays low even after the load is gone.



Cheers
Vadim
