Ok sorry, I just added the parameter -XX:+UseParallelGC and it no longer
seems to go OOM.
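
For reference, the full option line now looks roughly like this (a sketch
combining the new flag with the heap and GC-logging settings quoted below):

  JAVA_OPTS="-Xms1000m -Xmx4000m -XX:+UseParallelGC \
    -XX:+HeapDumpOnOutOfMemoryError \
    -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"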




sunnyfr wrote:
> 
> Actually I just noticed that a lot of requests didn't bring back a correct
> answer, but "No read Solr server available", so my JMeter didn't count that
> as an error. Obviously out of memory, and a gc.log file is created with:
> 0.054: [GC [PSYoungGen: 5121K->256K(298688K)] 5121K->256K(981376K), 0.0020630 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 0.056: [Full GC (System) [PSYoungGen: 256K->0K(298688K)] [PSOldGen: 0K->180K(682688K)] 256K->180K(981376K) [PSPermGen: 3002K->3002K(21248K)], 0.0055170 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
> 
> so far my tomcat55 file is configured like this:
> JAVA_OPTS="-Xms1000m -Xmx4000m -XX:+HeapDumpOnOutOfMemoryError
> -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps"
> 
> My error:
> Dec 11 14:16:27 solr-test jsvc.exec[30653]:
> Dec 11, 2008 2:16:27 PM org.apache.solr.core.SolrCore execute INFO: [video] webapp=/solr path=/admin/ping params={} hits=0 status=0 QTime=1
> Dec 11, 2008 2:16:27 PM org.apache.solr.core.SolrCore execute INFO: [video] webapp=/solr path=/admin/ping params={} status=0 QTime=1
> Dec 11, 2008 2:16:27 PM org.apache.solr.core.SolrCore execute INFO: [video] webapp=/solr path=/admin/ping params={} hits=0 status=0 QTime=1
> Dec 11, 2008 2:16:27 PM org.apache.solr.core.SolrCore execute INFO: [video] webapp=/solr path=/admin/ping params={} status=0 QTime=2
> Dec 11 14:16:27 solr-test jsvc.exec[30653]: java.lang.OutOfMemoryError: GC overhead limit exceeded
> Dumping heap to java_pid30655.hprof ...
> 
> 
> 
> 
> Thanks for your help
> 
> 
> 
> Shalin Shekhar Mangar wrote:
>> 
>> Are all of those queries unique?
>> 
>> First-time queries are slower. Solr caches them, so issuing the same query
>> again returns results very quickly because it doesn't need to hit the file
>> system.
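
A quick way to see the cache at work from the shell is to fire the same query
twice and compare the QTime values; the host, port, and query below are
made-up examples:

  # The first request pays the disk cost; the identical repeat should be
  # answered from Solr's query cache with a much smaller QTime.
  curl 'http://localhost:8080/solr/select?q=text:video&rows=10'
  curl 'http://localhost:8080/solr/select?q=text:video&rows=10'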
>> 
>> On Thu, Dec 11, 2008 at 4:08 PM, sunnyfr <[EMAIL PROTECTED]> wrote:
>> 
>>>
>>> Hi,
>>>
>>> I'm doing a stress test on Solr.
>>> I have around 8.5M docs, and the size of my data directory is 5.6G.
>>>
>>> I've re-indexed my data to make it faster, and applied all the latest
>>> patches.
>>> My index stores just two fields: id and text (which is a copy of three
>>> fields).
>>> But I still find it very slow; what do you think?
>>>
>>> At 50 requests/sec for 40 minutes, my average response time is 1235 msec
>>> over 49430 requests.
>>>
>>> When I run the test at 100 requests/sec for 10 minutes, followed by 10
>>> more minutes at 50 requests/sec, my average response time is 1600 msec.
>>> Don't you think that's a bit long?
>>>
>>> Should I partition this index more? What should I do to make this work
>>> faster?
>>> I've read posts from people getting just 300 msec per request on 300 GB
>>> of partitioned index.
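
For reference, Solr 1.3's distributed search spreads a query over partitions
via the shards parameter; a sketch, with hypothetical host names:

  curl 'http://host1:8080/solr/select?q=text:video&shards=host1:8080/solr,host2:8080/solr'

Each shard holds a slice of the index, and the node receiving the request
merges the per-shard results.
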
>>> The query that collects all this data is quite complex, with lots of
>>> tables joined together; maybe it would be faster if I created a CSV file?
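
Loading from a CSV dump is an option: Solr 1.3 ships a CSV update handler.
A sketch, where the file name and URL are assumptions:

  # Export the joined MySQL tables to a file once, then stream it to Solr;
  # the expensive joins then run once in bulk instead of during indexing.
  curl 'http://localhost:8080/solr/update/csv?commit=true' \
    --data-binary @docs.csv -H 'Content-type: text/plain; charset=utf-8'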
>>>
>>> The server I'm using for the test has 8G of memory.
>>> 4 CPUs: Intel(R) Xeon(R) CPU 5160 @ 3.00GHz
>>> Tomcat55: -Xms2000m -Xmx4000m
>>> Solr 1.3.
>>>
>>> What can I change to make it perform better? Memory, indexing...?
>>> Could it come from my query to the MySQL database, which has too many
>>> joins?
>>>
>>> Thanks a lot for your help,
>>> Johanna
>>>
>>>
>> 
>> 
>> -- 
>> Regards,
>> Shalin Shekhar Mangar.
>> 
>> 
> 
> 
