Re: High CPU Usage in export handler

2016-11-08 Thread Erick Erickson
Joel:

I did a little work with SOLR-9296 to try to reduce the number of
objects created, which would relieve GC pressure both at creation and
collection time. I didn't measure CPU utilization before/after, but I
did see up to an 11% increase in throughput.
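
(The general shape of that kind of change is reusing one mutable holder
across documents instead of allocating a fresh object per document. The
sketch below is only an illustration of the technique, not the actual
SOLR-9296 patch; the class and method names are invented.)

    // Illustrative only: cutting per-document allocation by reusing a holder.
    import java.util.concurrent.ThreadLocalRandom;

    public class ReuseSketch {
        // One mutable holder reused across the whole loop, so the hot
        // path allocates nothing and the GC has less to collect.
        static final class DoubleHolder {
            double value;
        }

        public static void main(String[] args) {
            DoubleHolder holder = new DoubleHolder();
            double sum = 0;
            for (int doc = 0; doc < 5_000; doc++) {
                // Instead of: Double boxed = readValue(doc); // new object per doc
                holder.value = readValue(doc);
                sum += holder.value;
            }
            System.out.println(sum);
        }

        static double readValue(int doc) {
            return ThreadLocalRandom.current().nextDouble();
        }
    }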

It wouldn't hurt my feelings at all to have someone grab that JIRA
away from me since it's pretty obvious I'm not going to get back to it
for a while.

Erick

On Tue, Nov 8, 2016 at 11:44 AM, Ray Niu  wrote:
> Thanks Joel.
>
> [earlier quoted messages trimmed; the full exchange appears below in
> this thread]


Re: High CPU Usage in export handler

2016-11-08 Thread Ray Niu
Thanks Joel.

2016-11-08 11:43 GMT-08:00 Joel Bernstein :

> [quoted message trimmed; Joel's reply appears in full below]


Re: High CPU Usage in export handler

2016-11-08 Thread Joel Bernstein
It sounds like your scenario is around 25 queries per second, each pulling
entire results. That would be enough to drive up CPU usage, as you have more
concurrent requests than CPUs. Since there isn't much IO blocking happening
in the scenario you describe, I would expect some pretty busy CPUs.
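
(The arithmetic behind that estimate: 1,000-2,000 reads per minute per node
works out to between 1,000/60, about 17, and 2,000/60, about 33, requests
per second, so roughly 25 QPS in the middle of that range.)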

That being said, I think it would be useful to understand exactly where the
hotspots are in Lucene to see if we can make this more efficient.

Leading up to the 6.4 release I'll try to spend some time understanding the
Lucene hotspots with /export. I'll report back to this thread when I have
more info.


Joel Bernstein
http://joelsolr.blogspot.com/

On Mon, Nov 7, 2016 at 3:44 PM, Ray Niu  wrote:

> Hello:
>    Any follow up?
>
> [earlier quoted messages trimmed]


Re: High CPU Usage in export handler

2016-11-07 Thread Ray Niu
Hello:
   Any follow up?

2016-11-03 11:18 GMT-07:00 Ray Niu :

> the soft commit is 15 seconds and hard commit is 10 minutes.
>
> [earlier quoted messages trimmed]


Re: High CPU Usage in export handler

2016-11-03 Thread Erick Erickson
Follow-up question: you say you're indexing 100 docs/second. How often
are you _committing_? Either a soft commit, or a hard commit with
openSearcher=true?

Best,
Erick

On Thu, Nov 3, 2016 at 11:00 AM, Ray Niu  wrote:
> [quoted message trimmed; Ray's answers appear in full below]


Re: High CPU Usage in export handler

2016-11-03 Thread Ray Niu
the soft commit interval is 15 seconds and the hard commit interval is 10 minutes.
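
(Expressed as solrconfig.xml update-handler settings, that schedule would
look roughly like the sketch below. Whether the hard commit opens a new
searcher wasn't stated in the thread, so openSearcher=false, the usual
recommendation, is assumed here.)

    <!-- Hard commit every 10 minutes; assumed not to open a new searcher. -->
    <autoCommit>
      <maxTime>600000</maxTime>
      <openSearcher>false</openSearcher>
    </autoCommit>

    <!-- Soft commit every 15 seconds makes new documents visible to searches. -->
    <autoSoftCommit>
      <maxTime>15000</maxTime>
    </autoSoftCommit>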

2016-11-03 11:11 GMT-07:00 Erick Erickson :

> [quoted message trimmed; Erick's question appears in full above]


Re: High CPU Usage in export handler

2016-11-03 Thread Ray Niu
Thanks Joel,
here is the information you requested.
Are you doing heavy writes at the time?
we write very frequently, but not very heavily; we update about 100 Solr
documents per second.
How many concurrent reads are happening?
the concurrent reads are about 1,000-2,000 per minute per node
What version of Solr are you using?
we are using Solr 5.5.2
What is the field definition for the double, is it docValues?
the field definition is
<field name="..." type="tdouble" docValues="true"/>
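
(For context, the tdouble type referenced above is declared in the stock
Solr 5.x example schema roughly as below; the field name shown here is a
stand-in, since the real name was stripped by the mail archive.)

    <fieldType name="tdouble" class="solr.TrieDoubleField" precisionStep="8" positionIncrementGap="0"/>
    <field name="price_d" type="tdouble" indexed="true" stored="true" docValues="true"/>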

2016-11-03 6:30 GMT-07:00 Joel Bernstein :

> [quoted messages trimmed; Joel's questions and the original report appear
> in full below]


Re: High CPU Usage in export handler

2016-11-03 Thread Joel Bernstein
Are you doing heavy writes at the time?

How many concurrent reads are happening?

What version of Solr are you using?

What is the field definition for the double, is it docValues?




Joel Bernstein
http://joelsolr.blogspot.com/

On Thu, Nov 3, 2016 at 12:56 AM, Ray Niu  wrote:

> Hello:
>    We are using the export handler in SolrCloud to get some data; we only
> request one field, whose type is tdouble. It worked well at the beginning,
> but recently we saw a high CPU issue on all the SolrCloud nodes. We took
> some thread dumps and found the following information:
>
>    java.lang.Thread.State: RUNNABLE
>
>        at java.lang.Thread.isAlive(Native Method)
>        at org.apache.lucene.util.CloseableThreadLocal.purge(CloseableThreadLocal.java:115)
>        - locked <0x0006e24d86a8> (a java.util.WeakHashMap)
>        at org.apache.lucene.util.CloseableThreadLocal.maybePurge(CloseableThreadLocal.java:105)
>        at org.apache.lucene.util.CloseableThreadLocal.get(CloseableThreadLocal.java:88)
>        at org.apache.lucene.index.CodecReader.getNumericDocValues(CodecReader.java:143)
>        at org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:430)
>        at org.apache.lucene.uninverting.UninvertingReader.getNumericDocValues(UninvertingReader.java:239)
>        at org.apache.lucene.index.FilterLeafReader.getNumericDocValues(FilterLeafReader.java:430)
>
> Is this a known issue for the export handler? As we only fetch up to 5,000
> documents, it should not be a data-volume issue.
>
> Can anyone help on that? Thanks a lot.
>
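
(For reference, an /export request of the kind described looks something
like the line below; the collection and field names are illustrative. The
export handler requires both sort and fl, and the exported fields must have
docValues enabled.)

    curl 'http://localhost:8983/solr/mycollection/export?q=*:*&sort=price_d+asc&fl=price_d'

(The hot frames in the trace are in CloseableThreadLocal, whose get()
periodically scans a WeakHashMap of per-thread entries under a lock to
evict entries for threads that have died. The sketch below is a simplified,
illustrative reimplementation of that pattern, not Lucene's actual code,
showing why the scan can dominate CPU when many reader threads call get()
concurrently.)

    import java.util.Map;
    import java.util.WeakHashMap;

    // Simplified sketch of the CloseableThreadLocal purge pattern.
    public class PurgingThreadLocal<T> {
        private final ThreadLocal<T> local = new ThreadLocal<>();
        // Tracks which threads hold a value so dead threads can be evicted.
        private final Map<Thread, T> perThread = new WeakHashMap<>();
        private int countUntilPurge = 20;

        public T get() {
            T value = local.get();
            maybePurge();  // every N gets, scan all entries under the lock
            return value;
        }

        public void set(T value) {
            local.set(value);
            synchronized (perThread) {
                perThread.put(Thread.currentThread(), value);
            }
        }

        private void maybePurge() {
            synchronized (perThread) {
                if (--countUntilPurge <= 0) {
                    // O(tracked threads) while holding the lock; with many
                    // concurrent readers this scan shows up as busy CPU and
                    // as Thread.isAlive() frames in thread dumps.
                    perThread.keySet().removeIf(t -> !t.isAlive());
                    countUntilPurge = 20 * perThread.size() + 20;
                }
            }
        }
    }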