On 2/7/2018 8:08 AM, Shawn Heisey wrote:
> If your queries are producing the correct results, then I will tell you
> that the "summary" part of your query example is quite possibly completely
> unnecessary.

After further thought, I have concluded that this part of what I said is probably completely wrong.
On 2/7/2018 5:20 AM, Maulin Rathod wrote:
> Further analyzing the issue, we found that asking for too many rows (e.g.
> rows=10000000) can cause a full GC problem, as mentioned in the link below.

This is because when you ask for 10 million rows, Solr allocates a memory structure capable of storing information for each of those 10 million rows, even before it starts examining the index.
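(A minimal SolrJ sketch of the cursorMark alternative to large rows values, in the spirit of what the thread recommends; the ZooKeeper address, collection name, and page size are placeholders, not values from the thread:)

import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.params.CursorMarkParams;

public class CursorPaging {
  public static void main(String[] args) throws Exception {
    // Placeholder ZK ensemble and collection; constructor style matches Solr 6.x SolrJ.
    try (CloudSolrClient client = new CloudSolrClient("zk1:2181")) {
      client.setDefaultCollection("mycollection");

      SolrQuery q = new SolrQuery("*:*");
      q.setRows(1000);                            // small, constant page size
      q.setSort(SolrQuery.SortClause.asc("id"));  // cursors require a sort on the uniqueKey

      String cursor = CursorMarkParams.CURSOR_MARK_START;
      while (true) {
        q.set(CursorMarkParams.CURSOR_MARK_PARAM, cursor);
        QueryResponse rsp = client.query(q);
        // ... process rsp.getResults() here ...
        String next = rsp.getNextCursorMark();
        if (cursor.equals(next)) {
          break;  // Solr returns the same mark when the result set is exhausted
        }
        cursor = next;
      }
    }
  }
}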
Sent: 31 January 2018 22:47
To: solr-user
Subject: Re: Long GC Pauses

Just to double check: when you say you're seeing 60-200 sec GC pauses, are you looking at the GC logs (or using some kind of monitor), or is that the time it takes the query to respond to the client? Because a single GC pause that long on 40G is unusual.
... potential causes/solutions to close in on a real fix.

Best,

Jason

On Wed, Jan 31, 2018 at 8:17 AM, Maulin Rathod wrote:

> Hi,
>
> We are using Solr Cloud 6.1. We have around 20 collections on 4 nodes (we
> have 2 shards and each shard has 2 replicas). We have allocated 40 GB RAM
> to each shard.
>
> Intermittently we see long GC pauses (60 sec to 200 sec), due to which
> Solr stops responding and hence collections go into recovering mode.
Hi,

We are using Solr Cloud 6.1. We have around 20 collections on 4 nodes (we have 2 shards and each shard has 2 replicas). We have allocated 40 GB RAM to each shard.

Intermittently we see long GC pauses (60 sec to 200 sec), due to which Solr stops responding and hence collections go into recovering mode.
The reason is that the GC pauses are mostly on the client side and not the server side. I guess you are using the SolrJ client, and this exception is thrown in the client logs.
On Fri, May 19, 2017 at 11:46 PM, Joel Bernstein wrote:
> Odd, I haven't run into this behavior. Are you getting the disconnect from
> the client side, or is this happening in a stream being run inside Solr?
Odd, I haven't run into this behavior. Are you getting the disconnect from
the client side, or is this happening in a stream being run inside Solr?
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, May 19, 2017 at 1:40 PM, Timothy Potter wrote:
> No, not every time, but there was no GC pause on the Solr side ...
No, not every time, but there was no GC pause on the Solr side (no
gaps in the log, nothing in the gc log) ... in the zk log, I do see
this around the same time:
2017-05-05T13:59:52,362 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] - Closed socket connection for client /127...
You get this every time you run the expression?
Joel Bernstein
http://joelsolr.blogspot.com/
On Fri, May 19, 2017 at 10:44 AM, Timothy Potter wrote:
> I'm executing a streaming expr and get this error:
>
> Caused by: org.apache.solr.common.SolrException: Could not load
> collection from ZK:
> MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
I'm executing a streaming expr and get this error:
Caused by: org.apache.solr.common.SolrException: Could not load
collection from ZK:
MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1098)
at ...
> ... If I run a query that returns only 500K documents, still keeping 100K
> docs per page, I don't see long GC pauses.

500K docs is far less than your worst-case 80*100K. You are not keeping the effective page size constant across your tests; you need to do that in order to conclude that it is the result set size that is the problem.
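(Worked out with the numbers from elsewhere in the thread: 100K rows per page x 80 shards = 8M sort entries that the aggregator node may buffer for a single page, each carrying ~40 bytes of sort values plus per-entry object overhead, versus only 500K entries for the entire smaller result set. The effective page size, not the total hit count, drives the allocation.)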
On 4/13/2017 11:51 AM, Chetas Joshi wrote:
> Thanks for the insights into the memory requirements. Looks like the cursor
> approach is going to require a lot of memory for millions of documents.
> If I run a query that returns only 500K documents, still keeping 100K docs
> per page, I don't see long GC pauses.
Hi Shawn,

Thanks for the insights into the memory requirements. Looks like the cursor approach is going to require a lot of memory for millions of documents. If I run a query that returns only 500K documents, still keeping 100K docs per page, I don't see long GC pauses. So it is not really the number of documents ...
You're missing the point of my comment. Since they already are
docValues, you can use the /export functionality to get the results
back as a _stream_ and avoid all of the overhead of the aggregator
node doing a merge sort and all of that.
You'll have to do this from SolrJ, but see CloudSolrStream.
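(For reference, a minimal sketch of what that SolrJ code could look like, assuming a placeholder zkHost and collection, and assuming every field in fl has docValues as required by /export:)

import java.io.IOException;
import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.common.params.ModifiableSolrParams;

public class ExportStream {
  public static void main(String[] args) throws IOException {
    ModifiableSolrParams params = new ModifiableSolrParams();
    params.set("q", "*:*");
    params.set("fl", "id,fieldA,fieldB");   // all fl fields must have docValues
    params.set("sort", "id asc");
    params.set("qt", "/export");            // stream the whole sorted result set, no paging

    // Placeholder ZK address and collection name.
    CloudSolrStream stream = new CloudSolrStream("zk1:2181", "mycollection", params);
    StreamContext context = new StreamContext();
    context.setSolrClientCache(new SolrClientCache());
    stream.setStreamContext(context);
    try {
      stream.open();
      while (true) {
        Tuple tuple = stream.read();
        if (tuple.EOF) break;               // EOF tuple marks the end of the stream
        // ... process tuple.getString("id") etc. ...
      }
    } finally {
      stream.close();
    }
  }
}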
On 4/12/2017 5:19 PM, Chetas Joshi wrote:
> I am getting back 100K results per page.
> The fields have docValues enabled and I am getting sorted results based on
> "id" and 2 more fields (String: 32 Bytes and Long: 8 Bytes).
>
> I have a solr Cloud of 80 nodes. There will be one shard that will get ...
> I am querying a solr collection with index size = 500 MB per core.

I see that you and I have traded messages before on the list.

How much total system memory is there per server? How many of these 500MB cores are on each server? How many docs are in a 500MB core? The answers to these questions may affect the other advice that I give you.

> The off-heap (25 GB) is huge so that it can load the entire index.

I still know very little about how HDFS handles caching and memory. You want to be sure that as much data as possible from your indexes is sitting in local memory on the server.

> Using cursor approach (number of rows = 100K), I read 2 fields (Total 40
> bytes per solr doc) from the Solr docs that satisfy the query. The docs are
> sorted by "id" and then by those 2 fields.
>
> I am not able to understand why the heap memory is getting full and Full
> GCs are consecutively running with long GC pauses (> 30 seconds). I am
> using CMS GC.

A 20GB heap is quite large. Do you actually need it to be that large? If you graph JVM heap usage over a long period of time, what are the low points?
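(One standard way to produce that graph on a Java 8 / CMS setup, not spelled out in the thread, is to enable HotSpot GC logging and chart the post-GC heap sizes; the log path below is illustrative:)

-verbose:gc \
-Xloggc:logs/solr_gc.log \
-XX:+PrintGCDetails \
-XX:+PrintGCDateStamps \
-XX:+PrintGCApplicationStoppedTime \
-XX:+PrintTenuringDistribution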
-XX:NewRatio=3 \
-XX:SurvivorRatio=4 \
-XX:TargetSurvivorRatio=90 \
-XX:MaxTenuringThreshold=8 \
-XX:+UseConcMarkSweepGC
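(These five flags match the start of the stock GC_TUNE block shipped in bin/solr.in.sh for Solr 5.x/6.x; assuming that is the source, the rest of that stock block continues roughly as follows, worth verifying against your own solr.in.sh:)

-XX:+UseParNewGC \
-XX:ConcGCThreads=4 -XX:ParallelGCThreads=4 \
-XX:+CMSScavengeBeforeRemark \
-XX:PretenureSizeThreshold=64m \
-XX:+UseCMSInitiatingOccupancyOnly \
-XX:CMSInitiatingOccupancyFraction=50 \
-XX:CMSMaxAbortablePrecleanTime=6000 \
-XX:+CMSParallelRemarkEnabled \
-XX:+ParallelRefProcEnabled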