The reason is usually GC pauses, mostly on the client side and not the server
side. I guess you are using the SolrJ client and this exception is thrown in
the client logs.
On Fri, May 19, 2017 at 11:46 PM, Joel Bernstein wrote:
> Odd, I haven't run into this behavior. Are you getting the
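One way to confirm whether the client JVM is the one pausing is to turn on GC
logging there. A minimal sketch of the flags, assuming a Java 8 HotSpot JVM
(the log path is illustrative; on Java 9+ use `-Xlog:gc*` instead):

```
-XX:+PrintGCDetails
-XX:+PrintGCDateStamps
-XX:+PrintGCApplicationStoppedTime
-Xloggc:/var/log/client-gc.log
```

Long "Total time for which application threads were stopped" entries lining up
with the client-side exceptions would point at client GC rather than the server.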
We have already faced this issue and found the cause to be long GC pauses on
either the client side or the server side.
Regards,
Piyush
On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey wrote:
> On 5/3/2017 7:32 AM, Satya Marivada wrote:
> > I see below exceptions in my logs
I have also noticed this issue, and it happens while creating the collated
result, mostly due to a large version mismatch between the server and the
client. The best idea would be to use the same server and client version.
Otherwise, switch off collation (you can still keep spell check on) and do the
collation (
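The "switch off collation" suggestion above maps to the spellcheck request
parameters. A minimal sketch of a search handler in solrconfig.xml, assuming a
spellcheck search component registered under the name `spellcheck` (handler
name and layout are illustrative, not taken from the poster's config):

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <!-- keep suggestions on, but skip building collated queries -->
    <str name="spellcheck">true</str>
    <str name="spellcheck.collate">false</str>
  </lst>
  <arr name="last-components">
    <str>spellcheck</str>
  </arr>
</requestHandler>
```

The same switch can also be flipped per request with `spellcheck.collate=false`
on the query string, which is handy for confirming that collation is the
expensive part.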
x like 20K docs/sec I don't know why.
>
> --
>
> /Yago Riveiro
>
> On 16 Dec 2016, 08:39 +, Piyush Kunal <piyush.ku...@myntra.com>,
> wrote:
> > Anyone has noticed such issue before?
> >
> > On Thu, Dec 15, 2016 at 4:36 PM, Piyush Kunal <piyush.ku.
I think 70GB is too huge for a shard.
How much memory does the system have?
In case Solr does not have sufficient memory to load the indexes, it will
use only the amount of memory defined in your Solr caches.
Although you are on HDFS, Solr performance will be really bad if it has to do
disk IO
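The memory point above can be made concrete with a back-of-the-envelope check:
the OS page cache only gets whatever RAM is left after the JVM heap, so a 70 GB
shard on a small box is mostly served from disk. A rough sketch (the 8 GB heap
below is an illustrative assumption, not a number from this thread):

```python
def cacheable_fraction(ram_gb, heap_gb, index_gb):
    """Rough fraction of the index the OS page cache can hold."""
    headroom = max(ram_gb - heap_gb, 0)  # RAM left over for the page cache
    return min(headroom / index_gb, 1.0)

# 16 GB box, 8 GB heap, 70 GB shard: only ~11% of the index fits in cache,
# so most queries touch disk
print(round(cacheable_fraction(16, 8, 70), 2))
```

Anything well below 1.0 here means cold segments are read from disk (or HDFS)
on the query path, which matches the bad performance described above.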
Anyone has noticed such issue before?
On Thu, Dec 15, 2016 at 4:36 PM, Piyush Kunal <piyush.ku...@myntra.com>
wrote:
> This is happening when heavy indexing like 100/second is going on.
>
> On Thu, Dec 15, 2016 at 4:33 PM, Piyush Kunal <piyush.ku...@myntra.com>
> wrote:
This is happening when heavy indexing, around 100 documents/second, is going on.
On Thu, Dec 15, 2016 at 4:33 PM, Piyush Kunal <piyush.ku...@myntra.com>
wrote:
> - We have solr6.1.0 cluster running on production with 1 shard and 5
> replicas.
> - Zookeeper quorum on 3 nodes.
> - Using a c
- We have a Solr 6.1.0 cluster running on production with 1 shard and 5
replicas.
- ZooKeeper quorum on 3 nodes.
- Using a chroot in ZooKeeper to segregate the configs from other
collections.
- Using SolrJ 5.1.0 as our client to query Solr.
Usually things work fine but on and off we witness this
All our shards and replicas reside on different machines with 16GB RAM and
4 cores.
On Tue, Dec 13, 2016 at 1:44 AM, Piyush Kunal <piyush.ku...@myntra.com>
wrote:
> We did the following change:
>
> 1. Previously we had 1 shard and 32 replicas for 1.2million documents of
>
We did the following change:
1. Previously we had 1 shard and 32 replicas for 1.2 million documents of
size 5 GB.
2. We changed it to 4 shards and 8 replicas for 1.2 million documents of
size 5 GB.
We have a combined request rate of around 20k RPM for Solr.
But unfortunately we saw a degradation in performance
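One plausible explanation for the degradation can be sketched as arithmetic,
assuming every query fans out to all shards and replicas of a shard split its
load evenly (standard SolrCloud distributed-search behavior; the exact overhead
of merging shard responses is not modeled here):

```python
def per_replica_rpm(total_rpm, replicas_per_shard):
    """Each query must hit one replica of every shard, so every shard
    tier sees the full query rate, split across its replicas."""
    return total_rpm / replicas_per_shard

# Old layout: 1 shard x 32 replicas -> each replica serves 625 rpm
print(per_replica_rpm(20000, 32))
# New layout: 4 shards x 8 replicas -> each replica serves 2500 rpm,
# plus the coordinating node must merge 4 shard responses per query
print(per_replica_rpm(20000, 8))
```

With only 1.2 million documents in 5 GB, a single shard fits comfortably, so
sharding quadrupled per-replica load and added a merge step without relieving
any real pressure.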
We are using a SolrCloud 6.1 cluster with ZooKeeper.
We have 6 nodes running behind the cluster.
If I use the SolrJ client with ZooKeeper, it will round-robin across all the
servers and distribute equal load across them.
But I want to give priority to some nodes (with better configuration) to have
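SolrJ's ZooKeeper-aware client balances replicas itself, so preferring the
bigger boxes would mean choosing endpoints on the application side and sending
requests to the chosen node's URL directly. A minimal sketch of client-side
weighted selection (node URLs and weights are illustrative; this is not a
built-in SolrJ feature):

```python
import random

def pick_node(weighted_nodes, rng=random):
    """Pick one node URL with probability proportional to its weight."""
    urls = [url for url, _ in weighted_nodes]
    weights = [weight for _, weight in weighted_nodes]
    return rng.choices(urls, weights=weights, k=1)[0]

# Bigger boxes get 3x the traffic of the smaller one (illustrative weights)
nodes = [("http://big-node1:8983/solr", 3),
         ("http://big-node2:8983/solr", 3),
         ("http://small-node1:8983/solr", 1)]
print(pick_node(nodes))
```

The same idea works from Java by constructing a plain HTTP Solr client against
the picked base URL, at the cost of losing the automatic cluster-state failover
that the ZooKeeper-aware client provides.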
I will be using SolrCloud on Solr 6.1.0 and will have more shards than in my
previous set-up.
On Mon, Aug 29, 2016 at 11:38 PM, Piyush Kunal <piyush.ku...@myntra.com>
wrote:
> Is there any way through which I can migrate my index which is currently
> on 4.9 to 6.1?
Is there any way through which I can migrate my index, which is currently on
4.9, to 6.1?
Looking for something like backup and restore.