Re: Session expired when executing streaming expression, but no long GC pauses ...
The reason is most likely GC pauses on the client side rather than the server side. I am guessing you are using the SolrJ client and this exception is thrown in the client logs.

On Fri, May 19, 2017 at 11:46 PM, Joel Bernstein wrote:
> Odd, I haven't run into this behavior. Are you getting the disconnect from
> the client side, or is this happening in a stream being run inside Solr?
>
> Joel Bernstein
> http://joelsolr.blogspot.com/
>
> On Fri, May 19, 2017 at 1:40 PM, Timothy Potter wrote:
>
> > No, not every time, but there was no GC pause on the Solr side (no
> > gaps in the log, nothing in the gc log) ... in the zk log, I do see
> > this around the same time:
> >
> > 2017-05-05T13:59:52,362 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:54140 which had sessionid 0x15bd8bdd3500022
> > 2017-05-05T13:59:52,818 - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@357] - caught end of stream exception
> > org.apache.zookeeper.server.ServerCnxn$EndOfStreamException: Unable to read additional data from client sessionid 0x15bd8bdd3500023, likely client has closed socket
> >         at org.apache.zookeeper.server.NIOServerCnxn.doIO(NIOServerCnxn.java:228) [zookeeper-3.4.6.jar:3.4.6-1569965]
> >         at org.apache.zookeeper.server.NIOServerCnxnFactory.run(NIOServerCnxnFactory.java:208) [zookeeper-3.4.6.jar:3.4.6-1569965]
> >         at java.lang.Thread.run(Thread.java:745) [?:1.8.0_66-internal]
> > 2017-05-05T13:59:52,819 - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:9983:NIOServerCnxn@1007] - Closed socket connection for client /127.0.0.1:54200 which had sessionid 0x15bd8bdd3500023
> > ...
> >
> > 2017-05-05T14:00:00,001 - INFO [SessionTracker:ZooKeeperServer@347] - Expiring session 0x15bd8bdd3500023, timeout of 1ms exceeded
> >
> > On Fri, May 19, 2017 at 9:48 AM, Joel Bernstein wrote:
> > > You get this every time you run the expression?
> > >
> > > Joel Bernstein
> > > http://joelsolr.blogspot.com/
> > >
> > > On Fri, May 19, 2017 at 10:44 AM, Timothy Potter wrote:
> > >
> > >> I'm executing a streaming expression and get this error:
> > >>
> > >> Caused by: org.apache.solr.common.SolrException: Could not load collection from ZK: MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002
> > >>         at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1098)
> > >>         at org.apache.solr.common.cloud.ZkStateReader$LazyCollectionRef.get(ZkStateReader.java:638)
> > >>         at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1482)
> > >>         at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:1092)
> > >>         at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:1057)
> > >>         at org.apache.solr.client.solrj.io.stream.FacetStream.open(FacetStream.java:356)
> > >>         ... 38 more
> > >> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/MovieLens_Ratings_f2e6f8b0_3199_11e7_b8ab_0242ac110002/state.json
> > >>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
> > >>         at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> > >>         at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155)
> > >>         at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:356)
> > >>         at org.apache.solr.common.cloud.SolrZkClient$7.execute(SolrZkClient.java:353)
> > >>         at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:60)
> > >>         at org.apache.solr.common.cloud.SolrZkClient.getData(SolrZkClient.java:353)
> > >>         at org.apache.solr.common.cloud.ZkStateReader.fetchCollectionState(ZkStateReader.java:1110)
> > >>         at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:1096)
> > >>         ... 43 more
> > >>
> > >> I've scoured the GC logs for solr and there are no long pauses
> > >> (nothing even over 1 second) ... any ideas why that session could be
> > >> expired?
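If the disconnect really is client-side, two things usually help: turning on GC logging in the client JVM to confirm the pauses, and raising the SolrJ ZooKeeper timeouts. A minimal sketch, assuming SolrJ 6.x; the zkHost string and timeout values below are placeholders, not recommendations:

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;

public class StreamClientConfig {
    public static CloudSolrClient buildClient() {
        // zkHost is a placeholder; use your own ensemble (and chroot, if any).
        CloudSolrClient client =
                new CloudSolrClient("zk1:2181,zk2:2181,zk3:2181/solr");
        // Raise the ZK session timeout so a short client-side GC pause
        // does not expire the session (values are milliseconds).
        client.setZkClientTimeout(30000);
        client.setZkConnectTimeout(15000);
        return client;
    }
}
```

Running the client JVM with `-verbose:gc` (or `-Xloggc:<file>`) would then show whether client pauses line up with the expirations in the ZK log.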
Re: SessionExpiredException
We already faced this issue, and the cause turned out to be long GC pauses, on either the client side or the server side.

Regards,
Piyush

On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey wrote:
> On 5/3/2017 7:32 AM, Satya Marivada wrote:
> > I see below exceptions in my logs sometimes. What could be causing it?
> >
> > org.apache.zookeeper.KeeperException$SessionExpiredException:
>
> Based on my limited research, this would tend to indicate that the
> heartbeats ZK uses to detect when sessions have gone inactive are not
> occurring in a timely fashion.
>
> Common causes seem to be:
>
> JVM garbage collections. These can cause the entire JVM to pause for an
> extended period of time, and this time may exceed the configured timeouts.
>
> Excess client connections to ZK. ZK limits the number of connections from
> each client address, with the idea of preventing denial-of-service attacks.
> If a client is misbehaving, it may make more connections than it should.
> You can try increasing the limit in the ZK config, but if this is the
> reason for the exception, then something's probably wrong, and you may be
> just hiding the real problem.
>
> Although we might have bugs causing the second situation, the first
> situation seems more likely.
>
> Thanks,
> Shawn
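Both knobs Shawn mentions live in zoo.cfg. A hedged fragment with illustrative values (not recommendations): note that ZooKeeper negotiates each client's session timeout into the range 2*tickTime to 20*tickTime by default, so tickTime bounds how long a GC pause a session can survive.

```properties
# zoo.cfg -- illustrative values only.
# Session timeouts are negotiated between 2*tickTime and 20*tickTime
# by default (4 s .. 40 s with this tickTime).
tickTime=2000
maxSessionTimeout=40000
# Per-client-IP connection cap; raise it only if the extra connections
# are legitimate and not a misbehaving client.
maxClientCnxns=60
```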
Re: Migrate Documents to Another Collection
I have also noticed this issue, and it happens while creating the collated result, mostly due to a large version mismatch between server and client. The best idea would be to use the same server and client version. Failing that, switch off collation (you can still keep spellcheck on) and do the collation in your application itself (it is nothing more than concatenating the spellcheck suggestions).

On 14-Feb-2017 7:34 am, "alias" <524839...@qq.com> wrote:

hi, I use SolrJ 5.5.0 to query Solr 3.6 and it reported the following error:

java.lang.ClassCastException: java.lang.Boolean cannot be cast to org.apache.solr.common.util.NamedList
    at org.apache.solr.client.solrj.response.SpellCheckResponse.(SpellCheckResponse.java:47)
    at org.apache.solr.client.solrj.response.QueryResponse.extractSpellCheckInfo(QueryResponse.java:179)
    at org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:153)
    at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:149)
    at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
    at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
    at com.vip.vipme.demo.utils.SolrTest.testCategoryIdPC(SolrTest.java:66)
    at com.vip.vipme.demo.SolrjServlet1.doGet(SolrjServlet1.java:33)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:487)
    at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:362)
    at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
    at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
    at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
    at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)

Setting query.set("spellcheck", Boolean.FALSE) solves this problem, but I would like to know the specific reason for it. Thanks.
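The workaround mentioned in the thread, as a small SolrJ sketch (the query string is a placeholder); disabling spellcheck client-side means the old server's spellcheck section never reaches the incompatible SpellCheckResponse parsing:

```java
import org.apache.solr.client.solrj.SolrQuery;

public class NoCollationQuery {
    public static SolrQuery build(String q) {
        SolrQuery query = new SolrQuery(q);
        // Workaround from the thread: skip spellcheck entirely so the
        // mismatched server response is never parsed by the client.
        query.set("spellcheck", false);
        // Alternatively, keep spellcheck on but disable only collation,
        // which is the part suggested above to move into the application:
        // query.set("spellcheck.collate", false);
        return query;
    }
}
```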
Re: Getting Error - Session expired for /collections/sprod/state.json
Looks like an issue with the 6.x version, then. But this seems too basic; I am not sure how the community would not have caught it till now.

On Fri, Dec 16, 2016 at 2:55 PM, Yago Riveiro wrote:
> I had some of these errors in my logs too on 6.3.0.
>
> My cluster also indexes like 20K docs/sec. I don't know why.
>
> --
>
> /Yago Riveiro
>
> On 16 Dec 2016, 08:39 +, Piyush Kunal wrote:
> > Has anyone noticed such an issue before?
> >
> > On Thu, Dec 15, 2016 at 4:36 PM, Piyush Kunal wrote:
> >
> > > This is happening when heavy indexing, like 100 docs/second, is going on.
> > >
> > > On Thu, Dec 15, 2016 at 4:33 PM, Piyush Kunal wrote:
> > >
> > > > - We have a solr6.1.0 cluster running on production with 1 shard and 5
> > > >   replicas.
> > > > - Zookeeper quorum on 3 nodes.
> > > > - Using a chroot in zookeeper to segregate the configs from other
> > > >   collections.
> > > > - Using solrj5.1.0 as our client to query solr.
> > > >
> > > > Usually things work fine, but on and off we witness this exception coming up:
> > > > =
> > > > org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
> > > >     at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
> > > >     at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
> > > >     at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
> > > >     at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
> > > >     at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
> > > > --
> > > > Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
> > > >     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
> > > >     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> > > >     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
> > > >     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
> > > >     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
> > > >     at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
> > > >     at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
> > > > --
> > > > org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
> > > >     at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
> > > >     at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
> > > >     at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
> > > >     at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
> > > >     at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
> > > > --
> > > > Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
> > > >     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
> > > >     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> > > >     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
> > > >     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
> > > >     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
> > > >     at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
> > > >     at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
> > > > ==
Re: Solr on HDFS: increase in query time with increase in data
I think 70GB is too huge for a shard. How much memory does the system have? In case Solr does not have sufficient memory to load the indexes, it will use only the amount of memory defined in your Solr caches. Although you are on HDFS, Solr performance will be really bad if it has to do disk I/O at query time. The best option for you is to shard into at least 8-10 nodes and create appropriate replicas according to your read traffic.

Regards,
Piyush

On Fri, Dec 16, 2016 at 12:15 PM, Reth RM wrote:
> I think the shard index size is huge and should be split.
>
> On Wed, Dec 14, 2016 at 10:58 AM, Chetas Joshi wrote:
>
> > Hi everyone,
> >
> > I am running Solr 5.5.0 on HDFS. It is a solrCloud of 50 nodes and I have
> > the following config.
> > maxShardsperNode: 1
> > replicationFactor: 1
> >
> > I have been ingesting data into Solr for the last 3 months. With increase
> > in data, I am observing an increase in query time. Currently the size of
> > my indices is 70 GB per shard (i.e. per node).
> >
> > I am using the cursor approach (/export handler) with the SolrJ client to
> > get back results from Solr. All the fields I am querying on, and all the
> > fields that I get back from Solr, are indexed and have docValues enabled
> > as well. What could be the reason behind the increase in query time?
> >
> > Has this got something to do with the OS disk cache that is used for
> > loading the Solr indices? When a query is fired, will Solr wait for all
> > (70GB) of disk cache being available so that it can load the index file?
> >
> > Thanks!
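Splitting an existing shard can be done without a full reindex via the Collections API. A sketch, assuming a collection named `mycoll` whose `shard1` is to be split (host, collection, and shard names are placeholders):

```shell
# Split shard1 of mycoll into two sub-shards, each holding roughly
# half of the parent's documents.
curl "http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=mycoll&shard=shard1"

# Check progress; the parent shard keeps serving until the split
# completes and the sub-shards become active.
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERSTATUS&collection=mycoll"
```

Each resulting sub-shard can then be moved to its own node with ADDREPLICA/DELETEREPLICA so the working set per machine fits in the OS page cache.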
Re: Getting Error - Session expired for /collections/sprod/state.json
Has anyone noticed such an issue before?

On Thu, Dec 15, 2016 at 4:36 PM, Piyush Kunal wrote:

> This is happening when heavy indexing, like 100 docs/second, is going on.
>
> On Thu, Dec 15, 2016 at 4:33 PM, Piyush Kunal wrote:
>
>> - We have a solr6.1.0 cluster running on production with 1 shard and 5
>>   replicas.
>> - Zookeeper quorum on 3 nodes.
>> - Using a chroot in zookeeper to segregate the configs from other
>>   collections.
>> - Using solrj5.1.0 as our client to query solr.
>>
>> Usually things work fine, but on and off we witness this exception coming up:
>> =
>> org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
>>     at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
>>     at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
>>     at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
>>     at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
>>     at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
>> --
>> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
>>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
>>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
>>     at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
>>     at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
>> --
>> org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
>>     at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
>>     at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
>>     at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
>>     at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
>>     at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
>> --
>> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
>>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
>>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
>>     at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
>>     at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
>> =
>>
>> This is our zoo.cfg:
>> ==
>> tickTime=2000
>> dataDir=/var/lib/zookeeper
>> clientPort=2181
>> initLimit=5
>> syncLimit=2
>> server.1=192.168.70.27:2888:3888
>> server.2=192.168.70.64:2889:3889
>> server.3=192.168.70.26:2889:3889
>> maxClientCnxns=300
>> maxSessionTimeout=9
>> ===
>>
>> This is our solr.xml on server side
>> ===
>> ${host:}
>> ${jetty.port:8983}
>> ${hostContext:solr}
>> ${genericCoreNodeNames:true}
>> ${zkClientTimeout:3}
>> ${distribUpdateSoTimeout:60}
>> name="distribUpdateConnTimeout">${distribUpdateConnTimeout:6}
>> name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}
>> name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}
>> class="HttpShardHandlerFactory">
>> ${socketTimeout:60}
>> ${connTimeout:6}
>> ===
>>
>> Any help appreciated.
>>
>> Regards,
>> Piyush
Re: Getting Error - Session expired for /collections/sprod/state.json
This is happening when heavy indexing, like 100 docs/second, is going on.

On Thu, Dec 15, 2016 at 4:33 PM, Piyush Kunal wrote:

> - We have a solr6.1.0 cluster running on production with 1 shard and 5
>   replicas.
> - Zookeeper quorum on 3 nodes.
> - Using a chroot in zookeeper to segregate the configs from other
>   collections.
> - Using solrj5.1.0 as our client to query solr.
>
> Usually things work fine, but on and off we witness this exception coming up:
> =
> org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
>     at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
>     at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
>     at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
>     at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
>     at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
> --
> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
>     at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
>     at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
> --
> org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
>     at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
>     at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
>     at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
>     at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
>     at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
> --
> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
>     at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
>     at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
>     at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
>     at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
>     at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
> =
>
> This is our zoo.cfg:
> ==
> tickTime=2000
> dataDir=/var/lib/zookeeper
> clientPort=2181
> initLimit=5
> syncLimit=2
> server.1=192.168.70.27:2888:3888
> server.2=192.168.70.64:2889:3889
> server.3=192.168.70.26:2889:3889
> maxClientCnxns=300
> maxSessionTimeout=9
> ===
>
> This is our solr.xml on server side
> ===
> ${host:}
> ${jetty.port:8983}
> ${hostContext:solr}
> ${genericCoreNodeNames:true}
> ${zkClientTimeout:3}
> ${distribUpdateSoTimeout:60}
> name="distribUpdateConnTimeout">${distribUpdateConnTimeout:6}
> name="zkCredentialsProvider">${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}
> name="zkACLProvider">${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}
> class="HttpShardHandlerFactory">
> ${socketTimeout:60}
> ${connTimeout:6}
> ===
>
> Any help appreciated.
>
> Regards,
> Piyush
Getting Error - Session expired for /collections/sprod/state.json
- We have a solr6.1.0 cluster running on production with 1 shard and 5 replicas.
- Zookeeper quorum on 3 nodes.
- Using a chroot in zookeeper to segregate the configs from other collections.
- Using solrj5.1.0 as our client to query solr.

Usually things work fine, but on and off we witness this exception coming up:
=
org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
    at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
    at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
--
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
    at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
    at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
--
org.apache.solr.common.SolrException: Could not load collection from ZK:sprod
    at org.apache.solr.common.cloud.ZkStateReader.getCollectionLive(ZkStateReader.java:815)
    at org.apache.solr.common.cloud.ZkStateReader$5.get(ZkStateReader.java:477)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.getDocCollection(CloudSolrClient.java:1174)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:807)
    at org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
--
Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /collections/sprod/state.json
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
    at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:311)
    at org.apache.solr.common.cloud.SolrZkClient$5.execute(SolrZkClient.java:308)
    at org.apache.solr.common.cloud.ZkCmdExecutor.retryOperation(ZkCmdExecutor.java:61)
    at org.apache.solr.common.cloud.SolrZkClient.exists(SolrZkClient.java:308)
=

This is our zoo.cfg:
==
tickTime=2000
dataDir=/var/lib/zookeeper
clientPort=2181
initLimit=5
syncLimit=2
server.1=192.168.70.27:2888:3888
server.2=192.168.70.64:2889:3889
server.3=192.168.70.26:2889:3889
maxClientCnxns=300
maxSessionTimeout=9
===

This is our solr.xml on server side
===
${host:}
${jetty.port:8983}
${hostContext:solr}
${genericCoreNodeNames:true}
${zkClientTimeout:3}
${distribUpdateSoTimeout:60}
${distribUpdateConnTimeout:6}
${zkCredentialsProvider:org.apache.solr.common.cloud.DefaultZkCredentialsProvider}
${zkACLProvider:org.apache.solr.common.cloud.DefaultZkACLProvider}
${socketTimeout:60}
${connTimeout:6}
===

Any help appreciated.

Regards,
Piyush
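One thing worth double-checking in a setup like the above is that the ZK session timeout Solr requests (zkClientTimeout, in milliseconds) actually fits under ZooKeeper's maxSessionTimeout (also milliseconds), and that both comfortably exceed the worst GC pause on either side. A hedged example pairing; these values are illustrative, not the poster's (whose exact values are truncated above):

```properties
# zoo.cfg (ZooKeeper side): upper bound a client may request, in ms.
maxSessionTimeout=90000

# Solr side (e.g. in solr.in.sh): requested session timeout, in ms.
# This feeds the ${zkClientTimeout:...} placeholder in solr.xml and
# must be <= maxSessionTimeout, or ZK will silently clamp it down.
# SOLR_OPTS="$SOLR_OPTS -DzkClientTimeout=30000"
```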
Re: Does sharding improve or degrade performance?
All our shards and replicas reside on different machines, each with 16GB RAM and 4 cores.

On Tue, Dec 13, 2016 at 1:44 AM, Piyush Kunal wrote:
> We did the following change:
>
> 1. Previously we had 1 shard and 32 replicas for 1.2 million documents of
>    size 5 GB.
> 2. We changed it to 4 shards and 8 replicas for 1.2 million documents of
>    size 5 GB.
>
> We have a combined load of around 20k RPM on Solr.
>
> But unfortunately we saw a degradation in performance, with response times
> going insanely high, when we moved to setup 2.
>
> What could be the probable reasons, and how can it be fixed?
Does sharding improve or degrade performance?
We did the following change:

1. Previously we had 1 shard and 32 replicas for 1.2 million documents of size 5 GB.
2. We changed it to 4 shards and 8 replicas for 1.2 million documents of size 5 GB.

We have a combined load of around 20k RPM on Solr.

But unfortunately we saw a degradation in performance, with response times going insanely high, when we moved to setup 2.

What could be the probable reasons, and how can it be fixed?
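One common reason a wider fan-out gets slower: a distributed query must wait for its slowest shard and then merge the per-shard results, so with 4 shards a single slow replica drags the whole request up, while only 8 replicas per shard leave far less room to route around it than 32 did. A toy model of that effect (all numbers are made up for illustration):

```python
def fanout_latency(shard_latencies_ms, merge_cost_ms=2.0):
    """A distributed request completes only when the slowest shard
    responds, plus a small cost to merge the per-shard results."""
    return max(shard_latencies_ms) + merge_cost_ms

# One shard: the request is as fast as that single replica.
single_shard = fanout_latency([12.0], merge_cost_ms=0.0)

# Four shards: three fast responses cannot compensate for one slow one.
four_shards = fanout_latency([5.0, 6.0, 5.0, 30.0])

print(single_shard)  # 12.0
print(four_shards)   # 32.0
```

For a 5 GB index that fits in memory on one node, the fan-out and merge overhead can easily outweigh the benefit of splitting.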
Using solrcloud6 and solrj client with HAProxy.
We are using a solrcloud 6.1 cluster with zookeeper, with 6 nodes running behind the cluster.

If I use the solrj client with zookeeper, it will round-robin across all the servers and distribute equal load across them. But I want to give priority to some nodes (with better configuration) so that they take more load.

Previously we used an HAProxy in front of all the nodes, which we could easily configure to put higher load on some nodes. But if zookeeper is doing the load balancing, is there some way we can give more load to some nodes (by using HAProxy or something else)?
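When SolrJ is pointed at ZooKeeper, the client itself picks replicas, so an external balancer never sees the traffic; one option is to send queries through HAProxy (using a plain HTTP Solr client) and let HAProxy weight the nodes. A sketch of the relevant fragment; hostnames, ports, and weights below are placeholders:

```
# haproxy.cfg fragment -- weighted round robin across Solr nodes.
backend solr_nodes
    balance roundrobin
    # Higher weight => proportionally more requests to that node.
    server solr1 192.168.70.11:8983 weight 30 check
    server solr2 192.168.70.12:8983 weight 30 check
    server solr3 192.168.70.13:8983 weight 10 check
```

Note the trade-off: going through HAProxy gives up the SolrJ/ZK client's automatic awareness of which replicas are live, so the `check` health checks have to carry that burden instead.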
Re: Migrate data from solr4.9 to solr6.1
I would be using solrcloud on solr 6.1.0 and will have more shards than in my previous set-up.

On Mon, Aug 29, 2016 at 11:38 PM, Piyush Kunal wrote:
> Is there any way through which I can migrate my index, which is currently
> on 4.9, to 6.1?
>
> Looking for something like backup and restore.
Migrate data from solr4.9 to solr6.1
Is there any way through which I can migrate my index, which is currently on 4.9, to 6.1? Looking for something like backup and restore.
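For context: Lucene only reads index formats one major version back, so a 4.9 index cannot be opened by 6.1 directly. One approach (a sketch; jar versions and paths are placeholders, and reindexing from the original source is generally the safer route) is to run Lucene's IndexUpgrader twice on a copy of each core's index, stepping 4.x -> 5.x -> 6.x:

```shell
# Step 1: rewrite the 4.9 index in the 5.x format (run with 5.x jars).
java -cp lucene-core-5.5.0.jar:lucene-backward-codecs-5.5.0.jar \
  org.apache.lucene.index.IndexUpgrader -delete-prior-commits /path/to/index

# Step 2: rewrite the result in the 6.x format (run with 6.x jars).
java -cp lucene-core-6.1.0.jar:lucene-backward-codecs-6.1.0.jar \
  org.apache.lucene.index.IndexUpgrader -delete-prior-commits /path/to/index
```

Note that moving to a different number of shards (as mentioned above) changes document routing, and an upgraded single-shard index cannot simply be dropped into a multi-shard collection; for that case reindexing is the reliable option.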