Solr nodes going down
I am seeing a weird issue where all the Solr nodes for a single collection are shown as down, even after I restart the Solr and ZooKeeper services. A little background: it is just one collection with 4 replicas, and the collection size is about ~140GB. I enabled a TTL that runs every 5 minutes to delete expired documents from the collection. It worked fine while the collection size stayed below this 140GB, but all of a sudden the nodes are showing as down, and I don't see any related errors in the Solr logs. Can someone guide me on how to troubleshoot this?

Solr version: 8.2
Zookeeper: 3.8.4

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
exclude a solr node to not to take select requests
Is there a way to configure one Solr node in a SolrCloud cluster so that it does not take select (query) requests? -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
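As far as I know there is no per-node "no queries" flag in Solr 8.x; the usual approaches are to keep that node out of the client-side load balancer, or to steer distributed requests away from it with the `shards.preference` parameter — which expresses a preference, not a hard exclusion. A sketch, with hypothetical host and collection names:

```shell
# Prefer replicas hosted on solr2/solr3 when fanning out the query, so the
# replica on the excluded node is only used as a last resort.
# Hosts and collection name ("mycoll") are placeholders; requires a live cluster.
curl "http://solr2.example.com:8983/solr/mycoll/select?q=*:*&shards.preference=replica.location:http://solr2.example.com:8983/solr,replica.location:http://solr3.example.com:8983/solr"
```

For a hard guarantee, the node would still need to be removed from whatever load balancer or client URL list sends it top-level requests.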
Understand on intermittent solr replica going to GONE state
Solr version: 8.2; Zoo: 3.4

I am progressively adding collections, with 3 replicas each, and all of a sudden the load averages on the Solr nodes spiked, memory usage of the Java process went to 65%, and some replicas went to the "GONE" state (as shown in the Solr Cloud UI). The issue persisted until I restarted the Solr service. I need some guidance on where to start finding the root cause of this little outage.

Data points:

- At the time of the outage, 3 instances of a copy tool were pulling data from the old Solr (5) and indexing it into the new Solr (8.2). We stopped them as soon as we saw the outage, since we were not sure whether they were causing it.
- We have around 12 Solr nodes mapped to different collections; each node has 8 CPU cores and 64GB RAM (40GB allocated to the JVM heap).
- Based on the alerts, the load averages on a few Solr nodes were very high, e.g. 36.5, 26.7, 20 (not sure if this is the concern here; I used to see much lower numbers, like 3s and 4s, but now they went to double digits).

We also observed the errors below in the Solr logs during that time:

o.a.s.s.HttpSolrCall Unable to write response, client closed connection or we are shutting down
org.eclipse.jetty.io.EofException: Closed
  at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:620)
  at org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:55)
  at org.apache.solr.response.QueryResponseWriterUtil$1.write(QueryResponseWriterUtil.java:54)
  at java.io.OutputStream.write(OutputStream.java:116)
  at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
  at sun.nio.cs.StreamEncoder.implWrite(StreamEncoder.java:282)
  at sun.nio.cs.StreamEncoder.write(StreamEncoder.java:125)
  at java.io.OutputStreamWriter.write(OutputStreamWriter.java:207)
  at org.apache.solr.util.FastWriter.flush(FastWriter.java:140)
  at org.apache.solr.util.FastWriter.flushBuffer(FastWriter.java:154)
  at org.apache.solr.response.TextResponseWriter.close(TextResponseWriter.java:93)
  at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:73)
  at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
  at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
  at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Question on solr metrics
Can we get metrics for a particular time range? I know metrics history is not enabled, so I will only have data from the last time the Solr node came up, but even from that, can we query a date range — for example, to see CPU usage over a particular time window? Note: Solr version: 8.2 -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
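For context, the plain `/admin/metrics` endpoint only returns point-in-time values accumulated since the last restart; time-range queries need the Metrics History API to have been collecting samples. A sketch of what that API looks like once history is enabled (host is a placeholder):

```shell
# List which metrics are being collected historically, then fetch the
# stored JVM samples. These calls only return data if metrics history
# collection is enabled; otherwise only point-in-time /admin/metrics works.
curl "http://localhost:8983/solr/admin/metrics/history?action=list"
curl "http://localhost:8983/solr/admin/metrics/history?action=get&name=solr.jvm&format=list"
```

Without history enabled there is, as far as I know, no built-in way to ask for a past time range; an external scraper (e.g. the bundled Prometheus exporter) is the usual workaround.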
Question on metric values
I am new to the metrics API in Solr. When I call solr/admin/metrics?prefix=QUERY./select.requests it returns a number for each collection that I have. I understand those are the requests coming in against each collection, but over what period? Are those numbers counted from the time the collection went live, or are they for the last n minutes, or is it configuration based? Also, what is the default period when we don't configure anything? Note: I am using Solr 8.2 -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
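The `.requests` value is a cumulative counter since the core was last loaded, so a rate has to be derived by sampling it twice. A sketch (host/URL are placeholders; assumes `jq` is installed, and that counters render as `{"count": N}` objects — on some versions they may be bare numbers, which the `jq` filter below also handles):

```shell
#!/bin/sh
# Sample the cumulative /select request counter twice, 60s apart,
# and print the difference as an approximate requests-per-minute rate.
URL="http://localhost:8983/solr/admin/metrics?group=core&prefix=QUERY./select.requests&wt=json"
first=$(curl -s "$URL" | jq '[.metrics[][] | (.count? // .)] | add')
sleep 60
second=$(curl -s "$URL" | jq '[.metrics[][] | (.count? // .)] | add')
echo "requests/min across all cores: $((second - first))"
```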
Re: Need help in understanding the below error message when running solr-exporter
Can someone help with the above, please?

On Sat, Oct 17, 2020 at 6:22 AM yaswanth kumar wrote:
> Using Solr 8.2; Zoo 3.4; Solr mode: Cloud with multiple collections; Basic Authentication: Enabled
>
> I am trying to run:
>
> export JAVA_OPTS="-Djavax.net.ssl.trustStore=etc/solr-keystore.jks -Djavax.net.ssl.trustStorePassword=solrssl -Dsolr.httpclient.builder.factory=org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactory -Dbasicauth=solrrocks:"
>
> export CLASSPATH_PREFIX="../../server/solr-webapp/webapp/WEB-INF/lib/commons-codec-1.11.jar"
>
> /bin/solr-exporter -p 8085 -z localhost:2181/solr -f ./conf/solr-exporter-config.xml -n 16
>
> and I am seeing the messages below. On the Grafana Solr dashboard I do see the panels coming in, but no data is populating in them.
>
> Can someone help me if I am missing something in terms of configuration?
>
> WARN - 2020-10-17 11:17:59.687; org.apache.solr.prometheus.scraper.Async; Error occurred during metrics collection =>
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NullPointerException
>   at java.base/java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395)
> java.util.concurrent.ExecutionException: java.lang.RuntimeException: java.lang.NullPointerException
>   at java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:395) ~[?:?]
>   at java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1999) ~[?:?]
>   at org.apache.solr.prometheus.scraper.Async.lambda$null$1(Async.java:45) [solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
>   at org.apache.solr.prometheus.scraper.Async$$Lambda$190/.accept(Unknown Source) [solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
>   at java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:183) [?:?]
>   at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:177) [?:?]
>   at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1654) [?:?]
>   at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:497) [?:?]
>   at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:487) [?:?]
>   at java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:150) [?:?]
>   at java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:173) [?:?]
>   at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:239) [?:?]
>   at java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:497) [?:?]
>   at org.apache.solr.prometheus.scraper.Async.lambda$waitForAllSuccessfulResponses$3(Async.java:43) [solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
>   at org.apache.solr.prometheus.scraper.Async$$Lambda$165/.apply(Unknown Source) [solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
>   at java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:986) [?:?]
>   at java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:970) [?:?]
>   at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) [?:?]
>   at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1705) [?:?]
>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) [solr-solrj-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:11:07]
>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$142/.run(Unknown Source) [solr-solrj-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:11:07]
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) [?:?]
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) [?:?]
>   at java.lang.Thread.run(Thread.java:834) [?:?]
> Caused by: java.lang.RuntimeException: java.lang.NullPointerException
>   at org.apache.solr.prometheus.collector.SchedulerMetricsCollector.lambda$collectMetrics$0(SchedulerMetricsCollector.java:92) ~[solr-pr
Need help in understanding the below error message when running solr-exporter
19 15:10:57]
  at org.apache.solr.prometheus.scraper.SolrCloudScraper.collections(SolrCloudScraper.java:133) ~[solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
  at org.apache.solr.prometheus.collector.CollectionsCollector.collect(CollectionsCollector.java:35) ~[solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
  at org.apache.solr.prometheus.collector.SchedulerMetricsCollector.lambda$collectMetrics$0(SchedulerMetricsCollector.java:90) ~[solr-prometheus-exporter-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:10:57]
  at org.apache.solr.prometheus.collector.SchedulerMetricsCollector$$Lambda$163/.get(Unknown Source) ~[?:?]
  at java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1700) ~[?:?]
  ... 5 more

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Re: Info about legacyMode cluster property
Can someone help with the above question?

On Thu, Oct 15, 2020 at 1:09 PM yaswanth kumar wrote:
> Can someone explain the implications of changing legacyMode=true on Solr 8.2?
>
> We migrated from Solr 5.5 to Solr 8.2 and everything worked great, but when we try to add a core to an existing collection with the core API's create command, it asks us to either pass the coreNodeName or switch legacyMode to true. When we switched it, it worked fine. But we need to understand the cons, because this seems to be false by default since Solr 7.
>
> Sent from my iPhone

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
converting string to solr.TextField
I am using Solr 8.2. Can I change a schema field type from string to solr.TextField without reindexing? The reason is that string has only a 32K character limit, whereas I now need to store more than 32K. The contents of this field don't require any analysis or tokenization, but I need the field in queries as well as in output fields. -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
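The Schema API can change the field definition in place, but documents that are already indexed normally still need a full reindex for the change to take effect. For a field that needs no real tokenization but must exceed the 32K limit, one common pattern is a TextField with a KeywordTokenizer. A sketch (collection, type, and field names — "mycoll", "string_large", "bigfield" — are placeholders):

```shell
# Define a no-analysis TextField type, then switch the field over to it.
# Existing documents generally still need to be reindexed afterwards.
curl -X POST -H 'Content-type:application/json' \
  "http://localhost:8983/solr/mycoll/schema" -d '{
  "add-field-type": {
    "name": "string_large",
    "class": "solr.TextField",
    "analyzer": { "tokenizer": { "class": "solr.KeywordTokenizerFactory" } }
  },
  "replace-field": {
    "name": "bigfield",
    "type": "string_large",
    "indexed": true,
    "stored": true
  }
}'
```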
Info about legacyMode cluster property
Can someone explain the implications of changing legacyMode=true on Solr 8.2? We migrated from Solr 5.5 to Solr 8.2 and everything worked great, but when we try to add a core to an existing collection with the core API's create command, it asks us to either pass the coreNodeName or switch legacyMode to true. When we switched it, it worked fine. But we need to understand the cons, because this seems to be false by default since Solr 7. Sent from my iPhone
Is metrics api enabled by default in solr 8.2
Can I get some info on where to enable or disable the metrics API on Solr 8.2? I believe it is enabled by default on Solr 8.2; where can I check the configuration? And how can I disable it if I want to? -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Need urgent help -- High cpu on solr
I am using Solr 8.2 with Zoo 3.4, and configured a 5-node Solr Cloud with around 100 collections, each collection having ~20k documents. These nodes are VMs with 6 CPU cores and 2 cores per socket. All of a sudden we are seeing CPU spikes, which brought down some nodes (GONE state in Solr Cloud, and we also faced latency when trying to log in to those nodes over SSH).

Memory: 32GB, with 20GB allotted to the JVM heap in the Solr config.

Cache settings (the surrounding XML did not survive; the values were 200, 100, true, false, 4): these are just the defaults that shipped with the Solr package.

One data point is that these nodes get very frequent search hits, so do I need to consider increasing the above sizes to bring the CPU down and see a more stable Solr Cloud? -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
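Before changing anything, it may help to confirm which cache sizes are actually in effect; the Config API returns the merged, effective configuration per collection. A sketch (collection name is a placeholder; assumes `jq` is installed):

```shell
# Dump the effective <query> section (filterCache, queryResultCache,
# documentCache sizes, autowarm counts, etc.) for one collection.
curl -s "http://localhost:8983/solr/mycoll/config" | jq '.config.query'
```

As a side note, 20GB of heap on a 32GB machine leaves relatively little RAM for the OS page cache that Lucene relies on, which is another common contributor to this kind of instability.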
unable to addReplica
.run(EatWhatYouKill.java:126)
  at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366)
  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781)
  at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)
  at java.base/java.lang.Thread.run(Thread.java:834)

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Need help in trying to understand the error
.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) ~[jetty-servlet-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) ~[jetty-rewrite-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.Server.handle(Server.java:505) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) ~[jetty-server-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) ~[jetty-io-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917) ~[jetty-util-9.4.19.v20190610.jar:9.4.19.v20190610]
  at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: javax.crypto.BadPaddingException: RSA private key operation failed
  at sun.security.rsa.NativeRSACore.crtCrypt_Native(NativeRSACore.java:149) ~[?:?]
  at sun.security.rsa.NativeRSACore.rsa(NativeRSACore.java:91) ~[?:?]
  at sun.security.rsa.RSACore.rsa(RSACore.java:149) ~[?:?]
  at com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:355) ~[?:?]
  at com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:392) ~[?:?]
  at javax.crypto.Cipher.doFinal(Cipher.java:2260) ~[?:?]
  at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:323) ~[solr-core-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15:11:04]

I need to understand what these errors are about. Is there any way to remediate them? -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Re: Any solr api to force leader on a specified node
Hi Wunder,

Thanks for replying on this. I did set up a Solr Cloud with 4 nodes, with one node having DIH configured; it pulls data from MS SQL every minute. If I install DIH on the rest of the nodes, it causes connection issues on the source DB, which I don't want, so I manage with only one server polling the DB while the rest are used as replicas for search. Everything works fine, but when the servers are rebooted for maintenance and come back up, if the leader is a node that doesn't have DIH, it stops pulling data from SQL. That is why I want to always force one particular node to be the leader.

Sent from my iPhone

> On Oct 11, 2020, at 11:05 PM, Walter Underwood wrote:
>
> That requirement is not necessary. Let Solr choose a leader.
>
> Why is someone making this bad requirement?
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/ (my blog)
>
>> On Oct 11, 2020, at 8:01 PM, yaswanth kumar wrote:
>>
>> Can someone please help me to know if there is any Solr API or config where we can make sure a particular Solr node is always elected leader in a Solr Cloud?
>>
>> Using Solr 8.2 and Zoo 3.4.
>>
>> I have four nodes, and my requirement is to always make a particular node the leader.
>>
>> Sent from my iPhone
Shipping solr logs to any open source log viewer
Can someone let me know if there is any integration available to ship all Solr logs live to an open source log viewer? Sent from my iPhone
Any solr api to force leader on a specified node
Can someone please help me to know if there is any Solr API or config where we can make sure a particular Solr node is always elected leader in a Solr Cloud?

Using Solr 8.2 and Zoo 3.4.

I have four nodes, and my requirement is to always make a particular node the leader.

Sent from my iPhone
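Solr doesn't guarantee a fixed leader, but it does have a "preferred leader" mechanism in the Collections API: set the preferredLeader replica property and then ask the overseer to rebalance leadership toward it. A sketch (collection, shard, and replica names are placeholders):

```shell
# Mark core_node3 as the preferred leader for shard1, then rebalance
# so that preferred-leader replicas actually become leaders.
curl "http://localhost:8983/solr/admin/collections?action=ADDREPLICAPROP&collection=mycoll&shard=shard1&replica=core_node3&property=preferredLeader&property.value=true"
curl "http://localhost:8983/solr/admin/collections?action=REBALANCELEADERS&collection=mycoll"
```

Note this is a preference honored during leader election, not an absolute guarantee; after a full reboot you may need to invoke REBALANCELEADERS again.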
Re: Question about solr commits
Thank you very much, both Eric and Shawn.

Sent from my iPhone

> On Oct 7, 2020, at 10:41 PM, Shawn Heisey wrote:
>
> On 10/7/2020 4:40 PM, yaswanth kumar wrote:
>> I have the below in my solrconfig.xml:
>>
>> <dataDir>${solr.Data.dir:}</dataDir>
>>
>> <autoCommit>
>>   <maxTime>${solr.autoCommit.maxTime:60000}</maxTime>
>>   <openSearcher>false</openSearcher>
>> </autoCommit>
>>
>> <autoSoftCommit>
>>   <maxTime>${solr.autoSoftCommit.maxTime:5000}</maxTime>
>> </autoSoftCommit>
>>
>> Does this mean that even though we are always sending data with commit=false on the update Solr API, the above should do the commit every minute (60000 ms), right?
>
> Assuming that you have not defined the "solr.autoCommit.maxTime" and/or "solr.autoSoftCommit.maxTime" properties, this config has autoCommit set to 60 seconds without opening a searcher, and autoSoftCommit set to 5 seconds.
>
> So five seconds after any indexing begins, Solr will do a soft commit. When that commit finishes, changes to the index will be visible to queries. One minute after any indexing begins, Solr will do a hard commit, which guarantees that data is written to disk, but it will NOT open a new searcher, which means that when the hard commit happens, any pending changes to the index will not be visible.
>
> It's not "every five seconds" or "every 60 seconds"... When any changes are made, Solr starts a timer. When the timer expires, the commit is fired. If no changes are made, no commits happen, because the timer isn't started.
>
> Thanks,
> Shawn
Question about solr commits
I have the below in my solrconfig.xml:

<dataDir>${solr.Data.dir:}</dataDir>

<autoCommit>
  <maxTime>${solr.autoCommit.maxTime:60000}</maxTime>
  <openSearcher>false</openSearcher>
</autoCommit>

<autoSoftCommit>
  <maxTime>${solr.autoSoftCommit.maxTime:5000}</maxTime>
</autoSoftCommit>

Does this mean that even though we are always sending data with commit=false on the update Solr API, the above should do the commit every minute (60000 ms), right? -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Transaction not closed on ms sql
Can someone help in troubleshooting some issues that are happening with DIH?

Solr version: 8.2; ZooKeeper 3.4. Solr Cloud with 4 nodes and 3 ZooKeepers.

1. Configured DIH for MS SQL with the MS SQL JDBC driver. When pulling data from MS SQL it connects and fetches records, but we see that the connection opened on the MS SQL end is not closed even after the full import completes. I need some help troubleshooting why it leaves connections open.

2. I scheduled this import API call as a utility that hits the DIH API every minute through a Solr pool URL, and it looks like multiple calls go out from different Solr nodes, which I don't want; I always need the call to be handled by only one node. Can we control this with any config? Or is this happening because I have three Zoos? Please suggest the best approach.

3. I see some records shown as failed during the import. Is there a way to track these failures, i.e. why a small number of records are failing?

Sent from my iPhone
Any blog or url that explain step by step configure grafana dashboard to monitor solr metrics
Can someone post any blogs or URLs with the detailed steps involved in configuring a Grafana dashboard to monitor Solr metrics? Sent from my iPhone
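The usual pipeline is the prometheus-exporter bundled with Solr, then Prometheus, then Grafana. A minimal sketch of the first two steps (ports and the ZooKeeper address are placeholders; the exporter and a sample Grafana dashboard JSON ship under contrib/prometheus-exporter in the Solr 8.x distribution):

```shell
# 1. Start the exporter that ships with Solr (run from contrib/prometheus-exporter).
./bin/solr-exporter -p 9854 -z localhost:2181/solr -f ./conf/solr-exporter-config.xml -n 8

# 2. Point Prometheus at the exporter with a minimal scrape config.
cat > prometheus.yml <<'EOF'
scrape_configs:
  - job_name: 'solr'
    static_configs:
      - targets: ['localhost:9854']
EOF
```

After Prometheus is scraping, add it as a data source in Grafana and import the dashboard JSON bundled alongside the exporter.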
How to persist the data in dataimport.properties
Can someone help me with how to persist the data that is updated in the dataimport.properties file? It holds the last index time, which my data import depends on for catching up on delta imports. What I noticed is that every time I restart Solr, this file is wiped out and reset to its default content instead of what was there before the restart. So I want to know if there is anything I can do to persist the last successful index timestamp.

Solr version: 8.2
Zookeeper: 3.4

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
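For context: in SolrCloud mode, DIH is supposed to persist dataimport.properties into the collection's configset in ZooKeeper (via its ZooKeeper-aware properties writer) rather than onto local disk, so one thing to check is whether the file exists in ZK and whether a configset re-upload during restart is overwriting it. A sketch (configset name and ZK address are placeholders):

```shell
# List the configset contents stored in ZooKeeper and look for
# dataimport.properties among the files.
./bin/solr zk ls /configs/myconfigset -z localhost:2181
```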
Re: Addreplica throwing error when authentication is enabled
Hi Ben,

Thanks for looking, but I am not understanding the encrypted-file part that you mentioned. Which file are you saying is encrypted? security.json?

Sent from my iPhone

> On Sep 1, 2020, at 10:56 PM, Ben wrote:
>
> It appears the issue is with the encrypted file. Are these files encrypted? If yes, you need to decrypt it first.
>
> Caused by: javax.crypto.BadPaddingException: RSA private key operation failed
>
> Best,
> Ben
>
>> On Tue, Sep 1, 2020, 10:51 PM yaswanth kumar wrote:
>>
>> Can someone please help me with the below error?
>>
>> Solr 8.2; zookeeper 3.4
>>
>> Enabled authentication and authorization and made sure that the role gets all access.
>>
>> Now just add a collection with a single replica, and once that's done, try to add another replica with the ADDREPLICA Solr API; that throws the error below. Note: this happens only when security.json is enabled with authentication.
>>
>> Below is the error:
>>
>> Collection: test operation: restore failed:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica
>> Collection: test operation: restore failed:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica
>>   at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler$ShardRequestTracker.processResponses(OverseerCollectionMessageHandler.java:1030)
>>   at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler$ShardRequestTracker.processResponses(OverseerCollectionMessageHandler.java:1013)
>>   at org.apache.solr.cloud.api.collections.AddReplicaCmd.lambda$addReplica$1(AddReplicaCmd.java:177)
>>   at org.apache.solr.cloud.api.collections.AddReplicaCmd$$Lambda$798/.run(Unknown Source)
>>   at org.apache.solr.cloud.api.collections.AddReplicaCmd.addReplica(AddReplicaCmd.java:199)
>>   at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.addReplica(OverseerCollectionMessageHandler.java:708)
>>   at org.apache.solr.cloud.api.collections.RestoreCmd.call(RestoreCmd.java:286)
>>   at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:264)
>>   at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:505)
>>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
>>   at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$142/.run(Unknown Source)
>>   at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
>>   at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
>>   at java.base/java.lang.Thread.run(Thread.java:834)
>> Caused by: org.apache.solr.common.SolrException: javax.crypto.BadPaddingException: RSA private key operation failed
>>   at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:325)
>>   at org.apache.solr.security.PKIAuthenticationPlugin.generateToken(PKIAuthenticationPlugin.java:305)
>>   at org.apache.solr.security.PKIAuthenticationPlugin.access$200(PKIAuthenticationPlugin.java:61)
>>   at org.apache.solr.security.PKIAuthenticationPlugin$2.onQueued(PKIAuthenticationPlugin.java:239)
>>   at org.apache.solr.client.solrj.impl.Http2SolrClient.decorateRequest(Http2SolrClient.java:468)
>>   at org.apache.solr.client.solrj.impl.Http2SolrClient.makeRequest(Http2SolrClient.java:455)
>>   at org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:364)
>>   at org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:746)
>>   at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
>>   at org.apache.solr.handler.component.HttpShardHandler.request(HttpShardHandler.java:238)
>>   at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:199)
>>   at org.apache.solr.handler.component.HttpShardHandler$$Lambda$512/.call(Unknown Source)
>>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>>   at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
>>   at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
>>   at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
>>   ... 5 more
>> Caused by: javax.crypto.BadPaddingException: RSA private key operation failed
Addreplica throwing error when authentication is enabled
Can someone please help me with the below error?

Solr 8.2; zookeeper 3.4

Enabled authentication and authorization and made sure that the role gets all access.

Now just add a collection with a single replica, and once that's done, try to add another replica with the ADDREPLICA Solr API; that throws the error below. Note: this happens only when security.json is enabled with authentication.

Below is the error:

Collection: test operation: restore failed:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica
Collection: test operation: restore failed:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica
  at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler$ShardRequestTracker.processResponses(OverseerCollectionMessageHandler.java:1030)
  at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler$ShardRequestTracker.processResponses(OverseerCollectionMessageHandler.java:1013)
  at org.apache.solr.cloud.api.collections.AddReplicaCmd.lambda$addReplica$1(AddReplicaCmd.java:177)
  at org.apache.solr.cloud.api.collections.AddReplicaCmd$$Lambda$798/.run(Unknown Source)
  at org.apache.solr.cloud.api.collections.AddReplicaCmd.addReplica(AddReplicaCmd.java:199)
  at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.addReplica(OverseerCollectionMessageHandler.java:708)
  at org.apache.solr.cloud.api.collections.RestoreCmd.call(RestoreCmd.java:286)
  at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:264)
  at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:505)
  at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$142/.run(Unknown Source)
  at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
  at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
  at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: org.apache.solr.common.SolrException: javax.crypto.BadPaddingException: RSA private key operation failed
  at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:325)
  at org.apache.solr.security.PKIAuthenticationPlugin.generateToken(PKIAuthenticationPlugin.java:305)
  at org.apache.solr.security.PKIAuthenticationPlugin.access$200(PKIAuthenticationPlugin.java:61)
  at org.apache.solr.security.PKIAuthenticationPlugin$2.onQueued(PKIAuthenticationPlugin.java:239)
  at org.apache.solr.client.solrj.impl.Http2SolrClient.decorateRequest(Http2SolrClient.java:468)
  at org.apache.solr.client.solrj.impl.Http2SolrClient.makeRequest(Http2SolrClient.java:455)
  at org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:364)
  at org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:746)
  at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274)
  at org.apache.solr.handler.component.HttpShardHandler.request(HttpShardHandler.java:238)
  at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:199)
  at org.apache.solr.handler.component.HttpShardHandler$$Lambda$512/.call(Unknown Source)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
  at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
  at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
  at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181)
  ... 5 more
Caused by: javax.crypto.BadPaddingException: RSA private key operation failed
  at java.base/sun.security.rsa.NativeRSACore.crtCrypt_Native(NativeRSACore.java:149)
  at java.base/sun.security.rsa.NativeRSACore.rsa(NativeRSACore.java:91)
  at java.base/sun.security.rsa.RSACore.rsa(RSACore.java:149)
  at java.base/com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:355)
  at java.base/com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:392)
  at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2260)
  at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:323)
  ... 20 more

That's the error stack trace I am seeing. As soon as I call the restore API, I see the collection "test" with a single core in the cloud UI, but it is in the down state.

Number of nodes configured in the Solr Cloud: 2. Testing on a single collection with 2 replicas.

Here is what my security.json looks like:

{
  "authentication": {
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "admin": "",
      "dev": ""
    },
    "": {"v": 11},
    "blockUnknown": true,
    "forwardCredentials": true
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": {
      "solradmin": ["admin", "dev"],
      "dev": ["read"]
    },
    "": {"v": 9},
    "permissions": [
      {
        "name": "
Re: Understanding Solr heap %
I got some understanding now about my actual question.. thanks all for your valuable theories Sent from my iPhone > On Sep 1, 2020, at 2:01 PM, Joe Doupnik wrote: > > As I have not received the follow-on message to mine I will cut&paste > it below. > My comments on that are the numbers are the numbers. More importantly, I > have run large imports ~0.5M docs and I have watched as that progresses. My > crawler paces material into Solr. Memory usage (Linux "top") shows cyclic > small rises and falls, peaking at about 2GB as the crawler introduces 1 and 3 > second pauses every hundred and thousand submissions.. The test shown in my > original message is sufficient to show the nature of Solr versions and the > choice of garbage collector, and other folks can do similar experiments on > their gear. The quoted tests are indeed representative of large and small > amounts of various kinds of documents, and I say that based on much > experience observing the details. > Quibble about GC names if you wish, but please do see those experimental > results. Also note the difference in our SOLR_HEAP values: 2GB in my work, > 8GB in yours. I have found 2GB to work well for importing small and very > large collections (of many file varieties). > Thanks, > Joe D. >> This is misleading and not particularly good advice. >> >> Solr 8 does NOT contain G1. G1GC is a feature of the JVM. We’ve been using >> it with Java 8 and Solr 6.6.2 for a few years. >> >> A test with eighty documents doesn’t test anything. Try a million documents >> to >> get Solr memory usage warmed up. >> >> GC_TUNE has been in the solr.in.sh file for a long time. Here are the >> settings >> we use with Java 8. We have about 120 hosts running Solr in six prod >> clusters. 
>> >> SOLR_HEAP=8g >> # Use G1 GC -- wunder 2017-01-23 >> # Settings from https://wiki.apache.org/solr/ShawnHeisey >> GC_TUNE=" \ >> -XX:+UseG1GC \ >> -XX:+ParallelRefProcEnabled \ >> -XX:G1HeapRegionSize=8m \ >> -XX:MaxGCPauseMillis=200 \ >> -XX:+UseLargePages \ >> -XX:+AggressiveOpts \ >> " >> >> wunder >> Walter Underwood >> wun...@wunderwood.org >> http://observer.wunderwood.org/ (my blog) > >> On 01/09/2020 16:39, Joe Doupnik wrote: >> Erick states this correctly. To give some numbers from my experiences, >> here are two slides from my presentation about installing Solr >> (https://netlab1.net/, locate item "Solr/Lucene Search Service"): >>> >> >>> >> >> Thus we see a) experiments are the key, just as Erick says, and b) the >> choice of garbage collection algorithm plays a major role. >> In my setup I assigned SOLR_HEAP to be 2048m, SOLR_OPTS has -Xss1024k, >> plus stock GC_TUNE values. Your "memorage" may vary. >> Thanks, >> Joe D. >> >>> On 01/09/2020 15:33, Erick Erickson wrote: >>> You want to run with the smallest heap you can due to Lucene’s use of >>> MMapDirectory, >>> see the excellent: >>> >>> https://blog.thetaphi.de/2012/07/use-lucenes-mmapdirectory-on-64bit.html >>> >>> There’s also little reason to have different Xms and Xmx values, that just >>> means you’ll >>> eventually move a bunch of memory around as the heap expands, I usually set >>> them both >>> to the same value. >>> >>> How to determine what “the smallest heap you can” is? Unfortunately there’s >>> no good way >>> outside of stress-testing your application with less and less memory until >>> you have problems, >>> then add some extra… >>> >>> Best, >>> Erick >>> >>>>> On Sep 1, 2020, at 10:27 AM, Dominique Bejean >>>>> wrote: >>>>> >>>>> Hi, >>>>> >>>>> As all Java applications the Heap memory is regularly cleaned by the >>>>> garbage collector (some young items moved to the old generation heap zone >>>>> and unused old items removed from the old generation heap zone). 
This >>>>> causes heap usage to continuously grow and reduce. >>>>> >>>>> Regards >>>>> >>>>> Dominique >>>>> >>>>> >>>>> >>>>> >>>>> Le mar. 1 sept. 2020 à 13:50, yaswanth kumar a >>>>> écrit : >>>>> >>>>> Can someone make me understand on how the value % on t
Understanding Solr heap %
Can someone help me understand how the % value in the Heap column is calculated? I created a new SolrCloud with 3 solr nodes and one zookeeper. It is not yet live for either indexing or searching, but I do see spikes in the HEAP column against the nodes when I refresh the page multiple times: it sometimes goes almost to 95% and then comes down to 50%.

Solr version: 8.2
Zookeeper: 3.4
JVM size configured in solr.in.sh: min 1GB, max 10GB (actual RAM on the node is 16GB)

Basically I need to understand whether I should worry about this fluctuating heap % before making it live, or whether it is quite normal. This element of the SolrCloud UI is new to us: we were on solr 5 before, and it did not exist there.

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com

Sent from my iPhone
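As far as I can tell, the Heap column in the admin UI is simply used heap over max heap from the JVM memory stats, so a sawtooth (climbing toward 95%, dropping to 50% after a GC cycle) is normal even on an idle cluster. A minimal sketch of the arithmetic (the function name and sample figures are mine, not from the UI code):

```python
def heap_percent(used_bytes: int, max_bytes: int) -> float:
    """Heap usage as a percentage: used / max * 100, rounded to one decimal."""
    return round(used_bytes / max_bytes * 100, 1)

# With a 10 GB max heap: 9.5 GB in use reads as 95%, and after the
# garbage collector frees roughly half of it the same gauge reads 50%.
print(heap_percent(9_500_000_000, 10_000_000_000))  # 95.0
print(heap_percent(5_000_000_000, 10_000_000_000))  # 50.0
```

The number to watch is not the peak but whether usage keeps climbing after every GC cycle, which would suggest the heap is genuinely too small.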
Re: New replica is being added to existing collections upon server reboot
sorry there is a typo on earlier message.. I didn't make any changes in autoscalling policies/triggers On Thu, Aug 27, 2020 at 12:12 PM yaswanth kumar wrote: > Thanks for looking into this. > > I did used replication factor as 3 with autoAddReplica=true, and did make > any changes in autoscalling policies/triggers everything is defaults that > comes with solr. > > On Thu, Aug 27, 2020 at 11:50 AM Howard Gonzalez < > howard.gonza...@careerbuilder.com> wrote: > >> Hi, could you share the replication factor that you're using for those >> collections (in case they are NRT replicas)? DId you make any changes in >> autoscaling policies/triggers? >> >> From: yaswanth kumar >> Sent: Thursday, August 27, 2020 11:37 AM >> To: solr-user@lucene.apache.org >> Subject: New replica is being added to existing collections upon server >> reboot >> >> Can someone help me understand on why the below is happening? >> >> Solr: 8.2; Zookeer:3.5 >> >> One zookeeper + 3 solr nodes >> >> Initially created multiple collections with 3 replicas , indexed data >> everything looked great. >> >> We now restarted all 3 solr nodes, and we started the zookeeper and solr >> services , cloud came back good but on each collection we are now seeing a >> 4th replica and that is in down status. Other than this additional node >> every other operation is working fine. >> >> Please let me know on what case this could happen, also want to know if >> there is any easy way to get rid of the 4th newly added down solr replica >> for all collections. >> >> -- >> Thanks & Regards, >> Yaswanth Kumar Konathala. >> yaswanth...@gmail.com >> > > > -- > Thanks & Regards, > Yaswanth Kumar Konathala. > yaswanth...@gmail.com > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Re: New replica is being added to existing collections upon server reboot
Thanks for looking into this. I did used replication factor as 3 with autoAddReplica=true, and did make any changes in autoscalling policies/triggers everything is defaults that comes with solr. On Thu, Aug 27, 2020 at 11:50 AM Howard Gonzalez < howard.gonza...@careerbuilder.com> wrote: > Hi, could you share the replication factor that you're using for those > collections (in case they are NRT replicas)? DId you make any changes in > autoscaling policies/triggers? > ____ > From: yaswanth kumar > Sent: Thursday, August 27, 2020 11:37 AM > To: solr-user@lucene.apache.org > Subject: New replica is being added to existing collections upon server > reboot > > Can someone help me understand on why the below is happening? > > Solr: 8.2; Zookeer:3.5 > > One zookeeper + 3 solr nodes > > Initially created multiple collections with 3 replicas , indexed data > everything looked great. > > We now restarted all 3 solr nodes, and we started the zookeeper and solr > services , cloud came back good but on each collection we are now seeing a > 4th replica and that is in down status. Other than this additional node > every other operation is working fine. > > Please let me know on what case this could happen, also want to know if > there is any easy way to get rid of the 4th newly added down solr replica > for all collections. > > -- > Thanks & Regards, > Yaswanth Kumar Konathala. > yaswanth...@gmail.com > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
New replica is being added to existing collections upon server reboot
Can someone help me understand why the below is happening?

Solr: 8.2; Zookeeper: 3.5

One zookeeper + 3 solr nodes

Initially I created multiple collections with 3 replicas, indexed data, and everything looked great.

We then restarted all 3 solr nodes; after the zookeeper and solr services were started, the cloud came back fine, but on each collection we are now seeing a 4th replica, and it is in down status. Other than this additional replica, every operation is working fine.

Please let me know in what case this could happen. I would also like to know if there is an easy way to get rid of the newly added 4th down replica on all collections.

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
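For removing the extra down replica, the Collections API DELETEREPLICA action works per collection and shard. A sketch (host, collection, shard, and coreNodeName below are placeholders; the real coreNodeName of the down replica is visible in the Cloud graph view or in CLUSTERSTATUS output):

```shell
# All names below are placeholders -- substitute your own values.
SOLR_HOST="localhost:8983"
COLLECTION="mycollection"
SHARD="shard1"
REPLICA="core_node4"   # the extra "down" replica's coreNodeName

# Collections API call to drop the unwanted replica (Solr 8.x):
URL="http://${SOLR_HOST}/solr/admin/collections?action=DELETEREPLICA&collection=${COLLECTION}&shard=${SHARD}&replica=${REPLICA}"
echo "$URL"
# curl "$URL"   # uncomment to actually issue the request
```

Repeat per collection; with basic auth enabled, add credentials to the curl call (`curl -u user:pass ...`).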
Re: All cores gone along with all solr configuration upon reboot
Hi Erick,

Here is the latest error that I captured, which seems to be what is actually deleting the cores (I noticed that the core folders under the path ../solr/server/solr were deleted one by one when the server came back from the reboot):

2020-08-24 04:41:27.424 ERROR (coreContainerWorkExecutor-2-thread-1-processing-n:9.70.170.51:8080_solr) [ ] o.a.s.c.CoreContainer Error waiting for SolrCore to be loaded on$
*org.apache.solr.cloud.ZkController$NotInClusterStateException: coreNodeName core_node3 does not exist in shard shard1, ignore the exception if the replica was deleted*
	at org.apache.solr.cloud.ZkController.checkStateInZk(ZkController.java:1875) ~[solr-core-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19$
	at org.apache.solr.cloud.ZkController.preRegister(ZkController.java:1774) ~[solr-core-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19 15$
	at org.apache.solr.core.CoreContainer.createFromDescriptor(CoreContainer.java:1238) ~[solr-core-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 201$
	at org.apache.solr.core.CoreContainer.lambda$load$13(CoreContainer.java:756) ~[solr-core-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e962165410b65fe - ivera - 2019-07-19$
	at org.apache.solr.core.CoreContainer$$Lambda$343/.call(Unknown Source) ~[?:?]
	at com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:202) ~[metrics-core-4.0.5.jar:4.0.5]
	at java.util.concurrent.FutureTask.run(FutureTask.java:264) ~[?:?]
	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) ~[solr-solrj-8.2.0.jar:8.2.0 31d7ec7bbfdcd2c4cc61d9d35e$
	at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$142/.run(Unknown Source) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:834) [?:?]

For some reason I believe solr is not able to find the replica in the clusterstate, and that is what causes the delete activity. I am not really sure why it cannot find it in the clusterstate; it looks like the clusterstate gets wiped out first, and then the cores slowly get deleted. As you asked, I double-checked the port numbers: I am using 2181 as the clientPort, and that is what I see on the solr dashboard for ZK Host. I am not really sure how to prevent this going forward. One thing to note is that I am using the Solr basic authentication plugin, if that makes any difference.

On Sat, Aug 22, 2020 at 11:55 AM Erick Erickson wrote:
> Autopurge shouldn’t matter, that’s just cleaning up old snapshots. That
> is, it should be configured, but having it enabled or not should have no
> bearing on your data disappearing.
>
> Also, are you absolutely certain that you are using your external ZK?
> Check the port on the admin screen. 9983 is the default for embededded ZK.
>
> All that said, nothing in Solr just deletes all this. The fact that you
> only saw this on reboot is highly suspicious, some external-to-Solr
> process, anything from a startup script to restoring a disk image to…. is
> removing that data I suspect.
>
> Best,
> Erick
>
> > On Aug 22, 2020, at 9:24 AM, yaswanth kumar wrote:
> >
> > Thanks Eric for looking into this..
> >
> > But as I said before I confirmed that the paths in zookeeper were
> > changed to local path than the /tmp that comes default with package. Does
> > the zoo.cfg need to have autopurge settings ??which I don’t have in my
> > config
> >
> > Also I did make sure that zoo.cfg inside solr and my external zoo are
> > pointing to the same and have same configs if it matters.
> > > > Sent from my iPhone > > > >> On Aug 22, 2020, at 9:07 AM, Erick Erickson > wrote: > >> > >> Sounds like you didn’t change Zookeeper data dir. Zookeeper defaults > to putting its data in /tmp/zookeeper, see the zookeeper config file. And, > of course, when you reboot it goes away. > >> > >> I’ve always disliked this, but the Zookeeper folks did it that way. So > if you just copy zoo_sample.cfg to zoo.cfg that’s what you get, not under > Solr’s control. > >> > >> As for how to recover, assuming you put your configsets in some kind of > version control as we recommend: > >> > >> 0> set up Zookeeper to keep it’s data somewhere permanent. You may want > to archive snapshots upon occasion as well.
Re: All cores gone along with all solr configuration upon reboot
Thanks Erick for looking into this. But as I said before, I confirmed that the zookeeper paths were changed to a local path rather than the /tmp default that comes with the package. Does zoo.cfg need to have autopurge settings? I don't have those in my config. Also, I made sure that the zoo.cfg inside solr and my external zoo point to the same place and have the same configs, if that matters.

Sent from my iPhone

> On Aug 22, 2020, at 9:07 AM, Erick Erickson wrote:
>
> Sounds like you didn’t change Zookeeper data dir. Zookeeper defaults to
> putting its data in /tmp/zookeeper, see the zookeeper config file. And, of
> course, when you reboot it goes away.
>
> I’ve always disliked this, but the Zookeeper folks did it that way. So if you
> just copy zoo_sample.cfg to zoo.cfg that’s what you get, not under Solr’s
> control.
>
> As for how to recover, assuming you put your configsets in some kind of
> version control as we recommend:
>
> 0> set up Zookeeper to keep it’s data somewhere permanent. You may want to
> archive snapshots upon occasion as well.
> > Best, > Erick > >> On Aug 22, 2020, at 12:10 AM, yaswanth kumar wrote: >> >> Can someone help me on the below issue?? >> >> I have configured solr 8.2 with one zookeeper 3.4 and 3 solr nodes >> >> All the configs were pushed initially and Also Indexed all the data into >> multiple collections with 3 replicas on each collection >> >> Now part of server maintenance these solr nodes were restarted and once they >> came back solr could became empty.. lost all the collections .. all >> collections specific instance directories in the path /solr/server/solr >> Were deleted ..but data folders are intact nothing lost.. not really sure on >> how to recover from this situation. >> >> Did make sure that the zoo.cfg was properly configured (permanent paths for >> zoo data and logs instead of /tmp )as I am using external zoo instead of the >> one that comes with solr. >> >> Solr data path is a nas storage which is a common for all three solr nodes >> >> Another data point is that I enabled solr basic authentication as well if >> that’s making any difference. Even clusterstate , schema’s, security Json >> were all lost.. really looking for a help in understanding to prevent this >> happening again. >> >> Sent from my iPhone >
All cores gone along with all solr configuration upon reboot
Can someone help me with the below issue?

I have configured solr 8.2 with one zookeeper 3.4 and 3 solr nodes.

All the configs were pushed initially, and I also indexed all the data into multiple collections with 3 replicas on each collection.

As part of server maintenance these solr nodes were restarted, and once they came back the solr cloud was empty: all the collections were lost, and all the collection-specific instance directories under /solr/server/solr were deleted. The data folders are intact, nothing lost there, but I am not really sure how to recover from this situation.

I did make sure that zoo.cfg was properly configured (permanent paths for zoo data and logs instead of /tmp), as I am using an external zookeeper instead of the one that comes with solr.

The solr data path is NAS storage that is common to all three solr nodes.

Another data point: I enabled solr basic authentication as well, if that makes any difference. Even the clusterstate, schemas, and security.json were all lost. I am really looking for help in understanding how to prevent this from happening again.

Sent from my iPhone
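Since the usual culprit for cluster state vanishing on reboot is ZooKeeper writing under /tmp, it is worth double-checking the zoo.cfg the running process actually loaded. A sketch of the relevant settings (the paths are examples, not the poster's actual ones):

```ini
# zoo.cfg -- dataDir must live on a persistent disk, never under /tmp,
# or the cluster metadata is gone after every reboot.
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper/data
dataLogDir=/var/lib/zookeeper/logs
clientPort=2181
# Optional housekeeping: prune old snapshots. This only limits disk
# growth; it neither causes nor prevents the data loss described above.
autopurge.snapRetainCount=3
autopurge.purgeInterval=24
```

Also verify which config file the running ZooKeeper process was started with (the ps output shows the path), since an unused copy of zoo.cfg with correct paths proves nothing.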
Deleted collection is getting back after restart
I am using solr with a zookeeper ensemble. Sometimes when we delete a collection with the solr API, it disappears from the solr cloud, but days later, when the machines are rebooted, it comes back on the cloud in down status. I am not really sure if this is an issue with zookeeper not persisting the delete in the clusterstate (even the clusterstate shows these nodes back in down status).

A similar issue happens when we update the schema: when we make modifications to a solr schema and upload it via zookeeper, it works fine until the next reboot of the solr boxes; once the reboot is done, for some reason the older version of the schema comes back.

Is it mandatory to restart zookeeper after the above two operations?

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
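One way to narrow down whether the delete ever reached ZooKeeper is to inspect what the ensemble holds right after the delete, before any reboot. A sketch using the zk helper in Solr 8's bin/solr script (hostnames are placeholders):

```shell
# Placeholder ensemble address -- substitute your own ZK host:port list.
ZK="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181"

# List the collections ZooKeeper currently knows about:
CMD="bin/solr zk ls /collections -z $ZK"
echo "$CMD"

# If the deleted collection is still listed, the delete never reached this
# ensemble (e.g. the API call went to a node using a different or embedded
# ZK). If it is absent but returns after a reboot, something is restoring
# old ZK state (a snapshot restore, a stray second ZK instance, etc.).
```

The same check applies to schemas: `bin/solr zk ls /configs -z "$ZK"` shows which configsets ZooKeeper really has. No ZooKeeper restart should be needed after a collection delete or a config upload.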
Adding additional zookeeper on top
Hi Team,

Can someone let me know if we can upgrade from a standalone zookeeper to an ensemble? I have 3 solr nodes with one zookeeper running on one of them, in a solr cloud. Can I install zookeeper on another node so there is no single point of failure when the solr node that hosts zookeeper goes down?

I also want to understand the best formula for choosing the number of zookeepers needed for a solr cloud: for a given number of solr nodes, how many zookeepers should we maintain for the best fault tolerance?

Sent from my iPhone
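On sizing: ZooKeeper needs a strict majority (quorum) to stay up, so ensembles should have an odd number of members regardless of how many Solr nodes there are. Three ZKs tolerate one failure; five tolerate two. Two ZKs are actually worse than one, because losing either member breaks quorum. A sketch of a 3-node ensemble config (hostnames are placeholders; the file is identical on all three ZK hosts):

```ini
# zoo.cfg, identical on every ZooKeeper host in the ensemble
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper/data
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
# Each host also needs a "myid" file in dataDir containing just its
# own server number (1, 2, or 3).
```

Solr is then pointed at the whole ensemble, e.g. ZK_HOST="zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181" in solr.in.sh.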
Re: wt=xml not defaulting the results to xml format
Thanks for looking into this, Erick.

solr/PROXIMITY_DATA_V2/select?q=pkey:223_*&group=true&group.field=country_en&fl=country_en

That's the url I am hitting. I also made sure that initParams is fully commented out, and that there is no uncommented initParams section defined anywhere. From the solrcloud UI I made sure I am checking the correct collection, and I verified solrconfig.xml by choosing the collection and browsing the files within that same collection. Whatever I try is not working, other than sending wt=xml as a parameter on the url.

Thanks,

On Fri, Aug 7, 2020 at 10:31 AM Erick Erickson wrote:
> Please show us the _exact_ URL you’re sending as well as the response
> header, particularly the echoed params.
>
> This is a long shot, but also take a look at any “initParams” sections in
> solrconfig.xml. The “wt” parameter you’ve specified in your select handler
> should override anything in the section of initParams. But
> you’re handler is specifying wt in the defualts section, if your initParams
> have the json wt specified in an invariants section that would control.
>
> I also recommend you look at your solrconfig through the admin UI, that
> insures that you’re looking at the same solrconfig that your collection is
> actually using. Then check your collections/ to
> double check that your collection is using the configset you think it is.
> This latter assumes SolrCloud.
>
> This is likely something in your configurations that is not as you expect.
>
> Best,
> Erick
>
> > On Aug 7, 2020, at 10:19 AM, yaswanth kumar wrote:
> >
> > Thanks Shawn, for looking into this.
> >
> > I did make sure that no explicit parameter wt is being sent and also
> > verified the logs and even that's not showing up any extra parameters. But
> > it's always taking json as a default, unless I pass it explicitly as wt=xml
> > which I don't want to do it here. Is there something else that I need to do
> > ?

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
> > > > On Fri, Aug 7, 2020 at 4:23 AM Shawn Heisey wrote: > > > >> How are you sending the query request that doesn't come back as xml? I > >> suspect that the request is being sent with an explicit wt parameter > set to > >> something other than xml. Making a query with the admin ui would do > this, > >> and it would probably default to json. > >> > >> When you make a query, assuming you haven't changed the logging config, > >> every parameter in that request can be found in the log entry for the > >> query, including those that come from the solrconfig.xml. > >> > >> Sorry about the top posted reply. It's the only option on this email > app. > >> My computer isn't available so I'm on my phone. > >> > >> Get TypeApp for Android > >> > >> On Aug 6, 2020, 21:52, at 21:52, yaswanth kumar > >> wrote: > >>> Can someone help me on this ASAP? I am using solr 8.2.0 and below is > >>> the > >>> snippet from solrconfig.xml for one of the configset, where I am trying > >>> to > >>> default the results into xml format but its giving me as a json result. > >>> > >>> > >>> > >>> > >>> all > >>> 10 > >>> > >>>pkey > >>>xml > >>> > >>> > >>> Can some one let me know if I need to do something more to always get a > >>> solr /select query results as XML?? > >>> -- > >>> Thanks & Regards, > >>> Yaswanth Kumar Konathala. > >>> yaswanth...@gmail.com > >> > >> > > > > -- > > Thanks & Regards, > > Yaswanth Kumar Konathala. > > yaswanth...@gmail.com > > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Re: wt=xml not defaulting the results to xml format
Thanks Shawn, for looking into this. I did make sure that no explicit parameter wt is being sent and also verified the logs and even that's not showing up any extra parameters. But it's always taking json as a default, unless I pass it explicitly as wt=xml which I don't want to do it here. Is there something else that I need to do ? On Fri, Aug 7, 2020 at 4:23 AM Shawn Heisey wrote: > How are you sending the query request that doesn't come back as xml? I > suspect that the request is being sent with an explicit wt parameter set to > something other than xml. Making a query with the admin ui would do this, > and it would probably default to json. > > When you make a query, assuming you haven't changed the logging config, > every parameter in that request can be found in the log entry for the > query, including those that come from the solrconfig.xml. > > Sorry about the top posted reply. It's the only option on this email app. > My computer isn't available so I'm on my phone. > > Get TypeApp for Android > > On Aug 6, 2020, 21:52, at 21:52, yaswanth kumar > wrote: > >Can someone help me on this ASAP? I am using solr 8.2.0 and below is > >the > >snippet from solrconfig.xml for one of the configset, where I am trying > >to > >default the results into xml format but its giving me as a json result. > > > > > > > > > > all > > 10 > > > > pkey > > xml > > > > > >Can some one let me know if I need to do something more to always get a > >solr /select query results as XML?? > >-- > >Thanks & Regards, > >Yaswanth Kumar Konathala. > >yaswanth...@gmail.com > > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
wt=xml not defaulting the results to xml format
Can someone help me on this ASAP? I am using solr 8.2.0, and below is the snippet from solrconfig.xml for one of the configsets, where I am trying to default the results to xml format, but it is giving me a json result.

all 10 pkey xml

Can someone let me know if I need to do something more to always get solr /select query results as XML?

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
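The mailing-list archive stripped the XML tags from the snippet above, leaving only the values "all 10 pkey xml". A /select handler defaulting to XML would look roughly like the following; this is a reconstruction from those surviving values, not necessarily the poster's exact config:

```xml
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">all</str>
    <int name="rows">10</int>
    <str name="df">pkey</str>
    <str name="wt">xml</str>
  </lst>
</requestHandler>
```

As the thread discusses, a `wt` set in an `<initParams>` invariants section elsewhere in solrconfig.xml would override a handler-level default like this one.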
Re: zookeeper data and collection properties were lost
Thanks Erick for the quick response. Here are my responses to your questions:

1. I did make sure that zoo.cfg has the proper data dir and is not pointing to a temp folder; do I need to set the variables in ZK_ENV.sh as well, on top of zoo.cfg?

2. I can confirm that we are not using the embedded zookeeper but a standalone zookeeper 3.4.14, and the admin UI shows what we configured (port 2181).

Here is my confusion: as I said, we have a two-node architecture in DEV but maintain only one instance of zookeeper. Is it true that I need to maintain the same folder structure specified in the dataDir of zoo.cfg on both nodes?

Thanks,

On Mon, Jul 20, 2020 at 12:22 PM Erick Erickson wrote:
> Some possibilities:
>
> 1> you haven’t changed your data dir for Zookeeper from the default
> "/tmp/zookeeper”
>
> 2> you aren’t pointing to the Zookeepers you think you are. In particular
> are you running embedded zookeeper? This should be apparent if you look on
> the admin page ant the zookeeper URLs you’re pointing at are on port 9983
>
> this is almost certainly some kind of misconfiguration, zookeeper data
> doesn’t just disappear on its own that I know of. The admin UI will also
> show you the exact parameters that Solr starts up with, check that they’re
> all pointing to the ZK ensemble you expect and that the data directory is
> preserved across restarts/reboots etc.
> > Best, > Erick > > > On Jul 20, 2020, at 12:02 PM, yaswanth kumar > wrote: > > > > HI Team, > > > > Can someone help me understand on what could be the reason to lose both > > zookeeper data and also the collection information that will be stored > for > > each collection in the path ../solr/server/solr/ > > > > Here are the details of what versions that we use > > > > Solr - 8.2 > > Zookeeper 3.4.14 > > > > Two node solr cloud with zookeeper on single node, and when ever we see > an > > issue with networking between these two nodes, and once the connectivity > is > > restored, but when we restart the zookeeper service , everything was lost > > under /zookeeper_data/version-2/ and also the collection folders that > used > > to exists under ../solr/server/solr/ > > > > *Note*: We are testing this in DEV environment, but with this behavior we > > are afraid of moving this to production without knowing if that's an > issue > > with some configuration or zookeeper behavior and we need to adjust > > something else to not to wipe out the configs. > > > > -- > > Thanks & Regards, > > Yaswanth Kumar Konathala. > > yaswanth...@gmail.com > > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
zookeeper data and collection properties were lost
Hi Team,

Can someone help me understand what could cause us to lose both the zookeeper data and the per-collection information stored under the path ../solr/server/solr/?

Here are the versions that we use:

Solr - 8.2
Zookeeper 3.4.14

This is a two-node solr cloud with zookeeper on a single node. Whenever we see a networking issue between the two nodes and then restart the zookeeper service after connectivity is restored, everything under /zookeeper_data/version-2/ is lost, along with the collection folders that used to exist under ../solr/server/solr/.

*Note*: We are testing this in a DEV environment, but with this behavior we are afraid to move to production without knowing whether it is a configuration issue or expected zookeeper behavior, and whether we need to adjust something else so the configs are not wiped out.

-- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Re: solr fq with contains not returning any results
Can I get some traction on to this?? --Yaswanth On Wed, Jun 24, 2020 at 2:18 PM yaswanth kumar wrote: > Thanks Erick, > > I have now added &debug=query and found a diff between old solr and new > solr > > new solr (8.2) which is not giving results is as follows > > "debug":{ > "rawquerystring":"*:*", > "querystring":"*:*", > "parsedquery":"MatchAllDocsQuery(*:*)", > "parsedquery_toString":"*:*", > "explain":{}, > "QParser":"LuceneQParser", > "filter_queries":["auto_nsallschools:*bostonschool*"], > "parsed_filter_queries":["auto_nsallschools:_star_bostonschool_star_"], > > Where as solr 5.5 which is getting me the results is as follows > > "debug":{ > "rawquerystring":"*:*", > "querystring":"*:*", > "parsedquery":"MatchAllDocsQuery(*:*)", > "parsedquery_toString":"*:*", > "explain":{}, > "QParser":"LuceneQParser", > "filter_queries":["auto_nsallschools:*bostonschool*"], > "parsed_filter_queries":["auto_nsallschools:*bostonschool*"], > > I know in schema there are analyzer against this field but not getting on > why its making differences here. > > Thanks, > > On Wed, Jun 24, 2020 at 9:24 AM Erick Erickson > wrote: > >> You need to do several things to track down why. >> >> First, use something (admin UI, terms query, etc) to see >> exactly what’s in your index. The admin/analysis screen is useful here. >> >> Second, aldd &debug=query to the query on both machines and >> see what the actual parsed query looks like. >> >> Comparing those should give you a clue. >> >> Best, >> Erick >> >> > On Jun 24, 2020, at 9:20 AM, yaswanth kumar >> wrote: >> > >> > "nsallschools":["BostonSchool"] >> > >> > That's how the data is stored against the field. 
>> > >> > We have a functionality where we can do "Starts with, Contains, Ends >> with"; >> > Also if you look at the above schema we are using >> > >> > > > synonyms="punctuation-whitelist.txt" ignoreCase="true" expand="false"/> >> > >> > >> > >> > >> > Also the strange part is that its working fine in Solr 5.5 but not in >> Solr >> > 8.2 any thoughts?? >> > >> > Thanks, >> > >> > On Wed, Jun 24, 2020 at 3:15 AM Jörn Franke >> wrote: >> > >> >> I don’t know your data, but could it be that you tokenize differently ? >> >> >> >> Why do you do the wildcard search at all? Maybe a different tokenizing >> >> strategy can bring you more effieciently results? Depends on what you >> need >> >> to achieve of course ... >> >> >> >>> Am 24.06.2020 um 05:37 schrieb yaswanth kumar > >: >> >>> >> >>> I am using solr 8.2 >> >>> >> >>> And when trying to do fq=auto_nsallschools:*bostonschool*, the data is >> >> not >> >>> being returned. But if I do the same in solr 5.5 (which I already have >> >> and >> >>> we are in process of migrating to 8.2 ) its returning results. >> >>> >> >>> if I do fq=auto_nsallschools:bostonschool >> >>> or >> >>> fq=auto_nsallschools:bostonschool* its returning results but when I >> try >> >>> with contains like described above or >> fq=auto_nsallschools:*bostonschool >> >>> (ends with) it's not returning any results. >> >>> >> >>> The field which we are already using is a copy field and multi valued, >> >> am I >> >>> doing something wrong? or does 8.2 need some adjustment in the >> configs? >> >>> >> >>> Here is the schema >> >>> >> >>> > >> stored="true" >> >>> multiValued="true"/> >> >>> > indexed="true" >> >>> stored="false" multiValued="true"/> >> >>> >> >>> >> >>> >> >>> > >>> positionIncrementGap="100"> >> &g
Re: solr fq with contains not returning any results
Thanks Erick,

I have now added &debug=query and found a diff between the old Solr and the new Solr.

The new Solr (8.2), which is not returning results, gives:

  "debug":{
    "rawquerystring":"*:*",
    "querystring":"*:*",
    "parsedquery":"MatchAllDocsQuery(*:*)",
    "parsedquery_toString":"*:*",
    "explain":{},
    "QParser":"LuceneQParser",
    "filter_queries":["auto_nsallschools:*bostonschool*"],
    "parsed_filter_queries":["auto_nsallschools:_star_bostonschool_star_"],

whereas Solr 5.5, which does return results, gives:

  "debug":{
    "rawquerystring":"*:*",
    "querystring":"*:*",
    "parsedquery":"MatchAllDocsQuery(*:*)",
    "parsedquery_toString":"*:*",
    "explain":{},
    "QParser":"LuceneQParser",
    "filter_queries":["auto_nsallschools:*bostonschool*"],
    "parsed_filter_queries":["auto_nsallschools:*bostonschool*"],

I know there are analyzers defined against this field in the schema, but I don't see why they make a difference here.

Thanks,

On Wed, Jun 24, 2020 at 9:24 AM Erick Erickson wrote:

> You need to do several things to track down why.
>
> First, use something (admin UI, terms query, etc) to see
> exactly what’s in your index. The admin/analysis screen is useful here.
>
> Second, add &debug=query to the query on both machines and
> see what the actual parsed query looks like.
>
> Comparing those should give you a clue.
>
> Best,
> Erick
>
> > On Jun 24, 2020, at 9:20 AM, yaswanth kumar wrote:
> >
> > "nsallschools":["BostonSchool"]
> >
> > That's how the data is stored against the field.
> >
> > We have a functionality where we can do "Starts with, Contains, Ends with";
> > also if you look at the above schema we are using
> >
> > synonyms="punctuation-whitelist.txt" ignoreCase="true" expand="false"/>
> >
> > Also the strange part is that it's working fine in Solr 5.5 but not in
> > Solr 8.2, any thoughts?
> >
> > Thanks,
> >
> > On Wed, Jun 24, 2020 at 3:15 AM Jörn Franke wrote:
> >
> >> I don’t know your data, but could it be that you tokenize differently?
> >>
> >> Why do you do the wildcard search at all? Maybe a different tokenizing
> >> strategy can bring you results more efficiently? Depends on what you need
> >> to achieve of course ...
> >>
> >>> Am 24.06.2020 um 05:37 schrieb yaswanth kumar :
> >>>
> >>> I am using solr 8.2
> >>>
> >>> When trying to do fq=auto_nsallschools:*bostonschool*, the data is not
> >>> being returned. But if I do the same in solr 5.5 (which I already have
> >>> and we are in the process of migrating to 8.2) it returns results.
> >>>
> >>> If I do fq=auto_nsallschools:bostonschool or
> >>> fq=auto_nsallschools:bostonschool* it returns results, but when I try
> >>> a contains like described above, or fq=auto_nsallschools:*bostonschool
> >>> (ends with), it doesn't return any results.
> >>>
> >>> The field we are using is a copy field and multivalued; am I
> >>> doing something wrong, or does 8.2 need some adjustment in the configs?
> >>>
> >>> Here is the schema
> >>>
> >>> stored="true" multiValued="true"/>
> >>> indexed="true" stored="false" multiValued="true"/>
> >>>
> >>> positionIncrementGap="100">
> >>>
> >>> positionIncrementGap="100">
> >>>
> >>> pattern="(\&)" replacement="_and_" />
> >>> pattern="(\$)" replacement="_dollar_" />
> >>> pattern="(\*)" replacement="_star_" />
> >>> pattern="(\+)" replacem
Re: solr fq with contains not returning any results
"nsallschools":["BostonSchool"]

That's how the data is stored against the field.

We have a functionality where we can do "Starts with, Contains, Ends with";
also if you look at the above schema we are using a filter with
synonyms="punctuation-whitelist.txt" ignoreCase="true" expand="false".

Also the strange part is that it's working fine in Solr 5.5 but not in Solr 8.2, any thoughts?

Thanks,

On Wed, Jun 24, 2020 at 3:15 AM Jörn Franke wrote:

> I don’t know your data, but could it be that you tokenize differently?
>
> Why do you do the wildcard search at all? Maybe a different tokenizing
> strategy can bring you results more efficiently? Depends on what you need
> to achieve of course ...
>
> > Am 24.06.2020 um 05:37 schrieb yaswanth kumar :
> >
> > I am using solr 8.2
> >
> > When trying to do fq=auto_nsallschools:*bostonschool*, the data is not
> > being returned. But if I do the same in solr 5.5 (which I already have
> > and we are in the process of migrating to 8.2) it returns results.
> >
> > If I do fq=auto_nsallschools:bostonschool or
> > fq=auto_nsallschools:bostonschool* it returns results, but when I try
> > a contains like described above, or fq=auto_nsallschools:*bostonschool
> > (ends with), it doesn't return any results.
> >
> > The field we are using is a copy field and multivalued; am I
> > doing something wrong, or does 8.2 need some adjustment in the configs?
> > > > Here is the schema > > > > stored="true" > > multiValued="true"/> > > > stored="false" multiValued="true"/> > > > > > > > > > positionIncrementGap="100"> > > > > > > > > > > > > > > > > > > > positionIncrementGap="100"> > > > > > pattern="(\&)" replacement="_and_" /> > > > pattern="(\$)" replacement="_dollar_" /> > > > pattern="(\*)" replacement="_star_" /> > > > pattern="(\+)" replacement="_plus_" /> > > > pattern="(\-)" replacement="_minus_" /> > > > pattern="(\#)" replacement="_sharp_" /> > > > pattern="(\%)" replacement="_percent_" /> > > > pattern="(\=)" replacement="_equal_" /> > > > pattern="(\<)" replacement="_lessthan_" /> > > > pattern="(\>)" replacement="_greaterthan_" /> > > > pattern="(\€)" replacement="_euro_" /> > > > pattern="(\¢)" replacement="_cent_" /> > > > pattern="(\£)" replacement="_pound_" /> > > > pattern="(\¥)" replacement="_yuan_" /> > > > pattern="(\©)" replacement="_copyright_" /> > > > pattern="(\®)" replacement="_registered_" /> > > > pattern="(\|)" replacement="_pipe_" /> > > > pattern="(\^)" replacement="_caret_" /> > > > pattern="(\~)" replacement="_tilt_" /> > > > pattern="(\™)" replacement="_treadmark_" /> > > > pattern="(\@)" replacement="_at_" /> > > > pattern="(\")" replacement=" _doublequote_ " /> > > > pattern="(\()" replacement=" _leftparentheses_ " /> > > > pattern="(\))" replacement=" _rightparentheses_ " /> > > > pattern="(\{)" replacement="_leftcurlybracket_" /> > > > pattern="(\})" replacement="_rightcurlybracket_" /> > > > pattern="(\[)" replacement="_leftsquarebracket_" /> > > > pattern="(\])" replacement="_rightsquarebracket_" /> > > > synonyms="punctuation-whitelist.txt" ignoreCase="true" expand="false"/> > > > > > > > > > > > > > > Thanks, > > > > -- > > Thanks & Regards, > > Yaswanth Kumar Konathala. > > yaswanth...@gmail.com > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
solr fq with contains not returning any results
I am using solr 8.2.

When trying to do fq=auto_nsallschools:*bostonschool*, the data is not being returned. But if I do the same in solr 5.5 (which I already have, and we are in the process of migrating to 8.2) it returns results.

If I do fq=auto_nsallschools:bostonschool or fq=auto_nsallschools:bostonschool* it returns results, but when I try a contains like described above, or fq=auto_nsallschools:*bostonschool (ends with), it doesn't return any results.

The field we are using is a copy field and multivalued; am I doing something wrong, or does 8.2 need some adjustment in the configs?

Here is the schema

Thanks,

--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
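One plausible explanation for the 5.5-vs-8.2 difference, consistent with the &debug=query output shown in this thread, is that the schema's pattern-replace rules are being applied to the wildcard (multiterm) query text in 8.2, so the literal `*` characters get rewritten to `_star_` before the wildcard is parsed. A rough Python sketch of that effect (only the `*` rule is copied from the schema fragment; everything else here is illustrative, not Solr code):

```python
import re

# The schema maps punctuation to words, e.g. '*' -> '_star_' (other rules omitted).
RULES = [(re.compile(r"\*"), "_star_")]

def pattern_replace(text: str) -> str:
    """Mimic a pattern-replace char filter running over raw text."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

# Index time: "BostonSchool" contains no '*', so the indexed term is untouched
# (lowercased by the rest of the chain): "bostonschool".
indexed_term = pattern_replace("BostonSchool").lower()

# Query time in 8.2: if the same rules run during multiterm analysis, the
# wildcard query text is rewritten before the '*' can act as a wildcard:
rewritten = pattern_replace("*bostonschool*")

print(indexed_term)  # bostonschool
print(rewritten)     # _star_bostonschool_star_ -- matches the 8.2 debug output
```

If that is what is happening, the next things to try would be testing the field on the admin/analysis screen as Erick suggests, and defining an explicit `analyzer type="multiterm"` on the field type that leaves the `*` untouched.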
Re: Solr cloud backup/restore not working
Hi Vinodh, Here is what I see when I tried with requestid, Collection: test operation: restore failed:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler$ShardRequestTracker.processResponses(OverseerCollectionMessageHandler.java:1030) at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler$ShardRequestTracker.processResponses(OverseerCollectionMessageHandler.java:1013) at org.apache.solr.cloud.api.collections.AddReplicaCmd.lambda$addReplica$1(AddReplicaCmd.java:177) at org.apache.solr.cloud.api.collections.AddReplicaCmd$$Lambda$746/.run(Unknown Source) at org.apache.solr.cloud.api.collections.AddReplicaCmd.addReplica(AddReplicaCmd.java:199) at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.addReplica(OverseerCollectionMessageHandler.java:708) at org.apache.solr.cloud.api.collections.RestoreCmd.call(RestoreCmd.java:286) at org.apache.solr.cloud.api.collections.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:264) at org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:505) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209) at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$142/.run(Unknown Source) at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) at java.base/java.lang.Thread.run(Thread.java:834) Caused by: org.apache.solr.common.SolrException: javax.crypto.BadPaddingException: RSA private key operation failed at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:325) at org.apache.solr.security.PKIAuthenticationPlugin.generateToken(PKIAuthenticationPlugin.java:305) at 
org.apache.solr.security.PKIAuthenticationPlugin.access$200(PKIAuthenticationPlugin.java:61) at org.apache.solr.security.PKIAuthenticationPlugin$2.onQueued(PKIAuthenticationPlugin.java:239) at org.apache.solr.client.solrj.impl.Http2SolrClient.decorateRequest(Http2SolrClient.java:468) at org.apache.solr.client.solrj.impl.Http2SolrClient.makeRequest(Http2SolrClient.java:455) at org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:364) at org.apache.solr.client.solrj.impl.Http2SolrClient.request(Http2SolrClient.java:746) at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1274) at org.apache.solr.handler.component.HttpShardHandler.request(HttpShardHandler.java:238) at org.apache.solr.handler.component.HttpShardHandler.lambda$submit$0(HttpShardHandler.java:199) at org.apache.solr.handler.component.HttpShardHandler$$Lambda$529/.call(Unknown Source) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515) at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181) ... 5 more Caused by: javax.crypto.BadPaddingException: RSA private key operation failed at java.base/sun.security.rsa.NativeRSACore.crtCrypt_Native(NativeRSACore.java:149) at java.base/sun.security.rsa.NativeRSACore.rsa(NativeRSACore.java:91) at java.base/sun.security.rsa.RSACore.rsa(RSACore.java:149) at java.base/com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:355) at java.base/com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:392) at java.base/javax.crypto.Cipher.doFinal(Cipher.java:2260) at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:323) Thanks, On Wed, Jun 17, 2020 at 8:08 AM Kommu, Vinodh K. wrote: > Hi, > > What is the log level defined for solr nodes? Did you used requestid in > restore command? 
> If so, check the status of the requestid and see whether it points to any errors.
>
> Thanks & Regards,
> Vinodh
>
> -----Original Message-----
> From: yaswanth kumar
> Sent: Wednesday, June 17, 2020 4:33 PM
> To: solr-user@lucene.apache.org
> Subject: Re: Solr cloud backup/restore not working
>
> Can someone please guide me on where I can get a more detailed error for the
> above exception while doing the restore? All that I see in solr.log was
> pasted above.
>
> Thanks,
>
> On Tue, Jun 16, 2020 at 10:44 AM yaswanth kumar wrote:
>
> > I don't see anything related in the solr.log file for the same error.
> > Not sure if there is any other place where I can check for this.
> >
> > Thanks,
> >
> > On Tue, Jun 16, 2020 at 10:21 AM Shawn Heisey wrote:
> >
> >> On
Re: Solr cloud backup/restore not working
Can someone please guide me on where can I get more detailed error of the above exception while doing restore?? All that I see in solr.log was pasted above Thanks, On Tue, Jun 16, 2020 at 10:44 AM yaswanth kumar wrote: > I don't see anything related in the solr.log file for the same error. Not > sure if there is anyother place where I can check for this. > > Thanks, > > On Tue, Jun 16, 2020 at 10:21 AM Shawn Heisey wrote: > >> On 6/12/2020 8:38 AM, yaswanth kumar wrote: >> > Using solr 8.2.0 and setup a cloud with 2 nodes. (2 replica's for each >> > collection) >> > Enabled basic authentication and gave all access to the admin user >> > >> > Now trying to use solr cloud backup/restore API, backup is working >> great, >> > but when trying to invoke restore API its throwing the below error >> >> > "msg":"ADDREPLICA failed to create replica", >> > "trace":"org.apache.solr.common.SolrException: ADDREPLICA failed to >> > create replica\n\tat >> > >> org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat >> > >> org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280)\n\tat >> >> The underlying cause of this exception is not recorded here. Are there >> other entries in the Solr log with more detailed information from the >> ADDREPLICA attempt? >> >> Thanks, >> Shawn >> > > > -- > Thanks & Regards, > Yaswanth Kumar Konathala. > yaswanth...@gmail.com > -- Thanks & Regards, Yaswanth Kumar Konathala. yaswanth...@gmail.com
Re: Solr cloud backup/restore not working
I don't see anything related in the solr.log file for the same error. Not sure if there is any other place where I can check for this.

Thanks,

On Tue, Jun 16, 2020 at 10:21 AM Shawn Heisey wrote:

> On 6/12/2020 8:38 AM, yaswanth kumar wrote:
> > Using solr 8.2.0 and setup a cloud with 2 nodes. (2 replicas for each
> > collection)
> > Enabled basic authentication and gave all access to the admin user
> >
> > Now trying to use the solr cloud backup/restore API; backup is working
> > great, but when trying to invoke the restore API it's throwing the below error
> >
> > "msg":"ADDREPLICA failed to create replica",
> > "trace":"org.apache.solr.common.SolrException: ADDREPLICA failed to
> > create replica\n\tat
> > org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat
> > org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280)\n\tat
>
> The underlying cause of this exception is not recorded here. Are there
> other entries in the Solr log with more detailed information from the
> ADDREPLICA attempt?
>
> Thanks,
> Shawn
>

--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
Re: Solr cloud backup/restore not working
Sure I pasted it below from the solr logfiles.. 2020-06-16 14:06:27.000 INFO (qtp1987693491-153) [c:test ] o.a.s.s.HttpSolrCall [admin] webapp=null path=/admin/collections params={name=test&action=RESTORE&location=/opt/$ 2020-06-16 14:06:27.001 ERROR (qtp1987693491-153) [c:test ] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: ADDREPLICA failed to create replica at org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53) at org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280) at org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:252) at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199) at org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:820) at org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:786) at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:546) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423) at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350) at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146) at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257) at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711) at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255) at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203) at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480) at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678) at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201) at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249) at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144) at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220) at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132) at org.eclipse.jetty.server.Server.handle(Server.java:505) at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370) at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267) at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427) at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321) at org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159) at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103) at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168) at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126) at 
org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.run(ReservedThreadExecutor.java:366) at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:781) at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917) at java.base/java.lang.Thread.run(Thread.java:834) Can you please review and let me know if I am missing something?? On Tue, Jun 16, 2020 at 3:15 AM Jörn Franke wrote: > Have you looked in the Solr logfiles? > > > Am 16.06.2020 um 05:46 schrieb yaswanth kumar : > > > > Can anyone here help on the posted question pls?? > > > >> On Fri, Jun
Re: Solr cloud backup/restore not working
Can anyone here help on the posted question pls?? On Fri, Jun 12, 2020 at 10:38 AM yaswanth kumar wrote: > Using solr 8.2.0 and setup a cloud with 2 nodes. (2 replica's for each > collection) > Enabled basic authentication and gave all access to the admin user > > Now trying to use solr cloud backup/restore API, backup is working great, > but when trying to invoke restore API its throwing the below error > > { > "responseHeader":{ > "status":500, > "QTime":349}, > "Operation restore caused > exception:":"org.apache.solr.common.SolrException:org.apache.solr.common.SolrException: > ADDREPLICA failed to create replica", > "exception":{ > "msg":"ADDREPLICA failed to create replica", > "rspCode":500}, > "error":{ > "metadata":[ > "error-class","org.apache.solr.common.SolrException", > "root-error-class","org.apache.solr.common.SolrException"], > "msg":"ADDREPLICA failed to create replica", > "trace":"org.apache.solr.common.SolrException: ADDREPLICA failed to > create replica\n\tat > org.apache.solr.client.solrj.SolrResponse.getException(SolrResponse.java:53)\n\tat > org.apache.solr.handler.admin.CollectionsHandler.invokeAction(CollectionsHandler.java:280)\n\tat > org.apache.solr.handler.admin.CollectionsHandler.handleRequestBody(CollectionsHandler.java:252)\n\tat > org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)\n\tat > org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:820)\n\tat > org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:786)\n\tat > org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:546)\n\tat > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:423)\n\tat > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:350)\n\tat > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)\n\tat > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)\n\tat > 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)\n\tat > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)\n\tat > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)\n\tat > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1711)\n\tat > org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)\n\tat > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1347)\n\tat > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)\n\tat > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:480)\n\tat > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1678)\n\tat > org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)\n\tat > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1249)\n\tat > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)\n\tat > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:220)\n\tat > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:152)\n\tat > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)\n\tat > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)\n\tat > org.eclipse.jetty.server.Server.handle(Server.java:505)\n\tat > org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:370)\n\tat > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:267)\n\tat > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:305)\n\tat > 
org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n\tat > org.eclipse.jetty.io.ssl.SslConnection$DecryptedEndPoint.onFillable(SslConnection.java:427)\n\tat > org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:321)\n\tat > org.eclipse.jetty.io.ssl.SslConnection$2.succeeded(SslConnection.java:159)\n\tat > org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)\n\tat > org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)\n\tat > org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)\n\tat > org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(Eat
Solr cloud backup/restore not working
ty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:917)\n\tat java.base/java.lang.Thread.run(Thread.java:834)\n", "code":500}}

Can someone please help me figure out if I am missing something, or if I need to adjust something in the configuration to make it work?

--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
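One way to get more detail than the synchronous "ADDREPLICA failed to create replica" response is to run the restore asynchronously and then poll REQUESTSTATUS, which reports the overseer-side failure; that is how the underlying BadPaddingException elsewhere in this thread was surfaced. A sketch of building those Collections API calls follows; the node URL, collection name, backup name, and location are placeholders, and you would still need to send the URLs with an HTTP client or curl plus Basic auth:

```python
from urllib.parse import urlencode

# Placeholder node URL; substitute your own cluster's host and port.
BASE = "https://localhost:8983/solr/admin/collections"

def collections_api(action: str, **params: str) -> str:
    """Build a Collections API URL; send it with e.g. curl -u user:pass."""
    return BASE + "?" + urlencode({"action": action, **params})

# Kick off the restore asynchronously so the overseer tracks it under an id:
restore_url = collections_api(
    "RESTORE",
    **{"name": "mybackup", "collection": "test",
       "location": "/opt/backups", "async": "restore-1"})

# Then poll for the detailed outcome (state and any exception) of that id:
status_url = collections_api("REQUESTSTATUS", requestid="restore-1")

print(restore_url)
print(status_url)
```

The `async` parameter is passed via dict unpacking because it is a Python keyword; in the resulting URL it is an ordinary query parameter.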
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
I haven't made any changes to the Jetty XML; I am just using what comes with the Solr package and doing it in solr.in.sh, but I am still seeing the same issue.

Thanks,

On Thu, Jun 4, 2020 at 12:23 PM Jörn Franke wrote:

> I think you should not do it in the Jetty XML.
> Follow the official reference guide.
> It should be in solr.in.sh
>
> https://lucene.apache.org/solr/guide/8_4/enabling-ssl.html
>
> > Am 04.06.2020 um 06:48 schrieb yaswanth kumar :
> >
> > Hi Franke,
> >
> > I suspect it's because of the certificate encryption? But I will wait for
> > you to confirm the same. We are trying to generate certs with RSA 2048
> > and finally combining them into a single JKS, and that's what we are
> > referring to as the keystore and truststore; let me know if that doesn't
> > work or if there is a standard procedure to create these certs.
> >
> > Thanks,
> >
> >> On Wed, Jun 3, 2020 at 8:25 AM yaswanth kumar wrote:
> >>
> >> thanks Franke,
> >>
> >> I now made use of the default jetty-ssl.xml that comes with the solr
> >> package, but the issue is still happening when I try to push data to a
> >> non-leader node.
> >>
> >> Do you still think it's something to do with the configurations?
> >>
> >> Thanks,
> >>
> >>> On Wed, Jun 3, 2020 at 12:29 AM Jörn Franke wrote:
> >>>
> >>> Why in the jetty-ssl.xml?
> >>>
> >>> Should this not be configured in the solr.in.sh?
> >>>
> >>>> Am 03.06.2020 um 00:38 schrieb yaswanth kumar :
> >>>>
> >>>> Thanks Franke, but yes for all these questions I did configure it
> >>>> properly; I made sure to include
> >>>>
> >>>> default="JKS"/>
> >>>> default="JKS"/>
> >>>> in the jetty-ssl.xml along with the keystore and truststore paths.
> >>>> > >>>> Also I have made sure that trusstore exists on all nodes and also I am > >>>> using the same file for both keystore and truststore as below > >>>> >>>> default="./etc/solr-keystore.jks"/> > >>>> >>>> name="solr.jetty.keystore.password" default=""/> > >>>> >>>> default="./etc/solr-keystore.jks"/> > >>>> >>>> name="solr.jetty.truststore.password" default=""/> > >>>> > >>>> also urlScheme for ZK is set to https > >>>> > >>>> > >>>> Also the main error that I posted is the one that I am seeing as a > >>> return > >>>> response where as the below one is what I see from solr logs > >>>> > >>>> 2020-06-02 22:32:04.472 ERROR (qtp984876512-93) [c:default s:shard1 > >>>> r:core_node3 x:default_shard1_replica_n1] o.a.s.s.HttpSolrCall > >>>> null:org.apache.solr.update.processor.Distr$ > >>>> at > >>>> > >>> > org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDistribFinish(DistributedZkUpdateProcessor.java:1189) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1096) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) > >>>> at > >>>> > >>> > org.apache.solr.update.processor.UpdateRe
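For reference, the solr.in.sh route the reference guide recommends typically looks like the fragment below. The paths and passwords are placeholders, not the poster's real values, and the same JKS file is reused for both stores as described in this thread:

```sh
# solr.in.sh SSL settings (placeholder values; see the enabling-ssl guide)
SOLR_SSL_ENABLED=true
SOLR_SSL_KEY_STORE=etc/solr-keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_KEY_STORE_TYPE=JKS
# The same JKS can serve as the truststore, as in this setup:
SOLR_SSL_TRUST_STORE=etc/solr-keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE_TYPE=JKS
# Leave client-cert auth off unless every node presents a client certificate:
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=false
```

With these set, the start script configures Jetty itself, so hand-editing jetty-ssl.xml should not be necessary.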
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
Hi Franke,

I suspect it's because of the certificate encryption? But I will wait for
you to confirm the same. We are trying to generate certs with RSA 2048
and finally combining them into a single JKS, and that's what we are
referring to as the keystore and truststore; let me know if that doesn't
work or if there is a standard procedure to create these certs.

Thanks,

On Wed, Jun 3, 2020 at 8:25 AM yaswanth kumar wrote:

> thanks Franke,
>
> I now made use of the default jetty-ssl.xml that comes with the solr
> package, but the issue is still happening when I try to push data to a
> non-leader node.
>
> Do you still think it's something to do with the configurations?
>
> Thanks,
>
> On Wed, Jun 3, 2020 at 12:29 AM Jörn Franke wrote:
>
>> Why in the jetty-ssl.xml?
>>
>> Should this not be configured in the solr.in.sh?
>>
>> > Am 03.06.2020 um 00:38 schrieb yaswanth kumar :
>> >
>> > Thanks Franke, but yes for all these questions I did configure it
>> > properly; I made sure to include
>> >
>> > default="JKS"/>
>> > default="JKS"/>
>> > in the jetty-ssl.xml along with the keystore and truststore paths.
>> > >> > Also I have made sure that trusstore exists on all nodes and also I am >> > using the same file for both keystore and truststore as below >> > > > default="./etc/solr-keystore.jks"/> >> > > > name="solr.jetty.keystore.password" default=""/> >> > > > default="./etc/solr-keystore.jks"/> >> > > > name="solr.jetty.truststore.password" default=""/> >> > >> > also urlScheme for ZK is set to https >> > >> > >> > Also the main error that I posted is the one that I am seeing as a >> return >> > response where as the below one is what I see from solr logs >> > >> > 2020-06-02 22:32:04.472 ERROR (qtp984876512-93) [c:default s:shard1 >> > r:core_node3 x:default_shard1_replica_n1] o.a.s.s.HttpSolrCall >> > null:org.apache.solr.update.processor.Distr$ >> >at >> > >> org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDistribFinish(DistributedZkUpdateProcessor.java:1189) >> >at >> > >> org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1096) >> >at >> > >> org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> 
org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80) >> >at >> > >> org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78) >> >at >> > >> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211) >> >at org.apache.solr.core.SolrCore.execute(SolrCore.java:2596) >> >at >> > org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:799) >> >at >> org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:578) >> >at >> > >> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419) >> >at >> > >> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351) >> >at >> > >> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602) >> >at >> > >> org.eclipse.jetty.servlet.ServletHandler.d
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
thanks Franke, I now made use of the default jetty-ssl.xml that comes with the solr package, but the issue is still happening when I try to push data to a non-leader node. Do you still think it's something to do with the configurations?

Thanks,

On Wed, Jun 3, 2020 at 12:29 AM Jörn Franke wrote:
> Why in the jetty-ssl.xml?
>
> Should this not be configured in the solr.in.sh?
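As a reference for the solr.in.sh route suggested above: in Solr 8.x the SSL settings can be kept in bin/solr.in.sh rather than edited into jetty-ssl.xml. This is a minimal sketch only; the keystore path and passwords are placeholders, not values from this thread.

```shell
# Sketch of SSL settings in bin/solr.in.sh (Solr 8.x).
# Path and passwords are placeholders -- substitute your own.
SOLR_SSL_ENABLED=true
SOLR_SSL_KEY_STORE=/opt/solr/server/etc/solr-keystore.jks
SOLR_SSL_KEY_STORE_PASSWORD=secret
SOLR_SSL_KEY_STORE_TYPE=JKS
# Same JKS file reused as the truststore, as described in this thread.
SOLR_SSL_TRUST_STORE=/opt/solr/server/etc/solr-keystore.jks
SOLR_SSL_TRUST_STORE_PASSWORD=secret
SOLR_SSL_TRUST_STORE_TYPE=JKS
# Client-auth settings (defaults shown).
SOLR_SSL_NEED_CLIENT_AUTH=false
SOLR_SSL_WANT_CLIENT_AUTH=false
```

With these set, bin/solr passes the keystore/truststore to Jetty at startup, so the stock jetty-ssl.xml can stay unmodified.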
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
Also forgot to mention earlier that I have enabled basic authentication, provided the details in security.json, and uploaded it via zookeeper.

Thanks,
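For reference on the basic-auth setup mentioned above, uploading a security.json through the bundled zkcli script looks roughly like this. The credentials hash is the stock solr/SolrRocks example from the Solr Reference Guide, and the ZooKeeper host is a placeholder, not the poster's actual values.

```shell
# Sketch: write a minimal security.json enabling BasicAuth and upload it to
# ZooKeeper. Hash below is the documented solr/SolrRocks example; zk1:2181
# is a placeholder host.
cat > /tmp/security.json <<'EOF'
{
  "authentication": {
    "blockUnknown": true,
    "class": "solr.BasicAuthPlugin",
    "credentials": {
      "solr": "IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0= Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="
    }
  },
  "authorization": {
    "class": "solr.RuleBasedAuthorizationPlugin",
    "user-role": { "solr": "admin" },
    "permissions": [ { "name": "security-edit", "role": "admin" } ]
  }
}
EOF
server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
  -cmd putfile /security.json /tmp/security.json
```

Note that with BasicAuth enabled, node-to-node requests (such as a non-leader forwarding an update to the leader) are authenticated by the PKIAuthenticationPlugin, which is where the RSA error in this thread is being thrown.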
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
also I am seeing the below error as a parent one from solr.log

org.apache.solr.common.SolrException: javax.crypto.BadPaddingException: RSA private key operation failed
        at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:366) ~[solr-core-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d013 - ishan - 2020-01-10 1$
        at org.apache.solr.security.PKIAuthenticationPlugin.generateToken(PKIAuthenticationPlugin.java:305) ~[solr-core-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d0$
        at org.apache.solr.security.PKIAuthenticationPlugin.access$200(PKIAuthenticationPlugin.java:61) ~[solr-core-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d013 -$
        at org.apache.solr.security.PKIAuthenticationPlugin$2.onQueued(PKIAuthenticationPlugin.java:239) ~[solr-core-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d013 $
        at org.apache.solr.client.solrj.impl.Http2SolrClient.decorateRequest(Http2SolrClient.java:469) ~[solr-solrj-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d013 -$
        at org.apache.solr.client.solrj.impl.Http2SolrClient.initOutStream(Http2SolrClient.java:324) ~[solr-solrj-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d013 - i$
        at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.sendUpdateStream(ConcurrentUpdateHttp2SolrClient.java:227) ~[solr-solrj-8.4.1.jar:8.4.1 83$
        at org.apache.solr.client.solrj.impl.ConcurrentUpdateHttp2SolrClient$Runner.run(ConcurrentUpdateHttp2SolrClient.java:181) ~[solr-solrj-8.4.1.jar:8.4.1 832bf13dd918709$
        at com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:181) ~[metrics-core-4.0.5.jar:4.0.5]
        at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:210) ~[solr-solrj-8.4.1.jar:8.4.1 832bf13dd9187095831caf6978$
        at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor$$Lambda$142/.run(Unknown Source) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
        at java.lang.Thread.run(Thread.java:834) [?:?]
Caused by: javax.crypto.BadPaddingException: RSA private key operation failed
        at sun.security.rsa.NativeRSACore.crtCrypt_Native(NativeRSACore.java:149) ~[?:?]
        at sun.security.rsa.NativeRSACore.rsa(NativeRSACore.java:91) ~[?:?]
        at sun.security.rsa.RSACore.rsa(RSACore.java:149) ~[?:?]
        at com.sun.crypto.provider.RSACipher.doFinal(RSACipher.java:355) ~[?:?]
        at com.sun.crypto.provider.RSACipher.engineDoFinal(RSACipher.java:392) ~[?:?]
        at javax.crypto.Cipher.doFinal(Cipher.java:2260) ~[?:?]
        at org.apache.solr.util.CryptoKeys$RSAKeyPair.encrypt(CryptoKeys.java:364) ~[solr-core-8.4.1.jar:8.4.1 832bf13dd9187095831caf69783179d41059d013 - ishan - 2020-01-10 1$
        ... 13 more
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
Thanks Franke, but yes, for all these questions I did configure it properly. I made sure to include

<Set name="KeyStoreType"><Property name="solr.jetty.keystore.type" default="JKS"/></Set>
<Set name="TrustStoreType"><Property name="solr.jetty.truststore.type" default="JKS"/></Set>

in the jetty-ssl.xml along with the path to the keystore and truststore.

Also I have made sure that the truststore exists on all nodes, and I am using the same file for both keystore and truststore, as below:

<Set name="KeyStorePath"><Property name="solr.jetty.keystore" default="./etc/solr-keystore.jks"/></Set>
<Set name="KeyStorePassword"><Property name="solr.jetty.keystore.password" default=""/></Set>
<Set name="TrustStorePath"><Property name="solr.jetty.truststore" default="./etc/solr-keystore.jks"/></Set>
<Set name="TrustStorePassword"><Property name="solr.jetty.truststore.password" default=""/></Set>

Also, urlScheme for ZK is set to https.

The main error that I posted earlier is the one I am seeing as the return response, whereas the one below is what I see in the solr logs:

2020-06-02 22:32:04.472 ERROR (qtp984876512-93) [c:default s:shard1 r:core_node3 x:default_shard1_replica_n1] o.a.s.s.HttpSolrCall null:org.apache.solr.update.processor.Distr$
        at org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDistribFinish(DistributedZkUpdateProcessor.java:1189)
        at org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1096)
        at org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)
        at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:78)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:211)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:2596)
        at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:799)
        at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:578)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:419)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:351)
        at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1602)
        at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:540)
        at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
        at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
        at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
        at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)

One strange observation: when I hit the update api on the leader node it works without any error, and if I immediately hit a non-leader node it also works (but only once or twice). If I keep hitting that node again and again, it then throws the above error, and once the error starts happening it is consistent.

Please let me know if you need more information or if I am missing something else.

Thanks,

On Tue, Jun 2, 2020 at 4:59 PM Jörn Franke wrote:
> Have you looked in the logfiles?
>
> Keystore type correctly defined on all nodes?
>
> Have you configured the truststore on all nodes correctly?
>
> Have you set clusterprop urlScheme to https in ZK?
>
> https://lucene.apache.org/solr/guide/7_5/enabling-ssl.html#configure-zookeeper
>
> > Am 02.06.2020 um 18:57 schrieb yaswanth kumar :
> >
> > team, can someone help me on the above topic?
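For anyone checking the urlScheme item from the checklist above, the cluster property can be set (and re-applied) either through the Collections API or the bundled zkcli script, as in the Solr SSL guide. Hosts and credentials below are placeholders:

```shell
# Sketch: set the urlScheme cluster property to https.
# Via the Collections API on any live node (placeholder host/credentials):
curl -k -u solr:SolrRocks \
  'https://solr1:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https'

# Or directly against ZooKeeper with the bundled script (placeholder zkhost):
server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181 \
  -cmd clusterprop -name urlScheme -val https
```

If this property is missing or still http, nodes can end up advertising mixed schemes in ZooKeeper, which tends to surface exactly on forwarded (non-leader) requests.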
Re: solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
team, can someone help me on the above topic?

--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
solr 8.4.1 with ssl tls1.2 creating an issue with non-leader node
Trying to set up solr 8.4.1 + open jdk 11 on centos. I enabled the ssl configuration with all the certs in place, but the issue I am seeing is that when trying to hit the /update api on a non-leader solr node, it throws an error.

Configured 2 solr nodes with 1 zookeeper.

"metadata":[
  "error-class","org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException",
  "root-error-class","org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException"],
"msg":"Async exception during distributed update: javax.crypto.BadPaddingException: RSA private key operation failed",
"trace":"org.apache.solr.update.processor.DistributedUpdateProcessor$DistributedUpdatesAsyncException: Async exception during distributed update: javax.crypto.BadPaddingException: RSA private key operation failed\n\tat org.apache.solr.update.processor.DistributedZkUpdateProcessor.doDistribFinish(DistributedZkUpdateProcessor.java:1189)\n\tat org.apache.solr.update.processor.DistributedUpdateProcessor.finish(DistributedUpdateProcessor.java:1096)\n\tat org.apache.solr.update.processor.LogUpdateProcessorFactory$LogUpdateProcessor.finish(LogUpdateProcessorFactory.java:182)\n\tat org.apache.solr.update.processor.UpdateRequestProcessor.finish(UpdateRequestProcessor.java:80)\n\tat org.apache.solr.update.processor.UpdateRequestProcessor.finish

Strangely, this happens only when we hit a non-leader node; hitting the leader node works fine without any issue and the data gets indexed.

Not able to track down where the exact issue is happening.

Thanks,

--
Thanks & Regards,
Yaswanth Kumar Konathala.
yaswanth...@gmail.com
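One way to isolate the leader vs non-leader behavior described above is to send the identical update to each node directly and compare responses. A minimal sketch, assuming placeholder hosts, the collection name "default" from the log lines, and placeholder credentials:

```shell
# Sketch: post the same document to each node in turn and watch which one
# fails. solr1/solr2, the collection name, and credentials are placeholders.
DOC='[{"id":"ssl-test-1"}]'
for host in solr1:8983 solr2:8983; do
  echo "== $host =="
  curl -k -u solr:SolrRocks -H 'Content-Type: application/json' \
    "https://$host/solr/default/update?commit=true" -d "$DOC"
done
```

If only the non-leader fails, the error is almost certainly in the hop where that replica forwards the update to the leader (the PKI-authenticated internal request), rather than in the client-facing SSL listener itself.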