[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-8389:
-------------------------------
    Attachment: SOLR-8389.patch

> Convert CDCR peer cluster and other configurations into collection properties
> modifiable via APIs
> -----------------------------------------------------------------------------
>
>                 Key: SOLR-8389
>                 URL: https://issues.apache.org/jira/browse/SOLR-8389
>             Project: Solr
>          Issue Type: Improvement
>          Components: CDCR, SolrCloud
>            Reporter: Shalin Shekhar Mangar
>            Priority: Major
>         Attachments: SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, Screen Shot 2017-12-21 at 5.44.36 PM.png
>
>
> CDCR configuration is kept inside solrconfig.xml, which makes it difficult to add or change peer cluster configuration.
> I propose to move all CDCR config to collection-level properties in cluster state so that they can be modified using the existing Modify Collection API.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
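To make the proposal concrete, here is a sketch of the before/after shape. The solrconfig.xml fragment mirrors the documented CDCR replica syntax; the collection property name and MODIFYCOLLECTION payload are hypothetical, not taken from the attached patches:

```text
Today (solrconfig.xml, simplified):

  <requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
    <lst name="replica">
      <str name="zkHost">target-zk:2181</str>
      <str name="source">source_collection</str>
      <str name="target">target_collection</str>
    </lst>
  </requestHandler>

Proposed (collection property in cluster state, editable at runtime):

  /admin/collections?action=MODIFYCOLLECTION&collection=source_collection
      &property.cdcr.target.zkHost=target-zk:2181
```

The point of the move is that the second form requires no config reload or file edit: a peer cluster can be added or changed with a single API call.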
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-8389:
-------------------------------
    Attachment: SOLR-8389.patch
[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16333516#comment-16333516 ]

Amrit Sarkar commented on SOLR-8389:
------------------------------------
Meanwhile, while writing tests, the CDCR APIs have to move out of {{CdcrRequestHandler}} to the Collections API, where a collection reload is imminent. Need some advice on this.
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-8389:
-------------------------------
    Attachment: SOLR-8389.patch
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-8389:
-------------------------------
    Attachment: SOLR-8389.patch
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-8389:
-------------------------------
    Attachment: SOLR-8389.patch
[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16331057#comment-16331057 ]

Amrit Sarkar commented on SOLR-8389:
------------------------------------
Fully working, clean code. The entire CDCR feature is working perfectly with back-compat. I am finishing up documentation and tests for this.
[jira] [Updated] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-9272:
-------------------------------
    Attachment: SOLR-9272.patch

> Auto resolve zkHost for bin/solr zk for running Solr
> ----------------------------------------------------
>
>                 Key: SOLR-9272
>                 URL: https://issues.apache.org/jira/browse/SOLR-9272
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public (Default Security Level. Issues are Public)
>          Components: scripts and tools
>    Affects Versions: 6.2
>            Reporter: Jan Høydahl
>            Assignee: Jan Høydahl
>            Priority: Major
>              Labels: newdev
>         Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} when Solr is already running. We can optionally accept the {{-p}} parameter instead, and with that use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's easier to remember the Solr port than the zk string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}
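The resolution step the issue describes can be sketched in a few lines: fetch the status payload for the node on the given port and read the {{cloud/ZooKeeper}} property out of it. The payload shape below mirrors what the cloud-mode status output contains; treat the exact key names as an assumption, not a guarantee of the StatusTool format.

```python
import json

def resolve_zk_host(status_json: str) -> str:
    """Extract the ZooKeeper connect string from a Solr status payload.

    Assumes a top-level "cloud" object with a "ZooKeeper" entry, which is
    only present when the node runs in SolrCloud mode.
    """
    status = json.loads(status_json)
    cloud = status.get("cloud")
    if not cloud or "ZooKeeper" not in cloud:
        # Standalone node: there is no zkHost to resolve, so the caller
        # must still pass -z explicitly.
        raise ValueError("Solr is not running in cloud mode; pass -z explicitly")
    return cloud["ZooKeeper"]

# Example payload, trimmed to the relevant fields.
sample = '{"solr_home": "/var/solr", "cloud": {"ZooKeeper": "localhost:9983"}}'
print(resolve_zk_host(sample))  # localhost:9983
```

The real patch does this inside the bin/solr and solr.cmd scripts via StatusTool; the sketch only illustrates the lookup and the standalone-mode failure case.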
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16329385#comment-16329385 ]

Amrit Sarkar commented on SOLR-9272:
------------------------------------
Patch uploaded. I am not able to test the solr.cmd commands on a Windows machine, but I followed the existing conventions.
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328725#comment-16328725 ]

Amrit Sarkar commented on SOLR-9272:
------------------------------------
[~janhoy], thank you for the feedback, and yes, not elegant :) Sorry about the debug lines, my bad. I like defaulting to "-p 8983" when both -z and -p are not specified. I will improve and clean up the current patch, thank you.
[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328712#comment-16328712 ]

Amrit Sarkar commented on SOLR-7964:
------------------------------------
Implemented the same patch by [~arcadius] on trunk and uploaded it. All tests run successfully, verified via a beast round of 100.

> suggest.highlight=true does not work when using context filter query
> --------------------------------------------------------------------
>
>                 Key: SOLR-7964
>                 URL: https://issues.apache.org/jira/browse/SOLR-7964
>             Project: Solr
>          Issue Type: Improvement
>          Components: Suggester
>    Affects Versions: 5.4
>            Reporter: Arcadius Ahouansou
>            Priority: Minor
>              Labels: suggester
>         Attachments: SOLR-7964.patch, SOLR_7964.patch, SOLR_7964.patch
>
>
> When using the new suggester context filtering query param {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param {{suggest.highlight=true}} has no effect.
[jira] [Updated] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-7964:
-------------------------------
    Attachment: SOLR-7964.patch
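For reference, the bug surfaces on a request that combines the two parameters named in the issue. The collection, dictionary name, and context value below are illustrative, not from the patch:

```text
http://localhost:8983/solr/techproducts/suggest
    ?suggest=true
    &suggest.dictionary=mySuggester
    &suggest.q=mem
    &suggest.contextFilterQuery=ctx:electronics
    &suggest.highlight=true
```

With {{suggest.contextFilterQuery}} present, the suggestions come back unhighlighted even though {{suggest.highlight=true}} is set; dropping the context filter restores the {{<b>}} highlighting.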
[jira] [Commented] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16328620#comment-16328620 ]

Amrit Sarkar commented on SOLR-11712:
-------------------------------------
[~varunthacker],

As per our offline discussion, I tried optimising the tests as much as I could and moved the helper functions into a utils class. Since TestStreamErrorHandling needs more than one collection, the {{configureCluster}} method is overridden. Let me know if I overdid the optimisation.

> Streaming throws IndexOutOfBoundsException against an alias when a shard is down
> --------------------------------------------------------------------------------
>
>                 Key: SOLR-11712
>                 URL: https://issues.apache.org/jira/browse/SOLR-11712
>             Project: Solr
>          Issue Type: Bug
>      Security Level: Public (Default Security Level. Issues are Public)
>            Reporter: Varun Thacker
>            Assignee: Varun Thacker
>            Priority: Major
>         Attachments: SOLR-11712-with-fix.patch, SOLR-11712-without-fix.patch, SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch, SOLR-11712.patch
>
>
> I have an alias against multiple collections. If any one of the shards of the underlying collection is down, then the stream handler throws an IndexOutOfBoundsException
> {code}
> {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}}
> {code}
> From the Solr logs:
> {code}
> 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
>         at org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414)
>         at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305)
>         at org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51)
>         at org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535)
>         at org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83)
>         at org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547)
>         at org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193)
>         at org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209)
>         at org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325)
>         at org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120)
>         at org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71)
>         at org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65)
>         at org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>         at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>         at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>         at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>         at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>         at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>         at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>         at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>         at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>         at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>         at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>         at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>         at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>         at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>         at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>         at org.eclipse.jetty.server.Server.handle(Server.java:534)
>         at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>         at
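For context, the failure is reproducible with a stream expression as simple as the following, sent through an alias that spans multiple collections while one shard of an underlying collection is down (alias and field names are illustrative):

```text
curl --data-urlencode 'expr=search(myAlias, q="*:*", fl="id", sort="id asc")' \
    http://localhost:8983/solr/myAlias/stream
```

With the fix, the request is expected to fail with a descriptive error (or route around the down shard) instead of the bare {{IndexOutOfBoundsException}} from {{CloudSolrStream.constructStreams}}.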
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-11712:
--------------------------------
    Attachment: SOLR-11712.patch
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-11712:
--------------------------------
    Attachment: SOLR-11712.patch
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Amrit Sarkar updated SOLR-11712:
--------------------------------
    Attachment: (was: SOLR-11712.patch)
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11712: Attachment: SOLR-11712.patch
[jira] [Commented] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327848#comment-16327848 ] Amrit Sarkar commented on SOLR-11712: - [~varunthacker], I should have been clearer; sorry about the confusion. with-fix and without-fix are two different tests, both of which are supposed to pass every time. In {{with-fix}}, the test class validates that it receives a SolrException and the updated log error. In {{without-fix}}, the test class validates that it receives an IndexOutOfBoundsException and the old error. Since I am catching the error and not rethrowing it, there is no trace of the error in the logs.

{{with-fix}} patch, lines 176-182:
{code}
try {
  getTuples(pstream);
} catch (Exception e) {
  e.printStackTrace();
  assertTrue(e.getCause() instanceof SolrException);
  assertTrue(e.getMessage().contains("No active nodes for shard:"));
}
{code}

{{without-fix}} patch, lines 181-187:
{code}
try {
  getTuples(pstream);
} catch (Exception e) {
  assertFalse(e.getCause() instanceof SolrException);
  // TODO - important assertions
  assertFalse(e.getMessage().contains("No active nodes for shard:"));
  assertTrue(e.getCause() instanceof IndexOutOfBoundsException);
}
{code}

Also, this test ran for roughly 100 seconds on my system, but if it executes within a second on yours, I will upload a fresh patch without the nightly annotation.
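For context, here is a minimal standalone sketch of the kind of guard the with-fix patch asserts on. This is not the actual patch: the helper name, the exception type (IllegalStateException standing in for SolrException), and the replica list are illustrative.

```java
import java.util.Collections;
import java.util.List;

public class ShardGuardSketch {
    // Hypothetical helper standing in for replica selection inside
    // CloudSolrStream.constructStreams(): fail with a descriptive message
    // when a shard has no active replicas, rather than calling get(0)
    // blindly and letting it throw IndexOutOfBoundsException.
    static String pickReplica(String shard, List<String> activeReplicas) {
        if (activeReplicas.isEmpty()) {
            // Mirrors the "No active nodes for shard:" message that the
            // with-fix test asserts on.
            throw new IllegalStateException("No active nodes for shard: " + shard);
        }
        return activeReplicas.get(0);
    }

    public static void main(String[] args) {
        try {
            pickReplica("shard1", Collections.emptyList());
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
        System.out.println(pickReplica("shard2", List.of("http://node1/solr")));
    }
}
```

With the guard in place, the client sees a clear error naming the dead shard instead of an opaque Index: 0, Size: 0.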
[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11718: Attachment: SOLR-11718.patch > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar >Priority: Major > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718-v3.patch, SOLR-11718.patch, SOLR-11718.patch, > SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, per the current CDCR documentation page, buffering is "disabled" > by default on both source and target. We don't see any purpose served by CDCR > buffering, and it is quite an overhead: when enabled it can take a lot of heap > space (tlog pointers) and cause indefinite retention of tlogs on disk. > Also, today, even if we disable the buffer via the API on the source, if it was > enabled at startup, tlogs are never purged on the leader node of the source's > shards; refer to SOLR-11652 -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16327087#comment-16327087 ] Amrit Sarkar commented on SOLR-11718: - Patch attached addressing all the above points. Hope these tests are enough.
[jira] [Commented] (SOLR-10734) Multithreaded test/support for AtomicURP broken
[ https://issues.apache.org/jira/browse/SOLR-10734?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326871#comment-16326871 ] Amrit Sarkar commented on SOLR-10734: - Improvements listed in SOLR-11311. > Multithreaded test/support for AtomicURP broken > --- > > Key: SOLR-10734 > URL: https://issues.apache.org/jira/browse/SOLR-10734 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Ishan Chattopadhyaya >Assignee: Noble Paul >Priority: Major > Fix For: master (8.0), 7.3 > > Attachments: SOLR-10734.patch, SOLR-10734.patch, SOLR-10734.patch, > Screen Shot 2017-05-31 at 4.50.23 PM.png, log-snippet, testMaster_2500, > testResults7_10, testResultsMaster_10 > > > The multithreaded test doesn't actually start the threads, but only invokes > run directly. Hence, the join afterwards doesn't do anything. > {code} > diff --git > a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java > > b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java > index f3f833d..10b7770 100644 > --- > a/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java > +++ > b/solr/core/src/test/org/apache/solr/update/processor/AtomicUpdateProcessorFactoryTest.java > @@ -238,7 +238,7 @@ public class AtomicUpdateProcessorFactoryTest extends > SolrTestCaseJ4 { >} > } >}; > - t.run(); > + t.run(); // red flag, shouldn't this be t.start? >threads.add(t); >finalCount += index; //int_i > } > {code}
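The flagged line in the diff above, {{t.run()}}, executes the Runnable synchronously on the calling thread; only {{t.start()}} spawns a new thread, which is why the subsequent {{join()}} is a no-op. A standalone illustration of the difference:

```java
public class RunVsStart {
    public static void main(String[] args) throws InterruptedException {
        final String caller = Thread.currentThread().getName();
        final String[] ranOn = new String[2];

        Thread t1 = new Thread(() -> ranOn[0] = Thread.currentThread().getName());
        t1.run();   // does NOT start a thread: the Runnable runs inline on the caller

        Thread t2 = new Thread(() -> ranOn[1] = Thread.currentThread().getName());
        t2.start(); // actually spawns a new thread
        t2.join();  // join is meaningful only for a started thread

        System.out.println(ranOn[0].equals(caller)); // ran on the calling thread
        System.out.println(ranOn[1].equals(caller)); // ran on its own thread
    }
}
```

This is why the original test appeared to pass: all the "concurrent" updates actually ran sequentially on the test thread.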
[jira] [Commented] (SOLR-10866) Make TimestampUpdateProcessorFactory as Runtime URP; take params(s) with request
[ https://issues.apache.org/jira/browse/SOLR-10866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326870#comment-16326870 ] Amrit Sarkar commented on SOLR-10866: - Requesting feedback on this. I will add documentation changes too. > Make TimestampUpdateProcessorFactory as Runtime URP; take params(s) with > request > > > Key: SOLR-10866 > URL: https://issues.apache.org/jira/browse/SOLR-10866 > Project: Solr > Issue Type: Sub-task > Security Level: Public(Default Security Level. Issues are Public) > Components: update >Reporter: Amrit Sarkar >Priority: Minor > Attachments: SOLR-10866.patch, SOLR-10866.patch, SOLR-10866.patch, > SOLR-10866.patch, SOLR-10866.patch, SOLR-10866.patch > > > We are trying to get rid of processor definitions in SolrConfig for all URPs > and take parameters in the request itself. > TimestampUpdateProcessorFactory will be able to execute by sample curl like > below: > {code} > curl -X POST -H Content-Type: application/json > http://localhost:8983/solr/test/update/json/docs?processor=timestamp=timestamp_tdt=true > --data-binary { "id" : "1" , "title_s" : "titleA" } > {code} > Configuration for TimestampUpdateProcessorFactory in solrconfig.xml is > optional.
[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326644#comment-16326644 ] Amrit Sarkar commented on SOLR-11718: - [~varunthacker], bq. We should remove the checkpoint asserts that are currently commented out in CdcrRequestHandlerTest right? Right. We decided to remove the commented lines at the time of committing the patch. bq. How about this in the deprecation message in the docs "ENABLEBUFFER API has been deprecated. Solr now uses replication to catch up with the source if the target is down for an extended period of time." and the same message for DISABLEBUFFER as well Certainly more human. But a warning about enabling the buffer should be posted too, right? Maybe a common note rather than one in both the DISABLEBUFFER and ENABLEBUFFER APIs. bq. We should add a test which explicitly enabled buffering and does a few sanity tests. This feature still needs to be tested till it's removed Sure. We can keep a few of the old test cases in place with the buffer enabled. I will write the updated patch for the above three points if agreed upon.
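For reference, the buffer APIs under discussion are invoked through a collection's /cdcr endpoint. A small sketch that builds the request URLs (the host and collection name are placeholders); an HTTP GET against each URL would toggle buffering on a running cluster:

```java
public class CdcrBufferUrls {
    // Placeholder values; substitute your own Solr host and collection name.
    static final String SOLR = "http://localhost:8983/solr";
    static final String COLLECTION = "source_collection";

    // Builds the URL for a CDCR buffer action (ENABLEBUFFER or DISABLEBUFFER).
    static String bufferActionUrl(String action) {
        return SOLR + "/" + COLLECTION + "/cdcr?action=" + action;
    }

    public static void main(String[] args) {
        System.out.println(bufferActionUrl("DISABLEBUFFER"));
        System.out.println(bufferActionUrl("ENABLEBUFFER"));
    }
}
```

Deprecating these actions would leave the endpoint in place while steering users away from re-enabling the buffer and its tlog-retention cost.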
[jira] [Commented] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326642#comment-16326642 ] Amrit Sarkar commented on SOLR-11712: - [~varunthacker]: please note that {{StreamingAliasColTest}} is a Nightly test, and the with-fix patch successfully failed for me when the changes in TupleStream were removed. Regarding the stack trace, please see the entire stack trace below: {code} [junit4] 2> java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 [junit4] 2>at org.apache.solr.client.solrj.io.stream.ParallelStream.constructStreams(ParallelStream.java:276) [junit4] 2>at org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:274) [junit4] 2>at org.apache.solr.client.solrj.io.stream.eval.StreamingAliasColTest.getTuples(StreamingAliasColTest.java:216) [junit4] 2>at org.apache.solr.client.solrj.io.stream.eval.StreamingAliasColTest.testParallelUniqueStreamWithNoShards(StreamingAliasColTest.java:177) [junit4] 2>at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) [junit4] 2>at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) [junit4] 2>at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) [junit4] 2>at java.lang.reflect.Method.invoke(Method.java:498) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) [junit4] 2>at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49) [junit4]
2>at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) [junit4] 2>at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48) [junit4] 2>at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64) [junit4] 2>at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) [junit4] 2>at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368) [junit4] 2>at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817) [junit4] 2>at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879) [junit4] 2>at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57) [junit4] 2>at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) [junit4] 2>at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) [junit4] 2>at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) [junit4] 2>at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36) [junit4] 2>at
[jira] [Comment Edited] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326585#comment-16326585 ] Amrit Sarkar edited comment on SOLR-11712 at 1/15/18 9:35 PM: -- [~varunthacker], right. I wrote tests for one case, copied them over to StreamingTests, and never validated whether they fail without the patch. Sorry about that. Attached two patches, with-fix and without-fix, which validate the {{IndexOutOfBoundsException}} and its corresponding fix, with a new test class: {{StreamingAliasColTest}} was (Author: sarkaramr...@gmail.com): [~varunthacker], right. I wrote tests for once case and copied over to StreamingTests and never validated it fails without the patch or not. Sorry about that. Attached two patches: with-fix and without-fix which validate {{IndexOutOfBoundException}} and its corresponding fix, with new test class: {{StreamingAliasColTest}}
[jira] [Commented] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326585#comment-16326585 ] Amrit Sarkar commented on SOLR-11712: - [~varunthacker], right. I wrote tests for one case, copied them over to StreamingTests, and never validated whether they fail without the patch. Sorry about that. Attached two patches, with-fix and without-fix, which validate the {{IndexOutOfBoundsException}} and its corresponding fix, with a new test class: {{StreamingAliasColTest}}
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11712: Attachment: SOLR-11712-without-fix.patch SOLR-11712-with-fix.patch > Streaming throws IndexOutOfBoundsException against an alias when a shard is > down > > > Key: SOLR-11712 > URL: https://issues.apache.org/jira/browse/SOLR-11712 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-11712-with-fix.patch, SOLR-11712-without-fix.patch, > SOLR-11712.patch, SOLR-11712.patch > > > I have an alias against multiple collections. If any one of the shards the > underlying collection is down then the stream handler throws an > IndexOutOfBoundsException > {code} > {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: > Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}} > {code} > From the Solr logs: > {code} > 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 > r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream > java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414) > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305) > at > org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51) > at > org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535) > at > org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83) > at > org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547) > at > org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193) > at > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209) > at > 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325) > at > org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120) > at > org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at >
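The report above ({{CloudSolrStream.constructStreams}} throwing {{IndexOutOfBoundsException: Index: 0, Size: 0}}) is consistent with an unguarded {{get(0)}} on an empty replica list: with an alias spanning several collections, a shard whose replicas are all down contributes an empty list. A minimal sketch of the kind of guard the fix could add — this is illustrative code, not the actual CloudSolrStream implementation; the class and method names are invented:

```java
import java.io.IOException;
import java.util.Collections;
import java.util.List;

// Hypothetical sketch: picking a replica from a slice's active-replica list.
// If every replica of the slice is down the list is empty, and an unguarded
// get(0) throws IndexOutOfBoundsException; the guard surfaces a clearer
// IOException instead.
public class ReplicaPick {
    static String pickReplica(String slice, List<String> activeReplicas) throws IOException {
        if (activeReplicas.isEmpty()) {
            // Fail with an informative message rather than IndexOutOfBoundsException
            throw new IOException("Slice " + slice + " has no active replicas to stream from");
        }
        return activeReplicas.get(0);
    }

    public static void main(String[] args) throws IOException {
        // Healthy slice: a replica is returned
        System.out.println(pickReplica("shard1", List.of("core_node1")));
        // Downed slice: informative IOException instead of IndexOutOfBoundsException
        try {
            pickReplica("shard2", Collections.emptyList());
        } catch (IOException e) {
            System.out.println(e.getMessage());
        }
    }
}
```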
[jira] [Comment Edited] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326585#comment-16326585 ] Amrit Sarkar edited comment on SOLR-11712 at 1/15/18 9:35 PM: -- [~varunthacker], right. I wrote tests for one case, copied them over to StreamingTests, and never validated that they fail without the patch. Sorry about that. Attached two patches, with-fix and without-fix, which validate the {{IndexOutOfBoundsException}} and its corresponding fix, with a new test class: {{StreamingAliasColTest}} was (Author: sarkaramr...@gmail.com): [~varunthacker], right. I wrote tests for once case and copied over to StreamingTests and never validated it fails without the patch or not. Sorry about that. Attached two patches: with-fix and without-fix which validate {{IndexOutOfBoundException}} and its corresponding fix, with new test class: \{{StreamingAliasColTest}} > Streaming throws IndexOutOfBoundsException against an alias when a shard is > down > > > Key: SOLR-11712 > URL: https://issues.apache.org/jira/browse/SOLR-11712 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker >Assignee: Varun Thacker >Priority: Major > Attachments: SOLR-11712-with-fix.patch, SOLR-11712-without-fix.patch, > SOLR-11712.patch, SOLR-11712.patch > > > I have an alias against multiple collections. 
If any one of the shards the > underlying collection is down then the stream handler throws an > IndexOutOfBoundsException > {code} > {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: > Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}} > {code} > From the Solr logs: > {code} > 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 > r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream > java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414) > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305) > at > org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51) > at > org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535) > at > org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83) > at > org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547) > at > org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193) > at > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209) > at > org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325) > at > org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120) > at > org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at >
[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16326208#comment-16326208 ] Amrit Sarkar commented on SOLR-8389: Fresh patch uploaded with no tests and documentation to have clean code for review and feedback. > Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar >Priority: Major > Attachments: SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, > Screen Shot 2017-12-21 at 5.44.36 PM.png > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-8389: --- Attachment: SOLR-8389.patch > Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar >Priority: Major > Attachments: SOLR-8389.patch, SOLR-8389.patch, SOLR-8389.patch, > Screen Shot 2017-12-21 at 5.44.36 PM.png > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v7.6.3#76005) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7964) suggest.highlight=true does not work when using context filter query
[ https://issues.apache.org/jira/browse/SOLR-7964?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323981#comment-16323981 ] Amrit Sarkar commented on SOLR-7964: Checking in, is this still an issue in Solr 7.x versions? > suggest.highlight=true does not work when using context filter query > > > Key: SOLR-7964 > URL: https://issues.apache.org/jira/browse/SOLR-7964 > Project: Solr > Issue Type: Improvement > Components: Suggester >Affects Versions: 5.4 >Reporter: Arcadius Ahouansou >Priority: Minor > Labels: suggester > Attachments: SOLR_7964.patch, SOLR_7964.patch > > > When using the new suggester context filtering query param > {{suggest.contextFilterQuery}} introduced in SOLR-7888, the param > {{suggest.highlight=true}} has no effect. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11712: Attachment: SOLR-11712.patch [~varunthacker], as per your feedback. Test added. Beasts of 100 rounds passed. > Streaming throws IndexOutOfBoundsException against an alias when a shard is > down > > > Key: SOLR-11712 > URL: https://issues.apache.org/jira/browse/SOLR-11712 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker > Attachments: SOLR-11712.patch, SOLR-11712.patch > > > I have an alias against multiple collections. If any one of the shards the > underlying collection is down then the stream handler throws an > IndexOutOfBoundsException > {code} > {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: > Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}} > {code} > From the Solr logs: > {code} > 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 > r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream > java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414) > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305) > at > org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51) > at > org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535) > at > org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83) > at > org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547) > at > org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193) > at > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209) > at > 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325) > at > org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120) > at > org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) > at >
[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16323498#comment-16323498 ] Amrit Sarkar commented on SOLR-11718: - Modified patch with Varun's recommendation: {{SOLR-11718-v3.patch}}. Improved documentation and tests. There is one test method in {{CdcrReplicationHandlerTest}}::{{testReplicationWithBufferedUpdates}} which is failing at the moment as: {code} [beaster] [00:04:50.322] FAILURE 353s | CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates <<< [beaster]> Throwable #1: java.lang.AssertionError: There are still nodes recoverying - waited for 330 seconds [beaster]>at __randomizedtesting.SeedInfo.seed([25F2AEF0CD93CBA3:F6FBFEEE88005734]:0) [beaster]>at org.junit.Assert.fail(Assert.java:93) [beaster]>at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:185) [beaster]>at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140) [beaster]>at org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135) [beaster]>at org.apache.solr.cloud.cdcr.BaseCdcrDistributedZkTest.waitForRecoveriesToFinish(BaseCdcrDistributedZkTest.java:522) [beaster]>at org.apache.solr.cloud.cdcr.BaseCdcrDistributedZkTest.restartServer(BaseCdcrDistributedZkTest.java:563) [beaster]>at org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates(CdcrReplicationHandlerTest.java:228) {code} We test in this method that when leader is still receiving updates, follower if restarted will buffer the updates and then replay while recovering. In this scenario with buffering being disabled, the follower node is always on recovery and never becomes active as indexing never stops and follower is always behind X no of documents from leader. 
This is the typical situation where we wait for indexing to complete and then restart the follower, so that it fetches the index from the leader and becomes active. I am still writing a smarter test for this against the current design, but it seems this scenario is no longer valid. Looking forward to thoughts and recommendations. > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718-v3.patch, SOLR-11718.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
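The failure mode described in the comment above can be illustrated with a toy simulation — this is not Solr code, and all the numbers are invented; it only shows why the wait times out when the leader keeps indexing:

```java
// Toy simulation (not Solr code) of the scenario above: a follower leaves
// recovery only once it catches up to the leader, but while indexing
// continues the leader keeps moving ahead, so with buffering disabled the
// follower never becomes active and the recovery wait gives up.
public class RecoveryWait {
    /** Returns seconds waited before giving up, or -1 if the follower became active in time. */
    static int waitForRecovery(boolean indexingActive, int timeoutSeconds) {
        long leader = 1000, follower = 900; // follower restarts ~100 docs behind
        for (int s = 1; s <= timeoutSeconds; s++) {
            follower += 100;                    // follower replays updates
            if (indexingActive) leader += 100;  // leader keeps receiving updates
            if (follower >= leader) return -1;  // caught up -> active
        }
        return timeoutSeconds; // still recovering when the wait gave up
    }

    public static void main(String[] args) {
        System.out.println(waitForRecovery(true, 330));   // indexing never stops: times out at 330
        System.out.println(waitForRecovery(false, 330));  // indexing stopped first: becomes active (-1)
    }
}
```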
[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11718: Attachment: SOLR-11718-v3.patch > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718-v3.patch, SOLR-11718.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322103#comment-16322103 ] Amrit Sarkar edited comment on SOLR-11718 at 1/11/18 12:02 PM: --- [~varunthacker] Thank you for the feedback. bq. 1. In CdcrRequestHandlerTest#testCheckpointActions why have the asserts been commented out? CDCR has a typical behavior when Buffering is enabled, LastProcessedVersion emits out {{-1}} which is picked as minimum for checkpoint, resulting -1 for all shards' Checkpoints. The assertion is checkpoint to -1, which won't be the case NOW as buffering is disabled by default and will emit rightful tlog_version. bq. 2. "Since the CdcrReplicationHandlerTest was failing, suggesting typical Index Replication will take place when followers are numRecordsToKeep count behind." - Maybe we should modify the test to assert document count instead of just commenting it out? Considering this is the default behavior of follower nodes when they fall back by {{numRecordsToKeep}} w.r.t leader, I didn't write them up. I will add those tests too in the same {{CdcrReplicationHandlerTest}}. bq. 3. I don't quite understand the doc changes - "ENABLEBUFFER API has been deprecated in favor of when buffering is enabled, the Update Logs will grow without limit; they will never be purged." Yeah, english. "ENABLEBUFFER API has been deprecated in favor of buffering is disabled by default. And when buffering is enabled, the Update Logs will grow without limit; they will never be purged". Is this better? makes sense? was (Author: sarkaramr...@gmail.com): [~varunthacker] Thank you for the feedback. bq. 1. In CdcrRequestHandlerTest#testCheckpointActions why have the asserts been commented out? CDCR has a typical behavior when Buffering is enabled, LastProcessedVersion emits out {{-1}} which is picked as minimum for checkpoint, resulting -1 for all shards' Checkpoints. 
The assertion is checkpoint to -1, which won't be the case NOW as buffering is disabled by default and will emit rightful tlog_version. bq. 2. "Since the CdcrReplicationHandlerTest was failing, suggesting typical Index Replication will take place when followers are numRecordsToKeep count behind." - Maybe we should modify the test to assert document count instead of just commenting it out? Considering this is the default behavior of follower nodes when they fall back by {{numRecordsToKeep}} w.r.t leader, I didn't write them up. I will add those tests too in the same {{CdcrReplicationHandlerTest}}. bq. 3. I don't quite understand the doc changes - "ENABLEBUFFER API has been deprecated in favor of when buffering is enabled, the Update Logs will grow without limit; they will never be purged." Yeah, english. ENABLEBUFFER API has been deprecated in favor of buffering is disabled by default. And when buffering is enabled, the Update Logs will grow without limit; they will never be purged. Is this better? makes sense? > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. 
> Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16322103#comment-16322103 ] Amrit Sarkar commented on SOLR-11718: - [~varunthacker] Thank you for the feedback. bq. 1. In CdcrRequestHandlerTest#testCheckpointActions why have the asserts been commented out? CDCR has a typical behavior when Buffering is enabled, LastProcessedVersion emits out {{-1}} which is picked as minimum for checkpoint, resulting -1 for all shards' Checkpoints. The assertion is checkpoint to -1, which won't be the case NOW as buffering is disabled by default and will emit rightful tlog_version. bq. 2. "Since the CdcrReplicationHandlerTest was failing, suggesting typical Index Replication will take place when followers are numRecordsToKeep count behind." - Maybe we should modify the test to assert document count instead of just commenting it out? Considering this is the default behavior of follower nodes when they fall back by {{numRecordsToKeep}} w.r.t leader, I didn't write them up. I will add those tests too in the same {{CdcrReplicationHandlerTest}}. bq. 3. I don't quite understand the doc changes - "ENABLEBUFFER API has been deprecated in favor of when buffering is enabled, the Update Logs will grow without limit; they will never be purged." Yeah, english. ENABLEBUFFER API has been deprecated in favor of buffering is disabled by default. And when buffering is enabled, the Update Logs will grow without limit; they will never be purged. Is this better? makes sense? > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. 
> Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
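The checkpoint behavior explained in the comment above (a buffering shard emits {{-1}} as its last processed version, which is then picked as the minimum) can be sketched as follows — an illustration of the described min-over-shards rule, not the actual CDCR implementation:

```java
import java.util.List;

// Illustration of the checkpoint rule described above: the collection
// checkpoint is the minimum of the shards' last-processed tlog versions,
// so a single buffering shard reporting -1 drags the whole checkpoint to -1.
public class CheckpointMin {
    static long collectionCheckpoint(List<Long> shardVersions) {
        return shardVersions.stream().min(Long::compare).orElse(-1L);
    }

    public static void main(String[] args) {
        // Buffering enabled on one shard: its -1 wins the minimum
        System.out.println(collectionCheckpoint(List.of(101L, -1L, 250L)));
        // Buffering disabled everywhere: a real tlog version is reported
        System.out.println(collectionCheckpoint(List.of(101L, 98L, 250L)));
    }
}
```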
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-8389: --- Attachment: SOLR-8389.patch Looking forward to feedback on my previous comment on the Modify API design. In the attached patch the Modify API is incomplete and half of the tests are written. > Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar > Attachments: SOLR-8389.patch, SOLR-8389.patch, Screen Shot 2017-12-21 > at 5.44.36 PM.png > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
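To make the proposal concrete, a hypothetical invocation — assuming the existing MODIFYCOLLECTION action were extended to accept CDCR properties; the property name and ZooKeeper address below are invented for illustration, not part of any released API:

```
# Hypothetical: set a CDCR peer cluster as a modifiable collection property
curl "http://localhost:8983/solr/admin/collections?action=MODIFYCOLLECTION&collection=source&cdcr.target.zkHost=target-zk:2181/solr"
```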
[jira] [Commented] (SOLR-11601) geodist fails for some fields when field is in parenthesis instead of sfield param
[ https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16320555#comment-16320555 ] Amrit Sarkar commented on SOLR-11601: - [~dsmiley], Sorry I missed your update. I would like your consensus: should we go after supporting a spatial field inside geodist(), or is improving the error message sufficient for now? I can work on either. > geodist fails for some fields when field is in parenthesis instead of sfield > param > -- > > Key: SOLR-11601 > URL: https://issues.apache.org/jira/browse/SOLR-11601 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: spatial >Affects Versions: 6.6 >Reporter: Clemens Wyss >Priority: Minor > > I'm switching my schemas from the deprecated solr.LatLonType to > solr.LatLonPointSpatialField. > Now my sortquery (which used to work with solr.LatLonType): > *sort=geodist(b4_location__geo_si,47.36667,8.55) asc* > raises the error > {color:red}*"sort param could not be parsed as a query, and is not a field > that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color} > Invoking sort using syntax > {color:#14892c}sfield=b4_location__geo_si=47.36667,8.55=geodist() asc > works as expected though...{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11712) Streaming throws IndexOutOfBoundsException against an alias when a shard is down
[ https://issues.apache.org/jira/browse/SOLR-11712?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11712: Attachment: SOLR-11712.patch Attached patch with improved exception log line. Let me know if we need to write test for this, we may need to create a new test class. > Streaming throws IndexOutOfBoundsException against an alias when a shard is > down > > > Key: SOLR-11712 > URL: https://issues.apache.org/jira/browse/SOLR-11712 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker > Attachments: SOLR-11712.patch > > > I have an alias against multiple collections. If any one of the shards the > underlying collection is down then the stream handler throws an > IndexOutOfBoundsException > {code} > {"result-set":{"docs":[{"EXCEPTION":"java.lang.IndexOutOfBoundsException: > Index: 0, Size: 0","EOF":true,"RESPONSE_TIME":11}]}} > {code} > From the Solr logs: > {code} > 2017-12-01 01:42:07.573 ERROR (qtp736709391-29) [c:collection s:shard1 > r:core_node13 x:collection_shard1_replica1] o.a.s.c.s.i.s.ExceptionStream > java.io.IOException: java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.constructStreams(CloudSolrStream.java:414) > at > org.apache.solr.client.solrj.io.stream.CloudSolrStream.open(CloudSolrStream.java:305) > at > org.apache.solr.client.solrj.io.stream.ExceptionStream.open(ExceptionStream.java:51) > at > org.apache.solr.handler.StreamHandler$TimerStream.open(StreamHandler.java:535) > at > org.apache.solr.client.solrj.io.stream.TupleStream.writeMap(TupleStream.java:83) > at > org.apache.solr.response.JSONWriter.writeMap(JSONResponseWriter.java:547) > at > org.apache.solr.response.TextResponseWriter.writeVal(TextResponseWriter.java:193) > at > org.apache.solr.response.JSONWriter.writeNamedListAsMapWithDups(JSONResponseWriter.java:209) > at > 
org.apache.solr.response.JSONWriter.writeNamedList(JSONResponseWriter.java:325) > at > org.apache.solr.response.JSONWriter.writeResponse(JSONResponseWriter.java:120) > at > org.apache.solr.response.JSONResponseWriter.write(JSONResponseWriter.java:71) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:65) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) > at > org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at > org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335) > at > 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134) > at org.eclipse.jetty.server.Server.handle(Server.java:534) > at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320) > at > org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251) > at > org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273) > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95) > at > org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93) >
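The stack trace above bottoms out in {{CloudSolrStream.constructStreams}}, where picking a replica from an empty list of active replicas surfaces as an opaque {{IndexOutOfBoundsException}}. A minimal sketch of the kind of guard the attached patch's "improved exception log line" implies — this is illustrative only, not the committed fix, and the function and field names are hypothetical:

```python
import random

def pick_replica(collection, slice_name, replicas):
    """Return one active replica for a slice, failing descriptively if none."""
    active = [r for r in replicas if r["state"] == "active"]
    if not active:
        # Without this guard, taking element 0 of an empty shuffled list
        # throws an opaque IndexOutOfBoundsException.
        raise IOError(
            f"Collection {collection}, slice {slice_name} has no active "
            "replicas to query; check that all shards are up"
        )
    random.shuffle(active)  # spread load across active replicas
    return active[0]
```

With a descriptive message, the client sees which collection and shard are down instead of `Index: 0, Size: 0`.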
[jira] [Updated] (SOLR-11836) Use -1 in bucketSizeLimit to get all facets, analogous to the JSON facet API
[ https://issues.apache.org/jira/browse/SOLR-11836?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11836: Attachment: SOLR-11836.patch [~amunoz], Correct, the behavior should be analogous. I have attached a small patch which does the same. Tests and documentation are updated. [~joel.bernstein], requesting your feedback and thoughts. > Use -1 in bucketSizeLimit to get all facets, analogous to the JSON facet API > > > Key: SOLR-11836 > URL: https://issues.apache.org/jira/browse/SOLR-11836 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 7.2 >Reporter: Alfonso Muñoz-Pomer Fuentes > Labels: facet, streaming > Attachments: SOLR-11836.patch > > > Currently, to retrieve all buckets using the streaming expressions facet > function, the {{bucketSizeLimit}} parameter must have a high enough value so > that all results will be included. Compare this with the JSON facet API, > where you can use {{"limit": -1}} to achieve this. It would help if such a > possibility existed. > [Issue 11236|https://issues.apache.org/jira/browse/SOLR-11236] is related. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
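The proposed semantics can be sketched as follows — an assumed illustration of how a {{bucketSizeLimit}} of -1 would resolve, mirroring the JSON facet API's {{"limit": -1}}, not the actual patch code:

```python
def effective_bucket_limit(bucket_size_limit, total_buckets):
    """Resolve how many facet buckets to return for a streaming facet()."""
    if bucket_size_limit == -1:
        # -1 means "all buckets", analogous to "limit": -1 in JSON facets.
        return total_buckets
    if bucket_size_limit <= 0:
        raise ValueError("bucketSizeLimit must be positive or -1")
    return min(bucket_size_limit, total_buckets)
```

This removes the need to guess a "high enough" value when the bucket count is unknown up front.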
[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16319996#comment-16319996 ] Amrit Sarkar commented on SOLR-9272: [~janhoy], I have fixed the SSL issue by retrying whenever the HTTP scheme fails and SSL is configured. Attached the updated patch. Requesting feedback and comments. > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
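The resolution flow described above — ask the running node for its ZooKeeper connect string, falling back from http to https when the plain-HTTP attempt fails and SSL is configured — can be sketched as below. This is a hypothetical illustration; the endpoint path and the shape of the status payload are assumptions, not the StatusTool's actual contract:

```python
def resolve_zk_host(port, fetch_status, ssl_configured):
    """fetch_status(url) returns the node's status dict, or raises on failure."""
    url = f"http://localhost:{port}/solr/admin/info/system"
    try:
        status = fetch_status(url)
    except OSError:
        if not ssl_configured:
            raise
        # Plain HTTP failed but SSL is on: retry the same call over https.
        status = fetch_status(url.replace("http://", "https://", 1))
    return status["cloud"]["ZooKeeper"]
```

The caller then only needs to remember the Solr port (`-p 9090`), never the zk string.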
[jira] [Updated] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr
[ https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-9272: --- Attachment: SOLR-9272.patch > Auto resolve zkHost for bin/solr zk for running Solr > > > Key: SOLR-9272 > URL: https://issues.apache.org/jira/browse/SOLR-9272 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: scripts and tools >Affects Versions: 6.2 >Reporter: Jan Høydahl >Assignee: Jan Høydahl > Labels: newdev > Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, > SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch > > > Spinoff from SOLR-9194: > We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already > running. We can optionally accept the {{-p}} parameter instead, and with that > use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's > easier to remember solr port than zk string. > Example: > {noformat} > bin/solr start -c -p 9090 > bin/solr zk ls / -p 9090 > {noformat} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16316288#comment-16316288 ] Amrit Sarkar commented on SOLR-11598: - [~aroopganguly], Do you have some metrics and numbers on Export writer with more than 4 sort fields. We are looking forward to them. > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch > > > I am a user of Streaming and I am currently trying to use rollups on an 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at > org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185) > at > org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141) > at > org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213) > at > org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119) >
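The report's point that the cap of 4 is arbitrary can be illustrated simply: a sort over n fields is a lexicographic comparison over n keys, so nothing in the comparison itself forces n ≤ 4 — the real constraint is performance, as the description caveats. A sketch under that framing (field names made up, not ExportWriter's actual implementation):

```python
def sort_docs(docs, sort_spec):
    """Sort docs by an arbitrary-length sort_spec of (field, 'asc'|'desc') pairs.

    Relies on Python's stable sort: sorting by the least-significant key
    first, then each more-significant key, yields the lexicographic order.
    """
    for field, direction in reversed(sort_spec):
        docs.sort(key=lambda d: d[field], reverse=(direction == "desc"))
    return docs
```

Ten sort dimensions for a rollup is just a ten-entry `sort_spec`; the cost grows with the number of keys compared, not with any hard structural limit.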
[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16316209#comment-16316209 ] Amrit Sarkar commented on SOLR-11718: - PFA the updated SOLR-11718.patch, which takes care of the above {{CdcrReplicationHandlerTest}} failures by simply removing those tests. The tests written for {{CdcrReplicationHandlerTest}} are specific to the "source" cluster and verify that the {{tlogs}} get copied over to follower nodes from the leader when followers fail in between indexing, commits, etc. {{CdcrReplicationHandlerTest}} was failing because, with the changed behavior, typical Index Replication takes place when followers are numRecordsToKeep updates behind. The "target" cluster has nothing to do with it and is never referenced. [~varunthacker], this is ready to ship, with appropriate comments and documentation intact. > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11718: Attachment: SOLR-11718.patch > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312936#comment-16312936 ] Amrit Sarkar edited comment on SOLR-11718 at 1/8/18 12:17 PM: -- Patch attached with documentation changes but {{CdcrReplicationHandlerTest}} is failing due to changed behavior of disabling Buffer permanently. 7 tests failed. FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates FAILED: junit.framework.TestSuite.org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest FAILED: junit.framework.TestSuite.org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplication FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testFullReplication FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplicationAfterPeerSync FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplicationWithTruncatedTlog Looking into studying and implement what should be intended behavior now. was (Author: sarkaramr...@gmail.com): Patch attached with documentation changes but {{CdcrReplicationHandlerTest}} is failing due to changed behavior of disabling Tlogs permanently. 7 tests failed. 
FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates FAILED: junit.framework.TestSuite.org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest FAILED: junit.framework.TestSuite.org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplication FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testFullReplication FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplicationAfterPeerSync FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplicationWithTruncatedTlog Looking into studying and implement what should be intended behavior now. > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11718: Attachment: (was: SOLR-11652.patch) > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-9675) Sorting on field in JSON Facet API which is not part of JSON Facet.
[ https://issues.apache.org/jira/browse/SOLR-9675?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar resolved SOLR-9675. Resolution: Not A Problem > Sorting on field in JSON Facet API which is not part of JSON Facet. > --- > > Key: SOLR-9675 > URL: https://issues.apache.org/jira/browse/SOLR-9675 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar >Priority: Minor > > Here's a sample example: > There is a requirement to facet on a particular field but sort on another > field which is not part of json facet. > For example, consider schema with fields : sl1, sl2, product_bkgs, gc_2 > Solr query & facet : q=sl1 : ("abc") AND sl2 : ("xyz")=sl1 desc=0 > & json.facet={ > "group_column_level" : > { > "type" : "terms", > "field" : "gc_2", > "offset" : 0, > "limit" :25, > "sort" : { "product_bkgs" : "desc"}, > "facet" : > { > "product_bkgs" :"sum(product_bkgs)" > } > } > } > Sort on product_bkgs is possible but not on sl1 in the facet. > Let me know if anything can be done to achieve the same. > Thanks in advance. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Resolved] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
[ https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar resolved SOLR-11652. - Resolution: Later > Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is > disabled from CDCR API > > > Key: SOLR-11652 > URL: https://issues.apache.org/jira/browse/SOLR-11652 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar > Attachments: SOLR-11652.patch > > > Cdcr transactions logs doesn't get purged on leader EVER when Buffer DISABLED > from CDCR API. > Steps to reproduce: > 1. Setup source and target collection cluster and START CDCR, BUFFER ENABLED. > 2. Index bunch of documents into source; make sure we have generated tlogs in > decent numbers (>20) > 3. Disable BUFFER via API on source and keep on indexing > 4. Tlogs starts to get purges on follower nodes of Source, but Leader keeps > on accumulating ever. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection
[ https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16313013#comment-16313013 ] Amrit Sarkar commented on SOLR-11671: - This will be part of SOLR-8389. > CdcrUpdateLog should be enabled smartly for Cdcr configured collection > -- > > Key: SOLR-11671 > URL: https://issues.apache.org/jira/browse/SOLR-11671 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Amrit Sarkar > Attachments: SOLR-11671.patch > > > {{CdcrUpdateLog}} should be configured smartly by itself when collection > config has *CDCR Request Handler* specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
[ https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11724: Description: Please find the discussion on: http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html If we index significant documents in to Source, stop indexing and then start CDCR; bootstrapping only copies the index to leader node of shards of the collection, and followers never receive the documents / index until and unless atleast one document is inserted again on source; which propels to target and target collection trigger index replication to followers. This behavior needs to be addressed in proper manner, either at target collection or while bootstrapping. was: Please find the discussion on: http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html If we index significant documents in to Source, stop indexing and then start CDCR; bootstrapping only copies the index to leader node of shards of the collection, and followers never receive the documents / index until and unless atleast document is inserted again on source; which propels to target and target collection trigger index replication to followers. This behavior needs to be addressed in proper manner, either at target collection or while bootstrapping. > Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target > - > > Key: SOLR-11724 > URL: https://issues.apache.org/jira/browse/SOLR-11724 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. 
Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > > Please find the discussion on: > http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html > If we index significant documents in to Source, stop indexing and then start > CDCR; bootstrapping only copies the index to leader node of shards of the > collection, and followers never receive the documents / index until and > unless atleast one document is inserted again on source; which propels to > target and target collection trigger index replication to followers. > This behavior needs to be addressed in proper manner, either at target > collection or while bootstrapping. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
[ https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312939#comment-16312939 ] Amrit Sarkar commented on SOLR-11724: - [~shalinmangar]: checking in again to understand if the above is intended behavior and we can trigger Index Replication by follower nodes in Target collection once BS is done. > Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target > - > > Key: SOLR-11724 > URL: https://issues.apache.org/jira/browse/SOLR-11724 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > > Please find the discussion on: > http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html > If we index significant documents in to Source, stop indexing and then start > CDCR; bootstrapping only copies the index to leader node of shards of the > collection, and followers never receive the documents / index until and > unless atleast document is inserted again on source; which propels to target > and target collection trigger index replication to followers. > This behavior needs to be addressed in proper manner, either at target > collection or while bootstrapping. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11718: Attachment: SOLR-11718.patch > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11652.patch, SOLR-11718.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. > Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16312936#comment-16312936 ] Amrit Sarkar commented on SOLR-11718: - Patch attached with documentation changes but {{CdcrReplicationHandlerTest}} is failing due to changed behavior of disabling Tlogs permanently. 7 tests failed. FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testReplicationWithBufferedUpdates FAILED: junit.framework.TestSuite.org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest FAILED: junit.framework.TestSuite.org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplication FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testFullReplication FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplicationAfterPeerSync FAILED: org.apache.solr.cloud.cdcr.CdcrReplicationHandlerTest.testPartialReplicationWithTruncatedTlog Looking into studying and implement what should be intended behavior now. > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: master (8.0), 7.3 > > Attachments: SOLR-11652.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we see the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering and it is quite an overhead considering it can take a lot heap > space (tlogs ptr) and forever retention of tlogs on the disk when enabled. 
> Also today, even if we disable buffer from API on source , considering it was > enabled at startup, tlogs are never purged on leader node of shards of > source, refer jira: SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16304519#comment-16304519 ] Amrit Sarkar commented on SOLR-8389: I would need some advice on designing the new APIs for CDCR specially adding / modifying target configs. this is sample API I have designed for now which is very ineffective: {{/cdcr/action=MODIFY=zkhost:zkpost/chroot=targetColName}} and add them sequentially to add all the target collection information ONE BY ONE. *Should I configure JSON payload as request or use V2 API to pass multiple target configs at once?* I know this configuration will be passed just one, and wouldn't hurt if target configs be passed one by one. Looking forward to suggestions, I am still cleaning code to support this and then will start modifying tests around all components. > Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar > Attachments: SOLR-8389.patch, Screen Shot 2017-12-21 at 5.44.36 PM.png > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-8389: --- Attachment: Screen Shot 2017-12-21 at 5.44.36 PM.png > Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar > Attachments: SOLR-8389.patch, Screen Shot 2017-12-21 at 5.44.36 PM.png > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16299924#comment-16299924 ] Amrit Sarkar commented on SOLR-8389: I started working on extending the patch and collaborated with CDCR module, but was pulled away to professional duties and was not able to work further. I wrote a very rough patch, cleaning it now and should be able to get it ready soon. *PFA screenshot* of how {{cdcr.json}} looks like and sample CREATE collection command: {code} http://localhost:8983/solr/admin/collections?action=CREATE=source_col=1=1=true=source_col=target_col=localhost:8574 {code} [~prusko], I have extended your code to support nested json formatted properties like below: {code} { "replica":[{ "source":"source_col", "zkHost":"localhost:8574", "target":"target_col"}], "replicator":{ "schedule":1000, "threadPoolSize":2, "batchSize":128}, "buffer":{"defaultState":"disabled"}, "updateLogSynchronizer":{"schedule":6}} {code} In this example, not a single cdcr property mention is required in {{solrconfig.xml}} and using default configuration, a very significant and long time improvement coming. [~erickerickson], I think I will be able to incorporate the target collection being optional and use the same source collection name as target if not specified quite easily. [~prusko] [~erickerickson] [~varunthacker] [~shalinmangar], I will be posting the updated patch real soon, hopefully before the year end and will be looking forward to your feedback and comments. 
> Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar > Attachments: SOLR-8389.patch, Screen Shot 2017-12-21 at 5.44.36 PM.png > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278528#comment-16278528 ] Amrit Sarkar commented on SOLR-11412: - Thank you [~ctargett] for curating and committing. > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Cassandra Targett > Fix For: 7.2, master (8.0) > > Attachments: CDCR_bidir.png, SOLR-11412-split.patch, > SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch > > > Since SOLR-11003: Bi-directional CDCR scenario support is reaching its > conclusion, the relevant changes in documentation need to be made. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
[ https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16278527#comment-16278527 ] Amrit Sarkar commented on SOLR-11724: - [~shalinmangar] wanted to check with you whether this is the intended behavior. > Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target > - > > Key: SOLR-11724 > URL: https://issues.apache.org/jira/browse/SOLR-11724 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > > Please find the discussion on: > http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html > If we index a significant number of documents into Source, stop indexing and then start > CDCR, bootstrapping only copies the index to the leader nodes of the collection's > shards; the followers never receive the documents / index until at least one > document is inserted again on source, which propagates to target and makes the > target collection trigger index replication to the followers. > This behavior needs to be addressed in a proper manner, either at the target > collection or while bootstrapping. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
Amrit Sarkar created SOLR-11724: --- Summary: Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target Key: SOLR-11724 URL: https://issues.apache.org/jira/browse/SOLR-11724 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Components: CDCR Affects Versions: 7.1 Reporter: Amrit Sarkar Please find the discussion on: http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html If we index a significant number of documents into Source, stop indexing and then start CDCR, bootstrapping only copies the index to the leader nodes of the collection's shards; the followers never receive the documents / index until at least one document is inserted again on source, which propagates to target and makes the target collection trigger index replication to the followers. This behavior needs to be addressed in a proper manner, either at the target collection or while bootstrapping. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs
[ https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11718: Attachment: SOLR-11652.patch Please note that in the patch I have commented out the relevant code from the module; I can remove it completely if that is how deprecation of APIs is done. > Deprecate CDCR Buffer APIs > -- > > Key: SOLR-11718 > URL: https://issues.apache.org/jira/browse/SOLR-11718 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.1 >Reporter: Amrit Sarkar > Fix For: 7.2 > > Attachments: SOLR-11652.patch > > > Kindly see the discussion on SOLR-11652. > Today, if we look at the current CDCR documentation page, buffering is "disabled" > by default in both source and target. We don't see any purpose served by Cdcr > buffering, and it is quite an overhead considering it can take a lot of heap > space (tlog pointers) and retain tlogs on disk forever when enabled. > Also today, even if we disable buffer via the API on source, considering it was > enabled at startup, tlogs are never purged on the leader nodes of the source's > shards; refer to SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11718) Deprecate CDCR Buffer APIs
Amrit Sarkar created SOLR-11718: --- Summary: Deprecate CDCR Buffer APIs Key: SOLR-11718 URL: https://issues.apache.org/jira/browse/SOLR-11718 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: CDCR Affects Versions: 7.1 Reporter: Amrit Sarkar Fix For: 7.2 Kindly see the discussion on SOLR-11652. Today, if we look at the current CDCR documentation page, buffering is "disabled" by default in both source and target. We don't see any purpose served by Cdcr buffering, and it is quite an overhead considering it can take a lot of heap space (tlog pointers) and retain tlogs on disk forever when enabled. Also today, even if we disable buffer via the API on source, considering it was enabled at startup, tlogs are never purged on the leader nodes of the source's shards; refer to SOLR-11652 -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11676) nrt replicas is always 1 when not specified
[ https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272856#comment-16272856 ] Amrit Sarkar edited comment on SOLR-11676 at 11/30/17 4:07 PM: --- Figured it out. Patch attached; verified it's working. {{ClusterStateTest}} is very poorly written in terms of verifying the actual collection properties passed. {code} modified: solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java modified: solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java {code} If we decide to write tests for the same, it will be a tad difficult. was (Author: sarkaramr...@gmail.com): Figured it out. Patch attached; verified it's working. {{ClusterStateTest}} is very poorly written in terms of verifying the actual collection properties passed. {code} modified: solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java modified: solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java {code} > nrt replicas is always 1 when not specified > --- > > Key: SOLR-11676 > URL: https://issues.apache.org/jira/browse/SOLR-11676 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker > Attachments: SOLR-11676.patch > > > I created a 2 shard X 2 replica collection. Here's the log entry for it > {code} > 2017-11-27 06:43:47.071 INFO (qtp159259014-22) [ ] > o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params > replicationFactor=2=compositeId=_default=2=test_recovery=compositeId=CREATE=2=json&_=1511764995711 > and sendToOCPQueue=true > {code} > And then when I look at the state.json file I see nrtReplicas is set to 1. > Any combination of numShards and replicationFactor without explicitly > specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of > using the replicationFactor value > {code} > {"test_recovery":{ > "pullReplicas":"0", > "replicationFactor":"2", > ... 
> "nrtReplicas":"1", > "tlogReplicas":"0", > .. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11676) nrt replicas is always 1 when not specified
[ https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11676: Attachment: SOLR-11676.patch > nrt replicas is always 1 when not specified > --- > > Key: SOLR-11676 > URL: https://issues.apache.org/jira/browse/SOLR-11676 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker > Attachments: SOLR-11676.patch > > > I created 1 2 shard X 2 replica collection . Here's the log entry for it > {code} > 2017-11-27 06:43:47.071 INFO (qtp159259014-22) [ ] > o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params > replicationFactor=2=compositeId=_default=2=test_recovery=compositeId=CREATE=2=json&_=1511764995711 > and sendToOCPQueue=true > {code} > And then when I look at the state.json file I see nrtReplicas is set to 1. > Any combination of numShards and replicationFactor without explicitly > specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of > using the replicationFactor value > {code} > {"test_recovery":{ > "pullReplicas":"0", > "replicationFactor":"2", > ... > "nrtReplicas":"1", > "tlogReplicas":"0", > .. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified
[ https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272856#comment-16272856 ] Amrit Sarkar commented on SOLR-11676: - Figured it out. Patch attached; verified it's working. {{ClusterStateTest}} is very poorly written in terms of verifying the actual collection properties passed. {code} modified: solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java modified: solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java {code} > nrt replicas is always 1 when not specified > --- > > Key: SOLR-11676 > URL: https://issues.apache.org/jira/browse/SOLR-11676 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker > > I created a 2 shard X 2 replica collection. Here's the log entry for it > {code} > 2017-11-27 06:43:47.071 INFO (qtp159259014-22) [ ] > o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params > replicationFactor=2=compositeId=_default=2=test_recovery=compositeId=CREATE=2=json&_=1511764995711 > and sendToOCPQueue=true > {code} > And then when I look at the state.json file I see nrtReplicas is set to 1. > Any combination of numShards and replicationFactor without explicitly > specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of > using the replicationFactor value > {code} > {"test_recovery":{ > "pullReplicas":"0", > "replicationFactor":"2", > ... > "nrtReplicas":"1", > "tlogReplicas":"0", > .. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified
[ https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272773#comment-16272773 ] Amrit Sarkar commented on SOLR-11676: - Varun, I see what you are saying. {{CreateCollectionCmd}}: {code} int numNrtReplicas = message.getInt(NRT_REPLICAS, message.getInt(REPLICATION_FACTOR, numTlogReplicas>0?0:1)); {code} But this code suggests that it should pick up {{replicationFactor}}. I will attach a debugger and test. > nrt replicas is always 1 when not specified > --- > > Key: SOLR-11676 > URL: https://issues.apache.org/jira/browse/SOLR-11676 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Varun Thacker > > I created a 2 shard X 2 replica collection. Here's the log entry for it > {code} > 2017-11-27 06:43:47.071 INFO (qtp159259014-22) [ ] > o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params > replicationFactor=2=compositeId=_default=2=test_recovery=compositeId=CREATE=2=json&_=1511764995711 > and sendToOCPQueue=true > {code} > And then when I look at the state.json file I see nrtReplicas is set to 1. > Any combination of numShards and replicationFactor without explicitly > specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of > using the replicationFactor value > {code} > {"test_recovery":{ > "pullReplicas":"0", > "replicationFactor":"2", > ... > "nrtReplicas":"1", > "tlogReplicas":"0", > .. > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
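Read in isolation, the quoted default-chaining should indeed behave as expected. A tiny self-contained sketch of the same fallback logic (plain string maps standing in for Solr's message properties, not Solr code) shows {{replicationFactor}} winning whenever it is present in the message, which suggests the reported bug loses the parameter before this line is reached:

```java
import java.util.HashMap;
import java.util.Map;

public class NrtReplicasDefaulting {
    // Mimics a getInt(key, default) lookup: parsed value if present, else the default.
    static int getInt(Map<String, String> message, String key, int def) {
        String v = message.get(key);
        return v == null ? def : Integer.parseInt(v);
    }

    // Same chaining shape as the quoted line from CreateCollectionCmd:
    // nrtReplicas -> replicationFactor -> (tlogReplicas > 0 ? 0 : 1).
    static int computeNrtReplicas(Map<String, String> message) {
        int numTlogReplicas = getInt(message, "tlogReplicas", 0);
        return getInt(message, "nrtReplicas",
                getInt(message, "replicationFactor", numTlogReplicas > 0 ? 0 : 1));
    }

    public static void main(String[] args) {
        Map<String, String> withRf = new HashMap<>();
        withRf.put("replicationFactor", "2");
        System.out.println(computeNrtReplicas(withRf));          // 2: falls back to replicationFactor
        System.out.println(computeNrtReplicas(new HashMap<>())); // 1: final default
    }
}
```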
[jira] [Commented] (SOLR-11705) Java Class Cast Exception while loading custom plugin
[ https://issues.apache.org/jira/browse/SOLR-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16272310#comment-16272310 ] Amrit Sarkar commented on SOLR-11705: - Details? > Java Class Cast Exception while loading custom plugin > - > > Key: SOLR-11705 > URL: https://issues.apache.org/jira/browse/SOLR-11705 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: clients - java >Affects Versions: 7.1 >Reporter: As Ma > -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
[ https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11652: Attachment: SOLR-11652.patch Please note that in the patch I have commented out the relevant code from the module; I can remove it completely if that is how deprecation of APIs is done. > Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is > disabled from CDCR API > > > Key: SOLR-11652 > URL: https://issues.apache.org/jira/browse/SOLR-11652 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar > Attachments: SOLR-11652.patch > > > Cdcr transaction logs never get purged on the leader when Buffer is DISABLED > from the CDCR API. > Steps to reproduce: > 1. Setup source and target collection cluster and START CDCR, BUFFER ENABLED. > 2. Index a bunch of documents into source; make sure we have generated tlogs in > decent numbers (>20) > 3. Disable BUFFER via API on source and keep on indexing > 4. Tlogs start to get purged on follower nodes of Source, but the Leader keeps > accumulating them forever. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection
[ https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11671: Attachment: SOLR-11671.patch > CdcrUpdateLog should be enabled smartly for Cdcr configured collection > -- > > Key: SOLR-11671 > URL: https://issues.apache.org/jira/browse/SOLR-11671 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Amrit Sarkar > Attachments: SOLR-11671.patch > > > {{CdcrUpdateLog}} should be configured smartly by itself when collection > config has *CDCR Request Handler* specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection
[ https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265240#comment-16265240 ] Amrit Sarkar commented on SOLR-11671: - Patch attached: {{UpdateHandler}} looks through all of its request handlers and, if it finds an implementation of {{CdcrRequestHandler}}, assigns a {{CdcrUpdateLog}} with the arguments passed in solrconfig.xml. I understand {{UpdateHandler}} is an abstract class, but the check-for-CDCR implementation would live right there. If the approach gets a +1, I will update the CDCR-related tests everywhere to match. > CdcrUpdateLog should be enabled smartly for Cdcr configured collection > -- > > Key: SOLR-11671 > URL: https://issues.apache.org/jira/browse/SOLR-11671 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Amrit Sarkar > > {{CdcrUpdateLog}} should be configured smartly by itself when collection > config has *CDCR Request Handler* specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
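The check described in the comment can be sketched as follows, with simplified stand-in types (none of these are Solr's real classes); the actual patch would perform this inspection inside {{UpdateHandler}} against the real handler registry:

```java
import java.util.List;

public class UpdateLogSelection {
    // Stand-in handler types for the sketch.
    interface RequestHandler {}
    static class SearchHandler implements RequestHandler {}
    static class CdcrRequestHandler implements RequestHandler {}

    // Choose the update-log implementation: CDCR-aware only when a CDCR
    // request handler is registered for the collection.
    static String selectUpdateLog(List<RequestHandler> handlers) {
        boolean cdcrConfigured = handlers.stream().anyMatch(h -> h instanceof CdcrRequestHandler);
        return cdcrConfigured ? "CdcrUpdateLog" : "UpdateLog";
    }

    public static void main(String[] args) {
        System.out.println(selectUpdateLog(List.of(new SearchHandler())));
        System.out.println(selectUpdateLog(List.of(new SearchHandler(), new CdcrRequestHandler())));
    }
}
```

With this shape, no explicit {{<updateLog class="solr.CdcrUpdateLog">}} entry would be needed; the handler's presence alone drives the choice.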
[jira] [Updated] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection
[ https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11671: Description: {{CdcrUpdateLog}} should be configured smartly by itself when collection config has *CDCR Request Handler* specified. (was: {{CdcrUpdateLog}} should be configured smartly by itself collection config has CDCR Request Handler specified.) > CdcrUpdateLog should be enabled smartly for Cdcr configured collection > -- > > Key: SOLR-11671 > URL: https://issues.apache.org/jira/browse/SOLR-11671 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR >Affects Versions: 7.2 >Reporter: Amrit Sarkar > > {{CdcrUpdateLog}} should be configured smartly by itself when collection > config has *CDCR Request Handler* specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection
Amrit Sarkar created SOLR-11671: --- Summary: CdcrUpdateLog should be enabled smartly for Cdcr configured collection Key: SOLR-11671 URL: https://issues.apache.org/jira/browse/SOLR-11671 Project: Solr Issue Type: Improvement Security Level: Public (Default Security Level. Issues are Public) Components: CDCR Affects Versions: 7.2 Reporter: Amrit Sarkar {{CdcrUpdateLog}} should be configured smartly by itself collection config has CDCR Request Handler specified. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
[ https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16265208#comment-16265208 ] Amrit Sarkar commented on SOLR-11652: - I had a chance to chat with [~erickerickson] and [~varunthacker] to discuss the significance of "buffering" in CDC replication.

Motivation for buffering in CDCR, listed on SOLR-11069 by Renaud: _The original goal of the buffer on cdcr is to indeed keep indefinitely the tlogs until the buffer is deactivated (https://lucene.apache.org/solr/guide/7_1/cross-data-center-replication-cdcr.html#the-buffer-element). This was useful for example during maintenance operations, to ensure that the source cluster will keep all the tlogs until the target cluster is properly initialised. In this scenario, one will activate the buffer on the source. The source will start to store all the tlogs (and does not purge them). Once the target cluster is initialised, and has registered a tlog pointer on the source, one can deactivate the buffer on the source and the tlogs will start to be purged once they are read by the target cluster._

What I understood looking at the code, besides what Renaud explained: _Buffer is always enabled on non-leader nodes of source. In the source DC, sync between leaders and followers is maintained by the buffer. If the leader goes down and someone else picks up, it uses the bufferLog to determine the current version point._

Essentially, buffering was introduced to remind the source that no updates have been sent over, because the target is not ready or CDCR has not been started. The LastProcessedVersion for source is -1 when buffer is enabled, indicating that no updates have been forwarded and it has to keep track of all tlogs. Once disabled, it starts to show the correct version which has been replicated to target.

In Solr 6.2, Bootstrapping was introduced, which takes care of the above use-case very well, i.e. the Source is up and running and has already received a bunch of updates / documents, and either we have not started CDCR or the target has not been available until now. Whenever CDC replication is started (action=START invoked), Bootstrap is called implicitly, which copies the entire index folder (not tlogs) to the target. This is much faster and more effective than the earlier setup where all the updates from the beginning were sent to target linearly in the batch size defined in the cdcr config. That earlier setup was achieved by Buffering (the tlogs from the beginning).

Today, if we look at the current CDCR documentation page, buffering is "disabled" by default in both source and target. We don't see any purpose served by Cdcr buffering, and it is quite an overhead considering it can take a lot of heap space (tlog pointers) and retain tlogs on disk forever when enabled. Also today, even if we disable buffer via the API on source, considering it was enabled at startup, tlogs are never purged on the leader nodes of the source's shards; refer to SOLR-11652.

We propose to make the Buffer state default to "DISABLED" in the code (CdcrBufferManager) and deprecate its APIs (ENABLE / DISABLE buffer). It will still run implicitly for non-leader nodes on source, and no user intervention is required whatsoever. > Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is > disabled from CDCR API > > > Key: SOLR-11652 > URL: https://issues.apache.org/jira/browse/SOLR-11652 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar > > Cdcr transaction logs never get purged on the leader when Buffer is DISABLED > from the CDCR API. > Steps to reproduce: > 1. Setup source and target collection cluster and START CDCR, BUFFER ENABLED. > 2. Index a bunch of documents into source; make sure we have generated tlogs in > decent numbers (>20) > 3. Disable BUFFER via API on source and keep on indexing > 4. Tlogs start to get purged on follower nodes of Source, but the Leader keeps > accumulating them forever. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11601) solr.LatLonPointSpatialField : sorting by geodist fails
[ https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262493#comment-16262493 ] Amrit Sarkar commented on SOLR-11601: - I am using SolrJ 6.6. How about this:
{code}
query.set("sfield","b4_location_geo_si");
query.set("pt","47.36667,8.55");
query.setSort("geodist()", SolrQuery.ORDER.asc);
{code}
I don't see any other way, to be honest. > solr.LatLonPointSpatialField : sorting by geodist fails > --- > > Key: SOLR-11601 > URL: https://issues.apache.org/jira/browse/SOLR-11601 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Clemens Wyss >Priority: Blocker > > I'm switching my schemas from deprecated solr.LatLonType to > solr.LatLonPointSpatialField. > Now my sort query (which used to work with solr.LatLonType): > *sort=geodist(b4_location__geo_si,47.36667,8.55) asc* > raises the error > {color:red}*"sort param could not be parsed as a query, and is not a field > that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color} > Invoking sort using syntax > {color:#14892c}sfield=b4_location__geo_si=47.36667,8.55=geodist() asc > works as expected though...{color} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16262486#comment-16262486 ] Amrit Sarkar commented on SOLR-11412: - +1. It is a lot of scrolling up and down right now. Happy with the 4 sub-sections too. > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Varun Thacker > Attachments: CDCR_bidir.png, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch, SOLR-11412.patch > > > Since SOLR-11003: Bi-directional CDCR scenario support is reaching its > conclusion, the relevant changes in documentation need to be made. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only
[ https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260797#comment-16260797 ] Amrit Sarkar edited comment on SOLR-11600 at 11/21/17 2:34 PM: --- Thank you [~joel.bernstein] for the explanation; bq. Each expression has its own set of rules for the parameters that it accepts so we can get very specific with how type safety is handled I completely understand this from the following example {code} replace( fieldA, add( fieldB, if( eq(fieldC,0), 0, 1))) {code} This nested evaluation and operation is not possible to create with the Java constructors currently available, as the evaluators and operations mostly have just one constructor, taking a {{StreamExpression}} (StreamExpressionParameter interface) parameter, which the evaluators and operators do not implement (they implement the Expressible interface). {code} public AddEvaluator(StreamExpression expression, StreamFactory factory) throws IOException{ super(expression, factory); if(containedEvaluators.size() < 1){ throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - expecting at least one value but found %d",expression,containedEvaluators.size())); } } {code} To accommodate the above request (strongly typed Java objects throughout), we need to create rule-based constructors for all the evaluators and operators, so that they can be used in {{SelectStream}}. 
was (Author: sarkaramr...@gmail.com): Thank you [~joel.bernstein] for the explanation; > Each expression has its own set of rules for the parameters that it accepts > so we can get very specific with how type safety is handled I completely understand this from the following example {code} replace( fieldA, add( fieldB, if( eq(fieldC,0), 0, 1))) {code} This nested evaluation and operation is not possible to create with the Java constructors currently available, as the evaluators and operations mostly have just one constructor, taking a {{StreamExpression}} (StreamExpressionParameter interface) parameter, which the evaluators and operators do not implement (they implement the Expressible interface). {code} public AddEvaluator(StreamExpression expression, StreamFactory factory) throws IOException{ super(expression, factory); if(containedEvaluators.size() < 1){ throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - expecting at least one value but found %d",expression,containedEvaluators.size())); } } {code} To accommodate the above request (strongly typed Java objects throughout), we need to create rule-based constructors for all the evaluators and operators, so that they can be used in {{SelectStream}}. > Add Constructor to SelectStream which takes StreamEvaluators as argument. > Current schema forces one to enter a stream expression string only > - > > Key: SOLR-11600 > URL: https://issues.apache.org/jira/browse/SOLR-11600 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ, streaming expressions >Affects Versions: 6.6.1, 7.1 >Reporter: Aroop >Priority: Trivial > Labels: easyfix > Attachments: SOLR-11600.patch > > > The use case is to be able to supply stream evaluators over a rollup > stream in the following manner, but instead with strongly typed objects > and not streaming-expression strings. 
> {code:bash} > curl --data-urlencode 'expr=select( > id, > div(sum(cat1_i),sum(cat2_i)) as metric1, > coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as > metric2, > rollup( > search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s > asc"), > over="cat_s",sum(cat1_i),sum(cat2_i) > ))' http://localhost:8983/solr/col1/stream > {code} > the current code base does not allow one to provide selectedEvaluators in a > constructor, so one cannot prepare their select stream via java code: > {code:java} > public class SelectStream extends TupleStream implements Expressible { > private static final long serialVersionUID = 1L; > private TupleStream stream; > private StreamContext streamContext; > private MapselectedFields; > private Map selectedEvaluators; > private List operations; > public SelectStream(TupleStream stream, List selectedFields) > throws IOException { > this.stream = stream; > this.selectedFields = new HashMap(); > Iterator var3 = selectedFields.iterator(); > while(var3.hasNext()) { > String selectedField = (String)var3.next(); >
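A hypothetical sketch of the kind of constructor this issue asks for, using simplified stand-in types rather than SolrJ's real {{TupleStream}} / {{StreamEvaluator}} API; the real change would add an analogous overload to {{SelectStream}} itself:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TypedSelect {
    interface Evaluator {}   // stand-in for StreamEvaluator
    static class Stream {}   // stand-in for TupleStream

    private final Stream stream;
    private final Map<String, String> selectedFields = new HashMap<>();
    private final Map<Evaluator, String> selectedEvaluators;

    // Existing style: plain field names only.
    TypedSelect(Stream stream, List<String> fields) {
        this(stream, Map.of());
        for (String f : fields) selectedFields.put(f, f);
    }

    // Proposed style: evaluators passed as typed objects, mapped to their
    // output ("as") names, instead of a parsed expression string.
    TypedSelect(Stream stream, Map<Evaluator, String> evaluators) {
        this.stream = stream;
        this.selectedEvaluators = new HashMap<>(evaluators);
    }

    int evaluatorCount() { return selectedEvaluators.size(); }
}
```

With such an overload, the {{div(sum(cat1_i),sum(cat2_i)) as metric1}} projection above could be built in Java code directly rather than embedded in an expression string.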
[jira] [Commented] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only
[ https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260797#comment-16260797 ] Amrit Sarkar commented on SOLR-11600: - Thank you [~joel.bernstein] for the explanation; > Each expression has its own set of rules for the parameters that it accepts > so we can get very specific with how type safety is handled I completely understand this from the following example {code} replace( fieldA, add( fieldB, if( eq(fieldC,0), 0, 1))) {code} This nested evaluation and operation cannot be created with the Java constructors currently available, as the evaluators and operations mostly have just one constructor, taking a {{StreamExpression}} (StreamExpressionParameter interface) parameter which the evaluators and operators don't implement (they implement the Expressible interface). {code} public AddEvaluator(StreamExpression expression, StreamFactory factory) throws IOException{ super(expression, factory); if(containedEvaluators.size() < 1){ throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - expecting at least one value but found %d",expression,containedEvaluators.size())); } } {code} To accommodate the above request, strongly typed Java objects for all, we need to create rule-based constructors for all the evaluators and operators, so that they can be used in {{SelectStream}}. > Add Constructor to SelectStream which takes StreamEvaluators as argument. > Current schema forces one to enter a stream expression string only > - > > Key: SOLR-11600 > URL: https://issues.apache.org/jira/browse/SOLR-11600 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. 
Issues are Public) > Components: SolrJ, streaming expressions >Affects Versions: 6.6.1, 7.1 >Reporter: Aroop >Priority: Trivial > Labels: easyfix > Attachments: SOLR-11600.patch > > > The use case is to be able to supply stream evaluators over a rollup > stream in the following manner, but with strongly typed objects > instead of streaming-expression strings. > {code:bash} > curl --data-urlencode 'expr=select( > id, > div(sum(cat1_i),sum(cat2_i)) as metric1, > coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as > metric2, > rollup( > search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s > asc"), > over="cat_s",sum(cat1_i),sum(cat2_i) > ))' http://localhost:8983/solr/col1/stream > {code} > the current code base does not allow one to provide selectedEvaluators in a > constructor, so one cannot prepare their select stream via java code: > {code:java} > public class SelectStream extends TupleStream implements Expressible { > private static final long serialVersionUID = 1L; > private TupleStream stream; > private StreamContext streamContext; > private Map<String, String> selectedFields; > private Map<StreamEvaluator, String> selectedEvaluators; > private List<StreamOperation> operations; > public SelectStream(TupleStream stream, List<String> selectedFields) > throws IOException { > this.stream = stream; > this.selectedFields = new HashMap<>(); > Iterator var3 = selectedFields.iterator(); > while(var3.hasNext()) { > String selectedField = (String)var3.next(); > this.selectedFields.put(selectedField, selectedField); > } > this.operations = new ArrayList<>(); > this.selectedEvaluators = new HashMap<>(); > } > public SelectStream(TupleStream stream, Map<String, String> > selectedFields) throws IOException { > this.stream = stream; > this.selectedFields = selectedFields; > this.operations = new ArrayList<>(); > this.selectedEvaluators = new HashMap<>(); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, 
e-mail: dev-h...@lucene.apache.org
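The rule-based, strongly typed construction requested above can be sketched in plain Java with hypothetical stand-in types (these are NOT the SolrJ evaluator classes; {{Eval}}, {{field}}, {{iff}} and {{demo}} are made-up names for illustration only), showing what a typed build of replace(fieldA, add(fieldB, if(eq(fieldC,0), 0, 1))) could look like:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;
import java.util.function.Predicate;

// Hypothetical stand-ins (NOT the SolrJ evaluator classes) sketching how the
// nested expression replace(fieldA, add(fieldB, if(eq(fieldC,0), 0, 1)))
// could be composed from strongly typed Java objects instead of an expression string.
public class TypedEvaluatorSketch {
    interface Eval extends Function<Map<String, Double>, Double> {}

    static Eval field(String name) { return tuple -> tuple.get(name); }
    static Predicate<Map<String, Double>> eq(Eval e, double v) {
        return tuple -> e.apply(tuple) == v;
    }
    static Eval iff(Predicate<Map<String, Double>> cond, double ifTrue, double ifFalse) {
        return tuple -> cond.test(tuple) ? ifTrue : ifFalse;
    }
    static Eval add(Eval a, Eval b) { return tuple -> a.apply(tuple) + b.apply(tuple); }

    // Evaluate the replacement value for fieldA on one sample tuple.
    public static double demo() {
        Eval replacement = add(field("fieldB"), iff(eq(field("fieldC"), 0), 0, 1));
        Map<String, Double> tuple = new HashMap<>();
        tuple.put("fieldA", 10.0);
        tuple.put("fieldB", 5.0);
        tuple.put("fieldC", 0.0);
        return replacement.apply(tuple); // fieldB + 0 = 5.0
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints 5.0
    }
}
```

The point of the sketch is only that each combinator's signature enforces its own parameter rules at compile time, which is what rule-based constructors on the real evaluators would provide.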
[jira] [Commented] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings
[ https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16260004#comment-16260004 ] Amrit Sarkar commented on SOLR-11635: - [~varunthacker] Not really; action=START triggers the process state manager, buffer manager, replicator and the other CDCR components to get in sync and start replicating to the target with the available parameters. On the target, since no replication to another DC needs to be done, that is probably why the casual wording "There is no need to run the /cdcr?action=START command on the Target" was used. > CDCR Source configuration example in the ref guide leaves out important > settings > > > Key: SOLR-11635 > URL: https://issues.apache.org/jira/browse/SOLR-11635 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Minor > Fix For: 7.2, master (8.0) > > Attachments: cdcr-doc.patch > > > If you blindly copy/paste the Source config from the example, your > transaction logs on the Source replicas will not be managed correctly. > Plus another couple of improvements, in particular a caution about why > buffering should be disabled most of the time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings
[ https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11635: Attachment: cdcr-doc.patch I am attaching a patch for the CDCR doc, correcting the "Initial Startup" section to issue CDCR START on the target too. > CDCR Source configuration example in the ref guide leaves out important > settings > > > Key: SOLR-11635 > URL: https://issues.apache.org/jira/browse/SOLR-11635 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Erick Erickson >Assignee: Erick Erickson >Priority: Minor > Fix For: 7.2, master (8.0) > > Attachments: cdcr-doc.patch > > > If you blindly copy/paste the Source config from the example, your > transaction logs on the Source replicas will not be managed correctly. > Plus another couple of improvements, in particular a caution about why > buffering should be disabled most of the time. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
[ https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16259966#comment-16259966 ] Amrit Sarkar commented on SOLR-11652: - More details on the behavior: 1. The CDCR target leader's tlogs don't get purged unless action=START is issued on the target. 2. The CDCR source leader's tlogs don't get purged when DISABLEBUFFER is issued from the API. 3. If the CDCR source is restarted after an earlier DISABLEBUFFER from the API, it behaves normally. 4. Point #3 is expected, as the source will read the buffer config from ZK on load and mark the tlogs at the beginning, similar to reading it from solrconfig.xml. > Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is > disabled from CDCR API > > > Key: SOLR-11652 > URL: https://issues.apache.org/jira/browse/SOLR-11652 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar > > CDCR transaction logs never get purged on the leader when the buffer is DISABLED > from the CDCR API. > Steps to reproduce: > 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED. > 2. Index a bunch of documents into the source; make sure we have generated tlogs in > decent numbers (>20) > 3. Disable BUFFER via the API on the source and keep indexing > 4. Tlogs start to get purged on the follower nodes of the Source, but the Leader keeps > accumulating them forever. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
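As a minimal sketch of the API calls behind the observations above (host, port and collection names are placeholders; START, DISABLEBUFFER and QUEUES are the CDCR API actions referred to in the comment):

```java
// Sketch of the CDCR API request URLs used in the steps above.
// "source-host" and "sourceColl" are placeholder names, not real endpoints.
public class CdcrRequests {
    static String cdcr(String baseUrl, String collection, String action) {
        return baseUrl + "/solr/" + collection + "/cdcr?action=" + action;
    }

    public static void main(String[] args) {
        String src = "http://source-host:8983";
        System.out.println(cdcr(src, "sourceColl", "START"));         // start CDCR on the source
        System.out.println(cdcr(src, "sourceColl", "DISABLEBUFFER")); // disable the update-log buffer
        System.out.println(cdcr(src, "sourceColl", "QUEUES"));        // inspect tlog/queue state
    }
}
```

Watching the QUEUES output before and after DISABLEBUFFER is one way to observe whether the leader's tlogs are actually being purged.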
[jira] [Commented] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only
[ https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16259537#comment-16259537 ] Amrit Sarkar commented on SOLR-11600: - Meanwhile I had a second look at your description; you are asking for proper Java constructors. That is a bit challenging considering {{StreamOperation}} is an interface, not a class to which we can pass an incoming raw string value. I will see what can be done. > Add Constructor to SelectStream which takes StreamEvaluators as argument. > Current schema forces one to enter a stream expression string only > - > > Key: SOLR-11600 > URL: https://issues.apache.org/jira/browse/SOLR-11600 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ, streaming expressions >Affects Versions: 6.6.1, 7.1 >Reporter: Aroop >Priority: Trivial > Labels: easyfix > Attachments: SOLR-11600.patch > > > The use case is to be able to supply stream evaluators over a rollup > stream in the following manner, but with strongly typed objects > instead of streaming-expression strings. 
> {code:bash} > curl --data-urlencode 'expr=select( > id, > div(sum(cat1_i),sum(cat2_i)) as metric1, > coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as > metric2, > rollup( > search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s > asc"), > over="cat_s",sum(cat1_i),sum(cat2_i) > ))' http://localhost:8983/solr/col1/stream > {code} > the current code base does not allow one to provide selectedEvaluators in a > constructor, so one cannot prepare their select stream via java code: > {code:java} > public class SelectStream extends TupleStream implements Expressible { > private static final long serialVersionUID = 1L; > private TupleStream stream; > private StreamContext streamContext; > private Map<String, String> selectedFields; > private Map<StreamEvaluator, String> selectedEvaluators; > private List<StreamOperation> operations; > public SelectStream(TupleStream stream, List<String> selectedFields) > throws IOException { > this.stream = stream; > this.selectedFields = new HashMap<>(); > Iterator var3 = selectedFields.iterator(); > while(var3.hasNext()) { > String selectedField = (String)var3.next(); > this.selectedFields.put(selectedField, selectedField); > } > this.operations = new ArrayList<>(); > this.selectedEvaluators = new HashMap<>(); > } > public SelectStream(TupleStream stream, Map<String, String> > selectedFields) throws IOException { > this.stream = stream; > this.selectedFields = selectedFields; > this.operations = new ArrayList<>(); > this.selectedEvaluators = new HashMap<>(); > } > {code} -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
[ https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11652: Description: CDCR transaction logs never get purged on the leader when the buffer is DISABLED from the CDCR API. Steps to reproduce: 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED. 2. Index a bunch of documents into the source; make sure we have generated tlogs in decent numbers (>20) 3. Disable BUFFER via the API on the source and keep indexing 4. Tlogs start to get purged on the follower nodes of the Source, but the Leader keeps accumulating them forever. was: CDCR transaction logs never get purged on the leader when the buffer is DISABLED from the CDCR API. Steps to reproduce: 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED. 2. Index a bunch of documents into the source; make sure we have generated tlogs in decent numbers (>20) 3. Disable BUFFER on the source and keep indexing 4. Tlogs start to get purged on the follower nodes of the Source, but the Leader keeps accumulating them forever. > Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is > disabled from CDCR API > > > Key: SOLR-11652 > URL: https://issues.apache.org/jira/browse/SOLR-11652 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar > > CDCR transaction logs never get purged on the leader when the buffer is DISABLED > from the CDCR API. > Steps to reproduce: > 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED. > 2. Index a bunch of documents into the source; make sure we have generated tlogs in > decent numbers (>20) > 3. Disable BUFFER via the API on the source and keep indexing > 4. Tlogs start to get purged on the follower nodes of the Source, but the Leader keeps > accumulating them forever. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
[ https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11652: Description: CDCR transaction logs never get purged on the leader when the buffer is DISABLED from the CDCR API. Steps to reproduce: 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED. 2. Index a bunch of documents into the source; make sure we have generated tlogs in decent numbers (>20) 3. Disable BUFFER on the source and keep indexing 4. Tlogs start to get purged on the follower nodes of the Source, but the Leader keeps accumulating them forever. was: CDCR transaction logs never get purged on the leader when the buffer is DISABLED from the CDCR API. More details to follow. > Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is > disabled from CDCR API > > > Key: SOLR-11652 > URL: https://issues.apache.org/jira/browse/SOLR-11652 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Reporter: Amrit Sarkar > > CDCR transaction logs never get purged on the leader when the buffer is DISABLED > from the CDCR API. > Steps to reproduce: > 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED. > 2. Index a bunch of documents into the source; make sure we have generated tlogs in > decent numbers (>20) > 3. Disable BUFFER on the source and keep indexing > 4. Tlogs start to get purged on the follower nodes of the Source, but the Leader keeps > accumulating them forever. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only
[ https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258474#comment-16258474 ] Amrit Sarkar edited comment on SOLR-11600 at 11/19/17 12:44 PM: Examples are listed under https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses and http://joelsolr.blogspot.in/2015/04/the-streaming-api-solrjio-basics.html. I have cooked one example against the {{master}} branch, which strictly required httpClient::4.5.3 {code} package stream.example; import org.apache.solr.client.solrj.SolrServerException; import org.apache.solr.client.solrj.io.SolrClientCache; import org.apache.solr.client.solrj.io.Tuple; import org.apache.solr.client.solrj.io.eval.DivideEvaluator; import org.apache.solr.client.solrj.io.stream.CloudSolrStream; import org.apache.solr.client.solrj.io.stream.SelectStream; import org.apache.solr.client.solrj.io.stream.StreamContext; import org.apache.solr.client.solrj.io.stream.TupleStream; import org.apache.solr.client.solrj.io.stream.expr.StreamFactory; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.IOException; import java.lang.invoke.MethodHandles; import java.util.ArrayList; import java.util.List; public class QuerySolr { private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); static StreamFactory streamFactory = new StreamFactory() .withCollectionZkHost("collection1","localhost:9983") .withFunctionName("select", SelectStream.class) .withFunctionName("search", CloudSolrStream.class) .withFunctionName("div", DivideEvaluator.class); public static void main(String args[]) throws IOException, SolrServerException { SelectStream stream = (SelectStream)streamFactory .constructStream("select(\n" + " search(collection1, fl=\"id,A_i,B_i\", q=\"*:*\", sort=\"id asc\"),\n" + " id as UNIQUE_KEY,\n" + " div(A_i,B_i) as divRes\n" + ")"); attachStreamFactory(stream); List<Tuple> tuples = 
getTuples(stream); for (Tuple tuple : tuples) { log.info("tuple: " + tuple.getMap()); System.out.println("tuple: " + tuple.getMap()); } System.exit(0); } private static void attachStreamFactory(TupleStream tupleStream) { StreamContext context = new StreamContext(); context.setSolrClientCache(new SolrClientCache()); context.setStreamFactory(streamFactory); tupleStream.setStreamContext(context); } private static List<Tuple> getTuples(TupleStream tupleStream) throws IOException { tupleStream.open(); List<Tuple> tuples = new ArrayList<>(); for(;;) { Tuple t = tupleStream.read(); if(t.EOF) { break; } else { tuples.add(t); } } tupleStream.close(); return tuples; } } {code} I need {{System.exit(0);}} to terminate the program, so I am pretty sure some HttpClient is not getting closed properly. *_Also, the patch above is absolutely not required to make this work_*; we can move forward with the above examples, and streams can be constructed without adding constructors to each stream source, decorator or evaluator. The only condition is that we have to pass our own {{streamFactory}}. Hope it helps. P.S. Please disregard the PATCH, it serves no purpose. was (Author: sarkaramr...@gmail.com): Examples are listed under https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses and http://joelsolr.blogspot.in/2015/04/the-streaming-api-solrjio-basics.html. 
I have cooked one example against the {{master}} branch, which strictly required httpClient::4.5.3 {code} package stream.example; import org.apache.solr.client.solrj.SolrServerException; import org.apache.solr.client.solrj.io.SolrClientCache; import org.apache.solr.client.solrj.io.Tuple; import org.apache.solr.client.solrj.io.eval.DivideEvaluator; import org.apache.solr.client.solrj.io.stream.CloudSolrStream; import org.apache.solr.client.solrj.io.stream.SelectStream; import org.apache.solr.client.solrj.io.stream.StreamContext; import org.apache.solr.client.solrj.io.stream.TupleStream; import org.apache.solr.client.solrj.io.stream.expr.StreamFactory; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.IOException; import java.lang.invoke.MethodHandles; import java.util.ArrayList; import java.util.List; public class QuerySolr { private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); static StreamFactory streamFactory = new StreamFactory()
[jira] [Commented] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only
[ https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258474#comment-16258474 ] Amrit Sarkar commented on SOLR-11600: - Examples are listed under https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses and http://joelsolr.blogspot.in/2015/04/the-streaming-api-solrjio-basics.html. I have cooked one example against the {{master}} branch, which strictly required httpClient::4.5.3 {code} package stream.example; import org.apache.solr.client.solrj.SolrServerException; import org.apache.solr.client.solrj.io.SolrClientCache; import org.apache.solr.client.solrj.io.Tuple; import org.apache.solr.client.solrj.io.eval.DivideEvaluator; import org.apache.solr.client.solrj.io.stream.CloudSolrStream; import org.apache.solr.client.solrj.io.stream.SelectStream; import org.apache.solr.client.solrj.io.stream.StreamContext; import org.apache.solr.client.solrj.io.stream.TupleStream; import org.apache.solr.client.solrj.io.stream.expr.StreamFactory; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import java.io.IOException; import java.lang.invoke.MethodHandles; import java.util.ArrayList; import java.util.List; public class QuerySolr { private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass()); static StreamFactory streamFactory = new StreamFactory() .withCollectionZkHost("collection1","localhost:9983") .withFunctionName("select", SelectStream.class) .withFunctionName("search", CloudSolrStream.class) .withFunctionName("div", DivideEvaluator.class); public static void main(String args[]) throws IOException, SolrServerException { SelectStream stream = (SelectStream)streamFactory .constructStream("select(\n" + " search(collection1, fl=\"id,A_i,B_i\", q=\"*:*\", sort=\"id asc\"),\n" + " id as UNIQUE_KEY,\n" + " div(A_i,B_i) as divRes\n" + ")"); attachStreamFactory(stream); List<Tuple> tuples = getTuples(stream); for (Tuple tuple 
: tuples) { log.info("tuple: " + tuple.getMap()); System.out.println("tuple: " + tuple.getMap()); } System.exit(0); } private static void attachStreamFactory(TupleStream tupleStream) { StreamContext context = new StreamContext(); context.setSolrClientCache(new SolrClientCache()); context.setStreamFactory(streamFactory); tupleStream.setStreamContext(context); } private static List<Tuple> getTuples(TupleStream tupleStream) throws IOException { tupleStream.open(); List<Tuple> tuples = new ArrayList<>(); for(;;) { Tuple t = tupleStream.read(); if(t.EOF) { break; } else { tuples.add(t); } } tupleStream.close(); return tuples; } } {code} I need {{System.exit(0);}} to terminate the program, so I am pretty sure some HttpClient is not getting closed properly. *_Also, the patch above is absolutely not required to make this work_*; we can move forward with the above examples, and streams can be constructed without adding constructors to each stream source, decorator or evaluator. The only condition is that we have to pass our own {{streamFactory}}. Hope it helps. P.S. Please disregard the PATCH, it serves no purpose. > Add Constructor to SelectStream which takes StreamEvaluators as argument. > Current schema forces one to enter a stream expression string only > - > > Key: SOLR-11600 > URL: https://issues.apache.org/jira/browse/SOLR-11600 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: SolrJ, streaming expressions >Affects Versions: 6.6.1, 7.1 >Reporter: Aroop >Priority: Trivial > Labels: easyfix > Attachments: SOLR-11600.patch > > > The use case is to be able to supply stream evaluators over a rollup > stream in the following manner, but with strongly typed objects > instead of streaming-expression strings. 
> {code:bash} > curl --data-urlencode 'expr=select( > id, > div(sum(cat1_i),sum(cat2_i)) as metric1, > coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as > metric2, > rollup( > search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s > asc"), > over="cat_s",sum(cat1_i),sum(cat2_i) > ))' http://localhost:8983/solr/col1/stream > {code} > the current code base does not allow one to provide
[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258392#comment-16258392 ] Amrit Sarkar commented on SOLR-11598: - [~aroopganguly], bq. I will perform tests with the patch and share results if permitted. I think everyone would be pleased if you share. I tested with 1M records and saw almost no performance degradation, but I think we need to verify this on a larger dataset. bq. Also, if you have determined this to have O(N) performance characteristic, are you planning to make it a lot larger and not bounded under 10? There may be a very good chance I am missing some factor in terms of performance when sorting n-dimensional variables, as [~joel.bernstein] mentioned. I think after analysing your test results, we can safely conclude whether we can increase the bound or whether even 10 is too high. Thanks. > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> This is a big limitation for me, as I am working on a feature with a tight > deadline where I need to support 10 dimensional rollups. I did not read any > limitation on the sorting in the documentation and we went ahead with the > installation of 6.6.1. Now we are blocked with this limitation. > This is a Jira to track this work. > [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at >
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11598: Attachment: SOLR-11598.patch Looking at the {{SortDoc}} implementations for Single, Double, .. Quad in {{ExportWriter.java}}, it seems like repeated code to me, since most of it is already implemented in {{SortDoc}} except the {{compareTo}} function, which I implemented in the newly uploaded patch. All the tests pass. Also increasing the max sort fields to 10, as repeated tests on a large dataset with increased sort fields showed very little difference in performance. Looking at the code closely ({{lessThan}} and {{compareTo}} methods), the performance difference seems to be linear rather than exponential/polynomial. > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch, SOLR-11598.patch > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. 
> This is a big limitation for me, as I am working on a feature with a tight > deadline where I need to support 10 dimensional rollups. I did not read any > limitation on the sorting in the documentation and we went ahead with the > installation of 6.6.1. Now we are blocked with this limitation. > This is a Jira to track this work. > [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > 
org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at >
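The refactor described in the comment above (one generic {{compareTo}} loop in place of the hand-written Single/Double/.. Quad {{SortDoc}} variants) can be sketched roughly as follows. All names here are illustrative, not the actual {{ExportWriter}} API; the point is that comparison cost grows linearly with the number of sort fields, since the loop stops at the first non-equal field:

```java
// Illustrative sketch only -- not the real Solr SortDoc/ExportWriter code.
// One compareTo loop replaces the per-arity (Single/Double/Triple/Quad) classes.
public class MultiFieldSortDoc implements Comparable<MultiFieldSortDoc> {
    // One sort value per sort field, highest-priority field first.
    private final Comparable[] sortValues;

    public MultiFieldSortDoc(Comparable... sortValues) {
        this.sortValues = sortValues;
    }

    @Override
    @SuppressWarnings("unchecked")
    public int compareTo(MultiFieldSortDoc that) {
        // The first field that differs decides the order, so the work done
        // is linear in the number of sort fields, not exponential.
        for (int i = 0; i < sortValues.length; i++) {
            int cmp = sortValues[i].compareTo(that.sortValues[i]);
            if (cmp != 0) {
                return cmp;
            }
        }
        return 0;
    }

    public static void main(String[] args) {
        MultiFieldSortDoc a = new MultiFieldSortDoc(1, "x");
        MultiFieldSortDoc b = new MultiFieldSortDoc(1, "y");
        System.out.println(a.compareTo(b)); // second field breaks the tie
    }
}
```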
[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs
[ https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257705#comment-16257705 ] Amrit Sarkar commented on SOLR-8389: [~prusko], Thank you for coming up with the patch. Allow me some time to go through the improvement; I will definitely seek your help and collaboration. Thanks Amrit Sarkar > Convert CDCR peer cluster and other configurations into collection properties > modifiable via APIs > - > > Key: SOLR-8389 > URL: https://issues.apache.org/jira/browse/SOLR-8389 > Project: Solr > Issue Type: Improvement > Components: CDCR, SolrCloud >Reporter: Shalin Shekhar Mangar > Attachments: SOLR-8389.patch > > > CDCR configuration is kept inside solrconfig.xml which makes it difficult to > add or change peer cluster configuration. > I propose to move all CDCR config to collection level properties in cluster > state so that they can be modified using the existing modify collection API. -- This message was sent by Atlassian JIRA (v6.4.14#64029) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11412: Attachment: CDCR_bidir.png SOLR-11412.patch > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Varun Thacker > Attachments: CDCR_bidir.png, SOLR-11412.patch, SOLR-11412.patch, > SOLR-11412.patch, SOLR-11412.patch > > > Since SOLR-11003: Bi-directional CDCR scenario support is reaching its > conclusion, the relevant changes in documentation need to be done.
[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16257702#comment-16257702 ] Amrit Sarkar commented on SOLR-11412: - Fixed the patch and added Erick's SOLR-11635 to the bi-directional CDCR configurations. Also updated Cdcr-bidir.png. [~varunthacker], this is ready to go; awaiting your review and feedback. > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Varun Thacker > Attachments: SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch > > > Since SOLR-11003: Bi-directional CDCR scenario support is reaching its > conclusion, the relevant changes in documentation need to be done.
[jira] [Updated] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support
[ https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11412: Attachment: (was: CDCR-bidir.png) > Documentation changes for SOLR-11003: Bi-directional CDCR support > - > > Key: SOLR-11412 > URL: https://issues.apache.org/jira/browse/SOLR-11412 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: CDCR, documentation >Reporter: Amrit Sarkar >Assignee: Varun Thacker > Attachments: SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch > > > Since SOLR-11003: Bi-directional CDCR scenario support is reaching its > conclusion, the relevant changes in documentation need to be done.
[jira] [Commented] (SOLR-11601) solr.LatLonPointSpatialField : sorting by geodist fails
[ https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255877#comment-16255877 ] Amrit Sarkar commented on SOLR-11601: - Hi Clemens, It doesn't fail; it is *intended behavior.* I replicated your scenario on my system and it threw this stack trace: {code} Caused by: org.apache.solr.common.SolrException: A ValueSource isn't directly available from this field. Instead try a query using the distance as the score. at org.apache.solr.schema.AbstractSpatialFieldType.getValueSource(AbstractSpatialFieldType.java:334) at org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384) at org.apache.solr.search.FunctionQParser.parseValueSourceList(FunctionQParser.java:227) at org.apache.solr.search.function.distance.GeoDistValueSourceParser.parse(GeoDistValueSourceParser.java:54) at org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:370) at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:82) at org.apache.solr.search.QParser.getQuery(QParser.java:168) at org.apache.solr.search.SortSpecParsing.parseSortSpecImpl(SortSpecParsing.java:120) ... 37 more {code} When I looked at org.apache.solr.schema.AbstractSpatialFieldType.getValueSource(AbstractSpatialFieldType.java:334): {code} @Override public ValueSource getValueSource(SchemaField field, QParser parser) { //This is different from Solr 3 LatLonType's approach which uses the MultiValueSource concept to directly expose // the x & y pair of FieldCache value sources. throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, "A ValueSource isn't directly available from this field. Instead try a query using the distance as the score."); } {code} _This function only implements this particular use-case and throws that particular exception._ You should keep using {{sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc}}, which is neat too compared to geodist(...,...,...). 
> solr.LatLonPointSpatialField : sorting by geodist fails > --- > > Key: SOLR-11601 > URL: https://issues.apache.org/jira/browse/SOLR-11601 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 6.6 >Reporter: Clemens Wyss >Priority: Blocker > > I'm switching my schemas from the deprecated solr.LatLonType to > solr.LatLonPointSpatialField. > Now my sort query (which used to work with solr.LatLonType): > *sort=geodist(b4_location__geo_si,47.36667,8.55) asc* > raises the error > {color:red}*"sort param could not be parsed as a query, and is not a field > that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color} > Invoking sort using the syntax > {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc > works as expected though...{color}
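For reference, the recommended request form can be assembled from client code like this. The host, port, and collection name below are placeholders; only the {{sfield}}/{{pt}}/{{sort=geodist() asc}} parameter trio comes from the comment above:

```java
// Builds the /select URL using the sfield/pt/sort=geodist() form that works
// with LatLonPointSpatialField. Host, port, and collection are placeholders.
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class GeodistQuery {
    public static String buildSelectUrl(String sfield, String pt) {
        try {
            return "http://localhost:8983/solr/mycollection/select"
                    + "?q=" + URLEncoder.encode("*:*", "UTF-8")
                    + "&sfield=" + URLEncoder.encode(sfield, "UTF-8")
                    + "&pt=" + URLEncoder.encode(pt, "UTF-8")
                    + "&sort=" + URLEncoder.encode("geodist() asc", "UTF-8");
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException("UTF-8 is always supported", e);
        }
    }

    public static void main(String[] args) {
        System.out.println(buildSelectUrl("b4_location__geo_si", "47.36667,8.55"));
    }
}
```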
[jira] [Created] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API
Amrit Sarkar created SOLR-11652: --- Summary: Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API Key: SOLR-11652 URL: https://issues.apache.org/jira/browse/SOLR-11652 Project: Solr Issue Type: Bug Security Level: Public (Default Security Level. Issues are Public) Reporter: Amrit Sarkar CDCR transaction logs never get purged on the leader when the buffer is disabled from the CDCR API. More details to follow.
[jira] [Updated] (SOLR-11650) Credentials used for BasicAuth displayed in clear text on slave nodes
[ https://issues.apache.org/jira/browse/SOLR-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11650: Attachment: SOLR-11650.patch Potential patch. I don't have the bandwidth right now to test this out; once I do, I will validate whether we can use this patch or post an updated one. > Credentials used for BasicAuth displayed in clear text on slave nodes > - > > Key: SOLR-11650 > URL: https://issues.apache.org/jira/browse/SOLR-11650 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: 6.6.2 >Reporter: Constantin Bugneac >Priority: Critical > Attachments: SOLR-11650.patch, Screen Shot 2017-11-16 at 10.48.38.png > > > Pre-requisites: > Have in place Solr configured in master slave replication with BasicAuth > enabled. > Issue: > In UI on slave (under Replication tab of core) the master url is displayed > with username and password used for BasicAuth in clear text. > Example: > master url:https://solr:sdjudf3t...@solr-master.local.com:8983/solr/mycore > (see attached the screenshot) > Suggestion/Idea: > At least mask the password with ***
[jira] [Commented] (SOLR-11650) Credentials used for BasicAuth displayed in clear text on slave nodes
[ https://issues.apache.org/jira/browse/SOLR-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16255321#comment-16255321 ] Amrit Sarkar commented on SOLR-11650: - I can see the hashed value of the password; it's a cakewalk to retrieve the password from that. This should be addressed promptly. > Credentials used for BasicAuth displayed in clear text on slave nodes > - > > Key: SOLR-11650 > URL: https://issues.apache.org/jira/browse/SOLR-11650 > Project: Solr > Issue Type: Bug > Security Level: Public(Default Security Level. Issues are Public) > Components: Authentication >Affects Versions: 6.6.2 >Reporter: Constantin Bugneac >Priority: Critical > Attachments: Screen Shot 2017-11-16 at 10.48.38.png > > > Pre-requisites: > Have in place Solr configured in master slave replication with BasicAuth > enabled. > Issue: > In UI on slave (under Replication tab of core) the master url is displayed > with username and password used for BasicAuth in clear text. > Example: > master url:https://solr:sdjudf3t...@solr-master.local.com:8983/solr/mycore > (see attached the screenshot) > Suggestion/Idea: > At least mask the password with ***
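The masking suggested in the issue above can be done with a small regex on the displayed URL before it reaches the UI. This is a hypothetical sketch, not the actual Solr replication-details code:

```java
// Hypothetical sketch of masking the userinfo password in a displayed URL,
// as suggested in the issue above ("at least mask the password with ***").
public class MaskCredentials {
    public static String mask(String url) {
        // Replaces whatever sits between "//user:" and "@" with "***".
        return url.replaceAll("(//[^/:@]+:)[^@]+@", "$1***@");
    }

    public static void main(String[] args) {
        // "hunter2" is a placeholder password, not a real credential.
        System.out.println(mask("https://solr:hunter2@solr-master.local.com:8983/solr/mycore"));
        // -> https://solr:***@solr-master.local.com:8983/solr/mycore
    }
}
```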
[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"
[ https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16253406#comment-16253406 ] Amrit Sarkar commented on SOLR-11613: - I see; that works best for both cases. Uploaded the patch with a one-line change. Thank you for the reasoning above. > Improve error in admin UI "Sorry, no dataimport-handler defined" > > > Key: SOLR-11613 > URL: https://issues.apache.org/jira/browse/SOLR-11613 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.1 >Reporter: Shawn Heisey >Priority: Minor > Labels: newdev > Attachments: SOLR-11613.patch > > > When the config has no working dataimport handlers, clicking on the > "dataimport" tab for a core/collection shows an error message that states > "Sorry, no dataimport-handler defined". This is a little bit vague. > One idea for an improved message: "The solrconfig.xml file for this index > does not have an operational dataimport handler defined."
[jira] [Updated] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"
[ https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11613: Attachment: SOLR-11613.patch > Improve error in admin UI "Sorry, no dataimport-handler defined" > > > Key: SOLR-11613 > URL: https://issues.apache.org/jira/browse/SOLR-11613 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.1 >Reporter: Shawn Heisey >Priority: Minor > Labels: newdev > Attachments: SOLR-11613.patch > > > When the config has no working dataimport handlers, clicking on the > "dataimport" tab for a core/collection shows an error message that states > "Sorry, no dataimport-handler defined". This is a little bit vague. > One idea for an improved message: "The solrconfig.xml file for this index > does not have an operational dataimport handler defined."
[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"
[ https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16253371#comment-16253371 ] Amrit Sarkar commented on SOLR-11613: - [~elyograg] maybe "core" / "collection" rather than "index": "The solrconfig.xml file for this collection does not have an operational dataimport handler defined!" Let me know what suits best; this will be a very small patch, which I can drive through. > Improve error in admin UI "Sorry, no dataimport-handler defined" > > > Key: SOLR-11613 > URL: https://issues.apache.org/jira/browse/SOLR-11613 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) >Affects Versions: 7.1 >Reporter: Shawn Heisey >Priority: Minor > Labels: newdev > > When the config has no working dataimport handlers, clicking on the > "dataimport" tab for a core/collection shows an error message that states > "Sorry, no dataimport-handler defined". This is a little bit vague. > One idea for an improved message: "The solrconfig.xml file for this index > does not have an operational dataimport handler defined."
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11598: Attachment: SOLR-11598-6_6-streamtests Added another "experimental" patch, {{SOLR-11598-6_6-streamtests}}, against {{branch_6_6}} with *nocommit*; stream expressions (unique & rollup) can now take more than 4 sort fields. Please note, these patches are purely for experimental performance analysis. > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop > Labels: patch > Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, > SOLR-11598-master.patch > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. > This is a big limitation for me, as I am working on a feature with a tight > deadline where I need to support 10 dimensional rollups. I did not read any > limitation on the sorting in the documentation and we went ahead with the > installation of 6.6.1. Now we are blocked with this limitation. > This is a Jira to track this work. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at >
[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.
[ https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Amrit Sarkar updated SOLR-11598: Attachment: SOLR-11598-6_6.patch SOLR-11598-master.patch [~aroopganguly], I have attached patches against the {{master}} and {{branch_6_6}} branches supporting a maximum of 8 sort fields instead of the current 4, so that we can analyse how performance is affected. I have also included very basic but effective tests for {{ExportWriter}}. > Export Writer needs to support more than 4 Sort fields - Say 10, ideally it > should not be bound at all, but 4 seems to really short sell the StreamRollup > capabilities. > --- > > Key: SOLR-11598 > URL: https://issues.apache.org/jira/browse/SOLR-11598 > Project: Solr > Issue Type: Improvement > Security Level: Public(Default Security Level. Issues are Public) > Components: streaming expressions >Affects Versions: 6.6.1, 7.0 >Reporter: Aroop > Labels: patch > Attachments: SOLR-11598-6_6.patch, SOLR-11598-master.patch > > > I am a user of Streaming and I am currently trying to use rollups on a 10 > dimensional document. > I am unable to get correct results on this query as I am bounded by the > limitation of the export handler which supports only 4 sort fields. > I do not see why this needs to be the case, as it could very well be 10 or 20. > My current needs would be satisfied with 10, but one would want to ask why > can't it be any decent integer n, beyond which we know performance degrades, > but even then it should be caveat emptor. > This is a big limitation for me, as I am working on a feature with a tight > deadline where I need to support 10 dimensional rollups. I did not read any > limitation on the sorting in the documentation and we went ahead with the > installation of 6.6.1. Now we are blocked with this limitation. > This is a Jira to track this work. 
> [~varunthacker] > Code Link: > https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455 > Error > null:java.io.IOException: A max of 4 sorts can be specified > at > org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452) > at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228) > at > org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at > org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354) > at > org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223) > at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394) > at > org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217) > at > org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437) > at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215) > at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601) > at > org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49) > at > org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809) > at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361) > at > org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305) > at > 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691) > at > org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582) > at > org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143) > at > org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548) > at > org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226) > at > org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180) > at > org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512) > at >