[jira] [Commented] (SOLR-6930) Provide "Circuit Breakers" For Expensive Solr Queries
[ https://issues.apache.org/jira/browse/SOLR-6930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16620993#comment-16620993 ]

Susheel Kumar commented on SOLR-6930:
-------------------------------------

I agree as well. We have been using timeAllowed, but I'll vote for something more sophisticated that can prevent OOMs more predictably.

> Provide "Circuit Breakers" For Expensive Solr Queries
> -----------------------------------------------------
>
> Key: SOLR-6930
> URL: https://issues.apache.org/jira/browse/SOLR-6930
> Project: Solr
> Issue Type: Improvement
> Components: search
> Reporter: Mike Drob
> Priority: Major
>
> Ref: http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/_limiting_memory_usage.html
> ES currently allows operators to configure "circuit breakers" to preemptively
> fail queries that are estimated to be too large, rather than allowing an OOM
> Exception to happen. We might be able to do the same thing.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
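For context on what such a breaker might check, here is a minimal, hypothetical sketch of a heap-based pre-query trip condition. The class name, threshold parameter, and the idea of an estimated-bytes input are assumptions for illustration only, not Solr or Elasticsearch API:

```java
/** Hypothetical pre-query check: trip (reject) a request when its estimated
 *  memory cost would push heap utilization past a configured fraction,
 *  instead of letting it run and risk an OutOfMemoryError. */
public class MemoryCircuitBreaker {
    private final double maxHeapFraction; // e.g. 0.95 = trip above 95% of max heap

    public MemoryCircuitBreaker(double maxHeapFraction) {
        this.maxHeapFraction = maxHeapFraction;
    }

    /** Returns true if the request should be rejected up front. */
    public boolean shouldTrip(long estimatedBytes) {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        long max = rt.maxMemory();
        return used + estimatedBytes > (long) (max * maxHeapFraction);
    }
}
```

The hard part, as in ES, is producing a reasonable per-request cost estimate; the trip check itself is cheap.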
[jira] [Created] (SOLR-12585) Solr fails even ZK quorum has majority
Susheel Kumar created SOLR-12585:
------------------------------------

Summary: Solr fails even ZK quorum has majority
Key: SOLR-12585
URL: https://issues.apache.org/jira/browse/SOLR-12585
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: Server, SolrCloud
Affects Versions: 6.6.2
Reporter: Susheel Kumar

Solr fails to function when one of the ZK quorum nodes becomes inaccessible due to a DNS issue. E.g. below we had a running Solr Cloud cluster, and when one of the nodes became inaccessible due to a DNS issue, Solr stopped functioning even though the other 2 ZK machines were up and had a majority. See the mailing list for more details:
http://lucene.472066.n3.nabble.com/Solr-fails-even-ZK-quorum-has-majority-td4399166.html

e.g.
ping ditsearch001.es.com
ping: cannot resolve ditsearch001.es.com: Unknown host

Caused by: org.apache.solr.common.SolrException: java.net.UnknownHostException: ditsearch001.es.com: Name or service not known
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:171)
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:117)
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:112)
at org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:99)
at
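The failure described above amounts to one unresolvable hostname taking down use of the whole connect string. A hedged sketch of one mitigation idea, filtering unresolvable hosts before building a ZK connect string; the helper class and hostnames are illustrative assumptions, not Solr code:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.ArrayList;
import java.util.List;

public class ZkHostFilter {
    /** Drop "host:port" entries whose hostnames cannot currently be resolved,
     *  keeping the remaining quorum members usable rather than failing outright. */
    public static List<String> resolvable(List<String> hostPorts) {
        List<String> ok = new ArrayList<>();
        for (String hp : hostPorts) {
            String host = hp.split(":")[0];
            try {
                InetAddress.getByName(host); // throws on DNS failure
                ok.add(hp);
            } catch (UnknownHostException e) {
                // skip this host, e.g. "ditsearch001.es.com: Unknown host"
            }
        }
        return ok;
    }
}
```

Whether Solr should do this (versus retrying resolution) is exactly the design question the issue raises; the sketch only shows that the majority can remain reachable.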
[jira] [Updated] (SOLR-10932) install solr service service command fails
[ https://issues.apache.org/jira/browse/SOLR-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Susheel Kumar updated SOLR-10932:
---------------------------------
Attachment: SOLR-10932.patch

Attaching a patch to get rid of the error "Script requires the 'service' command" on the SUSE distribution. Based on Shawn's test results, the "service --help" command works on more of the distributions than "service --version" does. Can we get this committed? It is a simple fix, and with every new release the installation script fails on the SUSE distribution.

> install solr service service command fails
> ------------------------------------------
>
> Key: SOLR-10932
> URL: https://issues.apache.org/jira/browse/SOLR-10932
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 6.6
> Environment: Suse linux
> Reporter: Susheel Kumar
> Priority: Minor
> Labels: easyfix, newbie, patch
> Attachments: SOLR-10932.patch
>
> In the SUSE distribution, the "service --version" command always fails and aborts the
> solr installation, printing the error "Script requires the 'service' command".
> We can fix it by changing "service --version" to the "service --help" command.
>
> Shawn's test results
> ====================
> This is what I get with OS versions that I have access to when running "service --version":
> CentOS 7: service ver. 1.1
> Ubuntu 16: service ver. 0.91-ubuntu1
> Ubuntu 14: service ver. 0.91-ubuntu1
> CentOS 6: service ver. 0.91
> Debian 6: service ver. 0.91-ubuntu1
> Sparc Solaris 10: bash: service: command not found
> ====================
[jira] [Updated] (SOLR-10932) install solr service service command fails
[ https://issues.apache.org/jira/browse/SOLR-10932?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Susheel Kumar updated SOLR-10932:
---------------------------------
Flags: Patch

> install solr service service command fails
> ------------------------------------------
>
> Key: SOLR-10932
> URL: https://issues.apache.org/jira/browse/SOLR-10932
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 6.6
> Environment: Suse linux
> Reporter: Susheel Kumar
> Priority: Minor
> Labels: easyfix, newbie, patch
>
> In the SUSE distribution, the "service --version" command always fails and aborts the
> solr installation, printing the error "Script requires the 'service' command".
> We can fix it by changing "service --version" to the "service --help" command.
>
> Shawn's test results
> ====================
> This is what I get with OS versions that I have access to when running "service --version":
> CentOS 7: service ver. 1.1
> Ubuntu 16: service ver. 0.91-ubuntu1
> Ubuntu 14: service ver. 0.91-ubuntu1
> CentOS 6: service ver. 0.91
> Debian 6: service ver. 0.91-ubuntu1
> Sparc Solaris 10: bash: service: command not found
> ====================
[jira] [Commented] (SOLR-10944) Get expression fails to return EOF tuple
[ https://issues.apache.org/jira/browse/SOLR-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100436#comment-16100436 ]

Susheel Kumar commented on SOLR-10944:
--------------------------------------

Thank you so much, Joel.

> Get expression fails to return EOF tuple
> ----------------------------------------
>
> Key: SOLR-10944
> URL: https://issues.apache.org/jira/browse/SOLR-10944
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: search
> Affects Versions: 6.6
> Reporter: Susheel Kumar
> Priority: Blocker
> Labels: patch
> Fix For: master (8.0), 7.1
> Attachments: SOLR-10944.patch
>
> Below is a simple let expr where search would not find a match and returns 0
> results. In that case, we expect get(a) to show an EOF tuple, while instead it
> throws an exception.
>
> let(a=search(collection1,
>              q=id:9,
>              fl="id,business_email",
>              sort="business_email asc"),
>     get(a)
> )
>
> {
>   "result-set": {
>     "docs": [
>       {
>         "EXCEPTION": "Index: 0, Size: 0",
>         "EOF": true,
>         "RESPONSE_TIME": 8
>       }
>     ]
>   }
> }
[jira] [Commented] (SOLR-10944) Get expression fails to return EOF tuple
[ https://issues.apache.org/jira/browse/SOLR-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16100072#comment-16100072 ]

Susheel Kumar commented on SOLR-10944:
--------------------------------------

Hi Joel, I finally have a complex streaming expression solving my use case, but it all relies on this simple fix. Would you be able to review/commit this patch so that I can look forward to using it in the 7.x release?

Thanks,
Susheel

> Get expression fails to return EOF tuple
> ----------------------------------------
>
> Key: SOLR-10944
> URL: https://issues.apache.org/jira/browse/SOLR-10944
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: search
> Affects Versions: 6.6
> Reporter: Susheel Kumar
> Priority: Blocker
> Labels: patch
> Attachments: SOLR-10944.patch
>
> Below is a simple let expr where search would not find a match and returns 0
> results. In that case, we expect get(a) to show an EOF tuple, while instead it
> throws an exception.
>
> let(a=search(collection1,
>              q=id:9,
>              fl="id,business_email",
>              sort="business_email asc"),
>     get(a)
> )
>
> {
>   "result-set": {
>     "docs": [
>       {
>         "EXCEPTION": "Index: 0, Size: 0",
>         "EOF": true,
>         "RESPONSE_TIME": 8
>       }
>     ]
>   }
> }
[jira] [Commented] (SOLR-10944) Get expression fails to return EOF tuple
[ https://issues.apache.org/jira/browse/SOLR-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082910#comment-16082910 ]

Susheel Kumar commented on SOLR-10944:
--------------------------------------

Hi Joel, can you please review the patch, suggest any changes, and commit it so we have it in the next 7.x release? This is a simple fix, but it is very much required in simple to complex expressions involving get expressions.

Thanks,
Susheel

> Get expression fails to return EOF tuple
> ----------------------------------------
>
> Key: SOLR-10944
> URL: https://issues.apache.org/jira/browse/SOLR-10944
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: search
> Affects Versions: 6.6
> Reporter: Susheel Kumar
> Priority: Blocker
> Labels: patch
> Attachments: SOLR-10944.patch
>
> Below is a simple let expr where search would not find a match and returns 0
> results. In that case, we expect get(a) to show an EOF tuple, while instead it
> throws an exception.
>
> let(a=search(collection1,
>              q=id:9,
>              fl="id,business_email",
>              sort="business_email asc"),
>     get(a)
> )
>
> {
>   "result-set": {
>     "docs": [
>       {
>         "EXCEPTION": "Index: 0, Size: 0",
>         "EOF": true,
>         "RESPONSE_TIME": 8
>       }
>     ]
>   }
> }
[jira] [Created] (SOLR-11017) Add support for unique metric to facet streaming function
Susheel Kumar created SOLR-11017:
------------------------------------

Summary: Add support for unique metric to facet streaming function
Key: SOLR-11017
URL: https://issues.apache.org/jira/browse/SOLR-11017
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: SolrCloud
Affects Versions: 6.6
Reporter: Susheel Kumar

Add support for a unique metric to the facet function, which under the cover utilizes the JSON Facet API. The challenge is to come up with a keyword which can be used for UniqueMetric. Currently "unique" is used for UniqueStream and can't be utilized. Does "uniq" make sense?

...
StreamFactory factory = new StreamFactory()
    .withCollectionZkHost(...)
    .withFunctionName("facet", FacetStream.class)
    .withFunctionName("sum", SumMetric.class)
    .withFunctionName("unique", UniqueStream.class)
    .withFunctionName("unique", UniqueMetric.class)
...
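The naming clash can be modeled in miniature. This toy registry is not the real StreamFactory (which keeps a single mapping per name, so a second registration of "unique" would simply shadow the first); making the collision throw here just makes the problem visible, and shows why a distinct keyword such as the suggested "uniq" avoids it:

```java
import java.util.HashMap;
import java.util.Map;

/** Toy model of a function-name registry: one keyword maps to one class,
 *  so "unique" cannot serve both UniqueStream and UniqueMetric. */
public class FunctionRegistry {
    private final Map<String, Class<?>> functions = new HashMap<>();

    public FunctionRegistry withFunctionName(String name, Class<?> clazz) {
        if (functions.containsKey(name)) {
            // Unlike the real factory, fail loudly on a duplicate keyword.
            throw new IllegalArgumentException("keyword already taken: " + name);
        }
        functions.put(name, clazz);
        return this; // allow chained registration, as in StreamFactory
    }

    public Class<?> lookup(String name) {
        return functions.get(name);
    }
}
```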
[jira] [Commented] (SOLR-10945) Get expression fails to operate on sort expr
[ https://issues.apache.org/jira/browse/SOLR-10945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16068433#comment-16068433 ]

Susheel Kumar commented on SOLR-10945:
--------------------------------------

Hi Joel - I looked at the code to debug this issue, and here is what I found. The two expressions above (...merge(get(a)... and ...merge(sort(get(a)...) are equivalent as a whole, but the one where merge is passed get(a) directly results in an error: a GetStream is passed into merge's init method, and GetStream.getStreamSort() returns null (below), while in the other case a SortStream is passed and its getStreamSort() method returns a proper comparator. Wondering how we can handle this, either by passing a StreamComparator to GetStream (and how), or by doing something in merge to avoid the up-front check. Please share your thoughts.

GetStream
---------
/** Return the stream sort - ie, the order in which records are returned */
public StreamComparator getStreamSort(){
  return null;
}

MergeStream
-----------
private void init(StreamComparator comp, TupleStream ... streams) throws IOException {
  // All streams must be sorted so that comp can be derived
  for(TupleStream stream : streams){
    if(!comp.isDerivedFrom(stream.getStreamSort())){
      throw new IOException("Invalid MergeStream - all substream comparators (sort) must be a superset of this stream's comparator.");
    }
  }
}

> Get expression fails to operate on sort expr
> --------------------------------------------
>
> Key: SOLR-10945
> URL: https://issues.apache.org/jira/browse/SOLR-10945
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: search
> Affects Versions: 6.6
> Reporter: Susheel Kumar
> Priority: Minor
>
> Get expr fails to operate on a variable holding a sort stream and returns an
> "Invalid MergeStream - all substream comparators (sort) must be a superset of
> this stream's comparator." exception tuple.
>
> Below, get is given variables a and b which hold sort exprs, and it fails to work:
>
> let(
>   a=sort(select(tuple(id=3,email="C"),id,email),by="id asc,email asc"),
>   b=sort(select(tuple(id=2,email="B"),id,email),by="id asc,email asc"),
>   c=merge(get(a),get(b),on="id asc,email asc"),
>   get(c)
> )
>
> {
>   "result-set": {
>     "docs": [
>       {
>         "EXCEPTION": "Invalid MergeStream - all substream comparators (sort) must be a superset of this stream's comparator.",
>         "EOF": true
>       }
>     ]
>   }
> }
>
> while the below, with sort outside get, works:
>
> let(
>   a=select(tuple(id=3,email="C"),id,email),
>   b=select(tuple(id=2,email="B"),id,email),
>   c=merge(sort(get(a),by="id asc,email asc"),sort(get(b),by="id asc,email asc"),on="id asc,email asc"),
>   get(c)
> )
>
> {
>   "result-set": {
>     "docs": [
>       { "email": "B", "id": "2" },
>       { "email": "C", "id": "3" },
>       { "EOF": true, "RESPONSE_TIME": 0 }
>     ]
>   }
> }
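One direction raised in the comment above, sketched with simplified stand-ins for the real classes (a String stands in for StreamComparator, and names and shapes are illustrative assumptions, not Solr code): have get() delegate getStreamSort() to the stream stored in the let() variable instead of returning null, so merge()'s comparator check can pass:

```java
/** Simplified stand-in: a stream exposes the sort order of its output.
 *  The real interface returns a StreamComparator; a String suffices here. */
interface TupleStream {
    String getStreamSort();
}

/** Stand-in for a stream whose output order is known, e.g. sort(...). */
class SortStream implements TupleStream {
    private final String sort;
    SortStream(String sort) { this.sort = sort; }
    public String getStreamSort() { return sort; }
}

/** get(a): delegate to the underlying stream rather than returning null,
 *  so a downstream merge() can derive its comparator. */
class GetStream implements TupleStream {
    private final TupleStream underlying;
    GetStream(TupleStream underlying) { this.underlying = underlying; }
    public String getStreamSort() {
        return underlying == null ? null : underlying.getStreamSort();
    }
}
```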
[jira] [Updated] (SOLR-10944) Get expression fails to return EOF tuple
[ https://issues.apache.org/jira/browse/SOLR-10944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Susheel Kumar updated SOLR-10944:
---------------------------------
Attachment: SOLR-10944.patch

Hello, attached is the patch. The line below causes the "Index: 0, Size: 0" exception: when get opens the stream and the list variable "l" has no elements, indexing into it throws. Added a test as well. Please let me know if this needs to be handled differently.

Bug
===
if(l.get(0) instanceof Tuple)

Fix
===
Simply assign the iterator instead of checking up front; the later read will check for EOF:
tupleIterator = l.iterator();

> Get expression fails to return EOF tuple
> ----------------------------------------
>
> Key: SOLR-10944
> URL: https://issues.apache.org/jira/browse/SOLR-10944
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: search
> Affects Versions: 6.6
> Reporter: Susheel Kumar
> Priority: Blocker
> Labels: patch
> Attachments: SOLR-10944.patch
>
> Below is a simple let expr where search would not find a match and returns 0
> results. In that case, we expect get(a) to show an EOF tuple, while instead it
> throws an exception.
>
> let(a=search(collection1,
>              q=id:9,
>              fl="id,business_email",
>              sort="business_email asc"),
>     get(a)
> )
>
> {
>   "result-set": {
>     "docs": [
>       {
>         "EXCEPTION": "Index: 0, Size: 0",
>         "EOF": true,
>         "RESPONSE_TIME": 8
>       }
>     ]
>   }
> }
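A simplified model of the fix described in the patch note, with String standing in for Tuple; class and method names are illustrative, not the actual GetStream code:

```java
import java.util.Iterator;
import java.util.List;

/** Model of the fix: never index into the backing list up front
 *  (the buggy version did l.get(0), throwing on an empty list);
 *  just take an iterator and emit an EOF marker when it is exhausted. */
public class GetStreamModel {
    private Iterator<String> it;

    public void open(List<String> tuples) {
        it = tuples.iterator(); // safe even when the list is empty
    }

    public String read() {
        return it.hasNext() ? it.next() : "EOF"; // stands in for the EOF tuple
    }
}
```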
[jira] [Created] (SOLR-10945) Get expression fails to operate on sort expr
Susheel Kumar created SOLR-10945:
------------------------------------

Summary: Get expression fails to operate on sort expr
Key: SOLR-10945
URL: https://issues.apache.org/jira/browse/SOLR-10945
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: search
Affects Versions: 6.6
Reporter: Susheel Kumar
Priority: Minor

Get expr fails to operate on a variable holding a sort stream and returns an "Invalid MergeStream - all substream comparators (sort) must be a superset of this stream's comparator." exception tuple.

Below, get is given variables a and b which hold sort exprs, and it fails to work:

let(
  a=sort(select(tuple(id=3,email="C"),id,email),by="id asc,email asc"),
  b=sort(select(tuple(id=2,email="B"),id,email),by="id asc,email asc"),
  c=merge(get(a),get(b),on="id asc,email asc"),
  get(c)
)

{
  "result-set": {
    "docs": [
      {
        "EXCEPTION": "Invalid MergeStream - all substream comparators (sort) must be a superset of this stream's comparator.",
        "EOF": true
      }
    ]
  }
}

while the below, with sort outside get, works:

let(
  a=select(tuple(id=3,email="C"),id,email),
  b=select(tuple(id=2,email="B"),id,email),
  c=merge(sort(get(a),by="id asc,email asc"),sort(get(b),by="id asc,email asc"),on="id asc,email asc"),
  get(c)
)

{
  "result-set": {
    "docs": [
      { "email": "B", "id": "2" },
      { "email": "C", "id": "3" },
      { "EOF": true, "RESPONSE_TIME": 0 }
    ]
  }
}
[jira] [Created] (SOLR-10944) Get expression fails to return EOF tuple
Susheel Kumar created SOLR-10944:
------------------------------------

Summary: Get expression fails to return EOF tuple
Key: SOLR-10944
URL: https://issues.apache.org/jira/browse/SOLR-10944
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: search
Affects Versions: 6.6
Reporter: Susheel Kumar
Priority: Blocker

Below is a simple let expr where search would not find a match and returns 0 results. In that case, we expect get(a) to show an EOF tuple, while instead it throws an exception.

let(a=search(collection1,
             q=id:9,
             fl="id,business_email",
             sort="business_email asc"),
    get(a)
)

{
  "result-set": {
    "docs": [
      {
        "EXCEPTION": "Index: 0, Size: 0",
        "EOF": true,
        "RESPONSE_TIME": 8
      }
    ]
  }
}
[jira] [Commented] (SOLR-10933) LetStream variables are not evaluated in proper order
[ https://issues.apache.org/jira/browse/SOLR-10933?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16057631#comment-16057631 ]

Susheel Kumar commented on SOLR-10933:
--------------------------------------

You saved my day, Joel. I was struggling with this, and after noticing this issue I changed my variables to single letters and the problem disappeared.

let(a=fetch(collection1,having(rollup(over=email,
                                      count(email),
                                      select(search(collection1, q=*:*, fl="id,business_email", sort="business_email asc"),
                                             id,
                                             business_email as email)),
                               eq(count(email),1)),
            fl="id,business_email as email",
            on="email=business_email"),
    b=fetch(collection1,having(rollup(over=email,
                                      count(email),
                                      select(search(collection1, q=*:*, fl="id,personal_email", sort="personal_email asc"),
                                             id,
                                             personal_email as email)),
                               eq(count(email),1)),
            fl="id,personal_email as email",
            on="email=personal_email"),
    c=hashJoin(get(a),hashed=get(b),on="email"),
    d=hashJoin(get(b),hashed=get(a),on="email"),
    e=select(get(a),id,email),
    get(e)
)

> LetStream variables are not evaluated in proper order
> -----------------------------------------------------
>
> Key: SOLR-10933
> URL: https://issues.apache.org/jira/browse/SOLR-10933
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Reporter: Joel Bernstein
> Assignee: Joel Bernstein
> Fix For: master (7.0), 6.7
> Attachments: SOLR-10933.patch
>
> The LetStream is currently using a HashMap to hold its variable mappings.
> This is problematic because the ordering of the variables will be lost as
> they are evaluated. The test cases currently pass because single-letter
> variables in ascending order are used, which by luck caused the variables to
> be evaluated in order.
>
> There is a very simple fix for this, which is to use a LinkedHashMap to hold
> the variables, ensuring they are evaluated in the order they were received.
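The one-class swap behind the fix can be illustrated standalone; the sketch below (names are hypothetical) shows that LinkedHashMap preserves declaration order, where HashMap gives no such guarantee:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

/** LinkedHashMap iterates in insertion order, so code that evaluates
 *  variables by walking the map sees them in the order the let()
 *  expression declared them, regardless of the variable names chosen. */
public class VarOrder {
    public static List<String> declaredOrder(String... names) {
        Map<String, Integer> vars = new LinkedHashMap<>();
        for (int i = 0; i < names.length; i++) {
            vars.put(names[i], i); // the value stands in for the evaluated stream
        }
        return new ArrayList<>(vars.keySet());
    }
}
```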
[jira] [Created] (SOLR-10932) install solr service service command fails
Susheel Kumar created SOLR-10932:
------------------------------------

Summary: install solr service service command fails
Key: SOLR-10932
URL: https://issues.apache.org/jira/browse/SOLR-10932
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.6
Environment: Suse linux
Reporter: Susheel Kumar
Priority: Minor

In the SUSE distribution, the "service --version" command always fails and aborts the solr installation, printing the error "Script requires the 'service' command". We can fix it by changing "service --version" to the "service --help" command.

Shawn's test results
====================
This is what I get with OS versions that I have access to when running "service --version":
CentOS 7: service ver. 1.1
Ubuntu 16: service ver. 0.91-ubuntu1
Ubuntu 14: service ver. 0.91-ubuntu1
CentOS 6: service ver. 0.91
Debian 6: service ver. 0.91-ubuntu1
Sparc Solaris 10: bash: service: command not found
====================
[jira] [Created] (SOLR-10890) Parallel SQL - column not found error
Susheel Kumar created SOLR-10890:
------------------------------------

Summary: Parallel SQL - column not found error
Key: SOLR-10890
URL: https://issues.apache.org/jira/browse/SOLR-10890
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: Parallel SQL
Affects Versions: 6.6
Reporter: Susheel Kumar
Priority: Minor

Parallel SQL throws a "column not found" error when the query hits multiple shards and one of the shards doesn't have any documents yet.

Sample error
============
{"result-set":{"docs":[{"EXCEPTION":"Failed to execute sqlQuery 'SELECT sr_sv_userFirstName as firstName, sr_sv_userLastName as lastName FROM collection1 ORDEr BY dv_sv_userLastName LIMIT 15' against JDBC connection 'jdbc:calcitesolr:'.\nError while executing SQL \"SELECT sr_sv_userFirstName as firstName, sr_sv_userLastName as lastName FROM collection1 ORDEr BY dv_sv_userLastName LIMIT 15\": From line 1, column 9 to line 1, column 27: Column 'sr_sv_userFirstName' not found in any table","EOF":true,"RESPONSE_TIME":87}]}}
[jira] [Commented] (SOLR-10086) Add Streaming Expression for Kafka Streams
[ https://issues.apache.org/jira/browse/SOLR-10086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15850511#comment-15850511 ]

Susheel Kumar commented on SOLR-10086:
--------------------------------------

Thanks, Kevin, for putting the example and steps together (SOLR-10087). Are we looking to put the code for these custom streaming expressions, especially common ones like Kafka, under the Solr repo (or contrib, etc.), or is it up to the users to maintain it?

> Add Streaming Expression for Kafka Streams
> ------------------------------------------
>
> Key: SOLR-10086
> URL: https://issues.apache.org/jira/browse/SOLR-10086
> Project: Solr
> Issue Type: New Feature
> Security Level: Public (Default Security Level. Issues are Public)
> Components: SolrJ
> Reporter: Susheel Kumar
> Priority: Minor
>
> This has been asked for: having SolrCloud pull data from a Kafka topic
> periodically using DataImport Handler. Adding streaming expression support to
> pull data from Kafka would be a good feature to have.
[jira] [Created] (SOLR-10086) Add Streaming Expression for Kafka Streams
Susheel Kumar created SOLR-10086:
------------------------------------

Summary: Add Streaming Expression for Kafka Streams
Key: SOLR-10086
URL: https://issues.apache.org/jira/browse/SOLR-10086
Project: Solr
Issue Type: New Feature
Security Level: Public (Default Security Level. Issues are Public)
Components: SolrJ
Reporter: Susheel Kumar
Priority: Minor

This has been asked for: having SolrCloud pull data from a Kafka topic periodically using DataImport Handler. Adding streaming expression support to pull data from Kafka would be a good feature to have.
[jira] [Commented] (SOLR-9399) Delete requests do not send credentials & fails for Basic Authentication
[ https://issues.apache.org/jira/browse/SOLR-9399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15592077#comment-15592077 ]

Susheel Kumar commented on SOLR-9399:
-------------------------------------

I recall a similar experience, but let me look again after the test has been refactored to make it fail first.

> Delete requests do not send credentials & fails for Basic Authentication
> -------------------------------------------------------------------------
>
> Key: SOLR-9399
> URL: https://issues.apache.org/jira/browse/SOLR-9399
> Project: Solr
> Issue Type: Bug
> Security Level: Public (Default Security Level. Issues are Public)
> Components: SolrJ
> Affects Versions: 6.0, 6.0.1, 6.x
> Reporter: Susheel Kumar
> Labels: security
>
> The getRoutes(..) func of UpdateRequest does not pass credentials to
> LBHttpSolrClient when deleteById is set, while for updates it passes the
> credentials. See the code snippet below:
>
> if (deleteById != null) {
>   Iterator<Map.Entry<String, Map<String, Object>>> entries = deleteById.entrySet().iterator();
>   while (entries.hasNext()) {
>     Map.Entry<String, Map<String, Object>> entry = entries.next();
>     String deleteId = entry.getKey();
>     Map<String, Object> map = entry.getValue();
>     Long version = null;
>     if (map != null) {
>       version = (Long) map.get(VER);
>     }
>     Slice slice = router.getTargetSlice(deleteId, null, null, null, col);
>     if (slice == null) {
>       return null;
>     }
>     List<String> urls = urlMap.get(slice.getName());
>     if (urls == null) {
>       return null;
>     }
>     String leaderUrl = urls.get(0);
>     LBHttpSolrClient.Req request = routes.get(leaderUrl);
>     if (request != null) {
>       UpdateRequest urequest = (UpdateRequest) request.getRequest();
>       urequest.deleteById(deleteId, version);
>     } else {
>       UpdateRequest urequest = new UpdateRequest();
>       urequest.setParams(params);
>       urequest.deleteById(deleteId, version);
>       urequest.setCommitWithin(getCommitWithin());
>       request = new LBHttpSolrClient.Req(urequest, urls);
>       routes.put(leaderUrl, request);
>     }
>   }
> }
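The fix direction can be modeled without SolrJ; this toy (class and field names are stand-ins, not the SolrJ API) shows the difference between deriving the per-route delete request with and without carrying the parent request's credentials, which is the gap the snippet above exhibits when it copies only params:

```java
/** Toy request carrying optional basic-auth credentials, mimicking the
 *  shape of the bug: a derived per-route request must inherit auth. */
public class RoutedRequest {
    String user;
    String password;

    /** The buggy path copied only params; the fixed path also copies auth. */
    public RoutedRequest deriveForRoute(boolean copyCredentials) {
        RoutedRequest child = new RoutedRequest();
        if (copyCredentials) {
            child.user = this.user;
            child.password = this.password;
        }
        return child;
    }
}
```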
[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15539501#comment-15539501 ]

Susheel Kumar commented on SOLR-8146:
-------------------------------------

Thank you, Noble. I am going through the changes and will get back to you.

> Allowing SolrJ CloudSolrClient to have preferred replica for query/read
> ------------------------------------------------------------------------
>
> Key: SOLR-8146
> URL: https://issues.apache.org/jira/browse/SOLR-8146
> Project: Solr
> Issue Type: New Feature
> Components: clients - java
> Affects Versions: 5.3
> Reporter: Arcadius Ahouansou
> Attachments: SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch, SOLR-8146.patch
>
> h2. Background
>
> Currently, the CloudSolrClient randomly picks a replica to query. This is done
> by shuffling the list of live URLs to query, then picking the first item from
> the list. This ticket is to allow more flexibility and control, to some extent,
> over which URLs will be picked up for queries. Note that this is for queries
> only and would not affect update/delete/admin operations.
>
> h2. Implementation
>
> The current patch uses a regex pattern and moves to the top of the list of URLs
> only those matching the given regex specified by the system property
> {code}solr.preferredQueryNodePattern{code}
> Initially, I thought it may be good to have Solr nodes tagged with a string
> pattern (snitch?) and use that pattern for matching the URLs. Any comment,
> recommendation or feedback would be appreciated.
>
> h2. Use Cases
>
> There are many cases where the ability to choose the node where queries go can
> be very handy:
>
> h3. Special node for manual user queries and analytics
>
> One may have a SolrCloud cluster where every node hosts the same set of
> collections, with:
> - multiple large SolrCloud nodes (L) used for production apps, and
> - 1 small node (S) in the same cluster with less ram/cpu used only for manual
>   user queries, data export and other production issue investigation.
> This ticket would allow configuring the applications using SolrJ to query only
> the (L) nodes. This use case is similar to the one described in SOLR-5501
> raised by [~manuel lenormand]
>
> h3. Minimizing network traffic
>
> For simplicity, let's say that we have a SolrCloud cluster deployed on 2 (or N)
> separate racks: rack1 and rack2. On each rack, we have a set of SolrCloud VMs
> as well as a couple of client VMs querying solr using SolrJ. All solr nodes are
> identical and have the same number of collections.
> What we would like to achieve is:
> - clients on rack1 will by preference query only SolrCloud nodes on rack1, and
> - clients on rack2 will by preference query only SolrCloud nodes on rack2.
> - Cross-rack reads will happen if and only if one of the racks has no available
>   Solr node to serve a request.
> In other words, we want read operations to be local to a rack whenever
> possible. Note that write/update/delete/admin operations should not be
> affected. Note that in our use case, we have a cross-DC deployment. So, replace
> rack1/rack2 by DC1/DC2.
> Any comment would be very appreciated.
> Thanks.
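The implementation described in the ticket ("moves to the top of the list of URLs only those matching the given regex") can be sketched standalone; the class and method names are illustrative assumptions, not the actual patch:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

/** Sketch of the described approach: stable-partition the shuffled replica
 *  URLs so those matching the preferred-node regex come first. Non-matching
 *  URLs remain as fallbacks, preserving their relative order. */
public class PreferredReplicas {
    public static List<String> preferMatching(List<String> urls, String regex) {
        Pattern p = Pattern.compile(regex);
        List<String> preferred = new ArrayList<>();
        List<String> rest = new ArrayList<>();
        for (String u : urls) {
            (p.matcher(u).find() ? preferred : rest).add(u);
        }
        preferred.addAll(rest);
        return preferred;
    }
}
```

Because the non-matching URLs stay on the list, cross-rack reads remain possible when no preferred node is available, matching the fallback behavior the ticket asks for.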
[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15527951#comment-15527951 ] Susheel Kumar commented on SOLR-8146:
- Hello Noble, Can you please review the pull request and provide feedback on the approach of implementing routingRule, so that I can move forward with it accordingly? Thanks, Susheel
[jira] [Commented] (SOLR-9188) BlockUnknown property makes inter-node communication impossible
[ https://issues.apache.org/jira/browse/SOLR-9188?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15440819#comment-15440819 ] Susheel Kumar commented on SOLR-9188:
- Yes, Jan. The cluster in my case works fine without any issue, and in fact we moved up to three environments (development, functional & performance) with QA certification and didn't notice any issue until one developer noticed these error messages in the logs. Removing blockUnknown doesn't help, as it then allows anyone to access Solr directly without being challenged for user/password. The Solr cluster in our case has multiple shards. Please let me know if I can provide any more details. Thanks, Susheel
> BlockUnknown property makes inter-node communication impossible
> ---
>
> Key: SOLR-9188
> URL: https://issues.apache.org/jira/browse/SOLR-9188
> Project: Solr
> Issue Type: Bug
> Components: SolrCloud
> Affects Versions: 6.0
> Reporter: Piotr Tempes
> Priority: Critical
> Labels: BasicAuth, Security
> Attachments: solr9188-errorlog.txt
>
> When I set up my Solr Cloud without the blockUnknown property, it works as expected. When I want to block non-authenticated requests, I get the following stacktrace during startup (see attached file).
[jira] [Created] (SOLR-9399) Delete requests do not send credentials & fail for Basic Authentication
Susheel Kumar created SOLR-9399:
---
Summary: Delete requests do not send credentials & fail for Basic Authentication
Key: SOLR-9399
URL: https://issues.apache.org/jira/browse/SOLR-9399
Project: Solr
Issue Type: Bug
Security Level: Public (Default Security Level. Issues are Public)
Components: SolrJ
Affects Versions: 6.0.1, 6.0, 6.x
Reporter: Susheel Kumar

The getRoutes(..) method of UpdateRequest does not pass credentials to LBHttpSolrClient when deleteById is set, while for updates it does pass the credentials. See the code snippet below:

{code}
if (deleteById != null) {
  Iterator<Map.Entry<String,Map<String,Object>>> entries = deleteById.entrySet().iterator();
  while (entries.hasNext()) {
    Map.Entry<String,Map<String,Object>> entry = entries.next();
    String deleteId = entry.getKey();
    Map<String,Object> map = entry.getValue();
    Long version = null;
    if (map != null) {
      version = (Long) map.get(VER);
    }
    Slice slice = router.getTargetSlice(deleteId, null, null, null, col);
    if (slice == null) {
      return null;
    }
    List<String> urls = urlMap.get(slice.getName());
    if (urls == null) {
      return null;
    }
    String leaderUrl = urls.get(0);
    LBHttpSolrClient.Req request = routes.get(leaderUrl);
    if (request != null) {
      UpdateRequest urequest = (UpdateRequest) request.getRequest();
      urequest.deleteById(deleteId, version);
    } else {
      UpdateRequest urequest = new UpdateRequest();
      urequest.setParams(params);
      urequest.deleteById(deleteId, version);
      urequest.setCommitWithin(getCommitWithin());
      request = new LBHttpSolrClient.Req(urequest, urls);
      routes.put(leaderUrl, request);
    }
  }
}
{code}
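For illustration, the per-leader routing loop above can be reduced to a self-contained sketch. The classes and the "basicauth" parameter name here are hypothetical stand-ins (not the real SolrJ UpdateRequest / LBHttpSolrClient.Req / ModifiableSolrParams); the point is the step the delete path must not skip: copying the original request's params, which carry the credentials, onto every newly created sub-request.

```java
import java.util.HashMap;
import java.util.Map;

public class DeleteRouting {

    // Hypothetical stand-in for a per-leader sub-request.
    static class SubRequest {
        final Map<String, String> params = new HashMap<>();
    }

    // Group delete ids by leader URL, creating one sub-request per leader.
    // The putAll(...) call is the credential-propagation step: leaving it
    // out sends the sub-request without the auth params.
    static Map<String, SubRequest> route(Map<String, String> idToLeaderUrl,
                                         Map<String, String> originalParams) {
        Map<String, SubRequest> routes = new HashMap<>();
        for (Map.Entry<String, String> e : idToLeaderUrl.entrySet()) {
            SubRequest sub = routes.computeIfAbsent(e.getValue(), k -> new SubRequest());
            sub.params.putAll(originalParams); // propagate credentials to the sub-request
        }
        return routes;
    }

    public static void main(String[] args) {
        Map<String, String> original = new HashMap<>();
        original.put("basicauth", "user:pass");     // hypothetical credential param
        Map<String, String> ids = new HashMap<>();
        ids.put("doc1", "http://leader1:8983/solr/");
        Map<String, SubRequest> routes = route(ids, original);
        System.out.println(routes.get("http://leader1:8983/solr/").params.get("basicauth"));
    }
}
```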
[jira] [Created] (SOLR-9370) Add sample code snippet to Confluence documentation for Basic Authentication
Susheel Kumar created SOLR-9370:
---
Summary: Add sample code snippet to Confluence documentation for Basic Authentication
Key: SOLR-9370
URL: https://issues.apache.org/jira/browse/SOLR-9370
Project: Solr
Issue Type: Task
Security Level: Public (Default Security Level. Issues are Public)
Components: documentation
Reporter: Susheel Kumar
Priority: Minor

Please add the code snippet below under "Using BasicAuth with SolrJ", as the current snippet doesn't show how basic authentication can be set when querying.

How to set credentials when querying using SolrJ - Basic Authentication

{code}
SolrQuery query = new SolrQuery();
query.setQuery("*:*");
// Do any other query setup needed.
SolrRequest req = new QueryRequest(query);
req.setBasicAuthCredentials(userName, password);
QueryResponse rsp = req.process(solrClient, collection);
{code}
[jira] [Comment Edited] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382761#comment-15382761 ] Susheel Kumar edited comment on SOLR-8146 at 7/18/16 6:14 PM:
-- Thanks, Paul. I like the routingRule terminology better than preferredNodes. The current rules like cores, freeDisk, host, etc. don't include "rule" in their names, so I wanted to double-check that the name "routingRule" is okay; there is also a similar parameter name _route_ for routing keys: https://cwiki.apache.org/confluence/display/solr/Advanced+Distributed+Request+Options. Hope these names all fit together to avoid any ambiguity.
[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382761#comment-15382761 ] Susheel Kumar commented on SOLR-8146:
- Thanks, Paul. I like the routingRule terminology better than preferredNodes. The current rules like cores, freeDisk, host, etc. don't include "rule" in their names, so I wanted to double-check that the name "routingRule" is okay; there is also a similar parameter name _route_ for routing keys: https://cwiki.apache.org/confluence/display/solr/Advanced+Distributed+Request+Options. Hope these names all fit together to avoid any ambiguity.
[jira] [Comment Edited] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382390#comment-15382390 ] Susheel Kumar edited comment on SOLR-8146 at 7/18/16 2:54 PM:
-- Thanks, Noble and Arcadius, for clarifying the status of SOLR-8146. Hello Noble, I can start working on the patch. I have a question to clarify:
1. For the multi-data-center scenario, the preferredNodes rule may specify different values/ranges depending on which data center the SolrJ client is querying from? So do you see the preferredNodes rule being used during query operations, like http://localhost:8983/solr/collection1/select?rule=preferredNodes=ip_1:192,ip_2:93 ? The current Snitches design/implementation is only used in the Admin Collections API (https://cwiki.apache.org/confluence/display/solr/Collections+API) for replica placement, so this would be another usage of Snitches, extending them to query operations.
Thanks, Susheel
[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15382390#comment-15382390 ] Susheel Kumar commented on SOLR-8146:
- Thanks, Noble and Arcadius, for clarifying the status of SOLR-8146. Hello Noble, I can start working on the patch. I have a question to clarify:
1. For the multi-data-center scenario, the preferredNodes rule may specify different values/ranges depending on which data center the SolrJ client is querying from? So do you see the preferredNodes rule being used during query operations, like http://localhost:8983/solr/collection1/select?rule=preferredNodes=ip_1:192,ip_2:93 ? The current Snitches design/implementation is only used in the Admin Collections API (https://cwiki.apache.org/confluence/display/solr/Collections+API) for replica placement, so this would be another usage of Snitches, extending them to query operations.
Thanks, Susheel
[jira] [Commented] (SOLR-9283) Documentation on using preferredNodes-ImplicitSnitch for Multi Data Center scenario
[ https://issues.apache.org/jira/browse/SOLR-9283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15366619#comment-15366619 ] Susheel Kumar commented on SOLR-9283:
- Thank you, Noble, for your response. Let me know if anyone is working on completing the patch, or I can give the patch a shot if you can provide some direction/design. Thanks, Susheel
> Documentation on using preferredNodes-ImplicitSnitch for Multi Data Center scenario
> ---
>
> Key: SOLR-9283
> URL: https://issues.apache.org/jira/browse/SOLR-9283
> Project: Solr
> Issue Type: Improvement
> Security Level: Public (Default Security Level. Issues are Public)
> Affects Versions: 6.0
> Reporter: Susheel Kumar
> Priority: Blocker
> Labels: documentation
>
> SOLR-8146 and SOLR-8522 have been worked on to allow the SolrJ CloudSolrClient to have a preferred replica for query/read, more specifically for the multiple-data-center scenario, but there is no/unclear documentation on how exactly this feature can be used and what steps need to be taken on the SolrJ client side and as part of the collection configuration/state.
> I am offering to help create the required documentation if a little more detail can be provided.
[jira] [Commented] (SOLR-8146) Allowing SolrJ CloudSolrClient to have preferred replica for query/read
[ https://issues.apache.org/jira/browse/SOLR-8146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15364991#comment-15364991 ] Susheel Kumar commented on SOLR-8146:
- Hello Noble, Arcadius, Can you please describe how exactly ImplicitSnitch can be used for preferredNodes, and whether there is anything to be done on the SolrJ client to use preferredNodes for querying replicas? I have created JIRA https://issues.apache.org/jira/browse/SOLR-9283 to document the exact steps/details for anyone to refer to. Thanks, Susheel
[jira] [Created] (SOLR-9283) Documentation on using preferredNodes-ImplicitSnitch for Multi Data Center scenario
Susheel Kumar created SOLR-9283:
---
Summary: Documentation on using preferredNodes-ImplicitSnitch for Multi Data Center scenario
Key: SOLR-9283
URL: https://issues.apache.org/jira/browse/SOLR-9283
Project: Solr
Issue Type: Improvement
Security Level: Public (Default Security Level. Issues are Public)
Affects Versions: 6.0
Reporter: Susheel Kumar
Priority: Blocker

SOLR-8146 and SOLR-8522 have been worked on to allow the SolrJ CloudSolrClient to have a preferred replica for query/read, more specifically for the multiple-data-center scenario, but there is no/unclear documentation on how exactly this feature can be used and what steps need to be taken on the SolrJ client side and as part of the collection configuration/state.
I am offering to help create the required documentation if a little more detail can be provided.
[jira] [Created] (SOLR-8686) Install Script hard codes the SOLR_ENV path in /etc/init.d/solr
Susheel Kumar created SOLR-8686:
---
Summary: Install script hard-codes the SOLR_ENV path in /etc/init.d/solr
Key: SOLR-8686
URL: https://issues.apache.org/jira/browse/SOLR-8686
Project: Solr
Issue Type: Bug
Components: scripts and tools
Affects Versions: 5.4.1
Reporter: Susheel Kumar

Up to Solr 5.3.1 (as far as I am aware), the install script would set the correct SOLR_ENV path in /etc/init.d/solr, which is passed as -d "Directory for live / writable Solr files...", but with Solr 5.4.1 I see it always sets it to /etc/default/solr.in.sh. Below is a diff snippet of install_solr_service.sh, 5.3.1 vs 5.4.1:

{code}
< sed_expr1="s#SOLR_INSTALL_DIR=.*#SOLR_INSTALL_DIR=$SOLR_EXTRACT_DIR/$SOLR_SERVICE#"
< sed_expr2="s#SOLR_ENV=.*#SOLR_ENV=$SOLR_VAR_DIR/solr.in.sh#"
< sed_expr3="s#RUNAS=.*#RUNAS=$SOLR_USER#"
---
> sed_expr1="s#SOLR_INSTALL_DIR=.*#SOLR_INSTALL_DIR=\"$SOLR_EXTRACT_DIR/$SOLR_SERVICE\"#"
> sed_expr2="s#SOLR_ENV=.*#SOLR_ENV=\"/etc/default/$SOLR_SERVICE.in.sh\"#"
> sed_expr3="s#RUNAS=.*#RUNAS=\"$SOLR_USER\"#"
{code}
[jira] [Created] (SOLR-8592) Create collection alias from New UI doesn't actually work
Susheel Kumar created SOLR-8592: --- Summary: Create collection alias from New UI doesn't actually work Key: SOLR-8592 URL: https://issues.apache.org/jira/browse/SOLR-8592 Project: Solr Issue Type: Bug Environment: Solr Dashboard New UI shipped with 5.4.0 Reporter: Susheel Kumar Priority: Minor The new UI shipped with 5.4.0 allows collection aliases to be created from the UI, and creation appears to succeed, but when you try to use any alias created through the UI (http://:8983/solr/index.html#/) it gives an error: "error": { "msg": "Could not find collection : [object Object]", "code": 400 } Doing the same operation through the admin/collections API actually works.
[jira] [Commented] (SOLR-7986) JDBC Driver for SQL Interface
[ https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968384#comment-14968384 ] Susheel Kumar commented on SOLR-7986: - Hi Joel, Created SOLR-8184 and attached the patch with tests and linked to SOLR-8125. Thanks, Susheel On Tue, Oct 20, 2015 at 9:15 PM, Joel Bernstein (JIRA) > JDBC Driver for SQL Interface > - > > Key: SOLR-7986 > URL: https://issues.apache.org/jira/browse/SOLR-7986 > Project: Solr > Issue Type: New Feature > Components: clients - java >Affects Versions: Trunk >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7986-SPI.patch, SOLR-7986.patch, SOLR-7986.patch, > SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, > SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch > > > This ticket is to create a JDBC Driver (thin client) for the new SQL > interface (SOLR-7560). As part of this ticket a driver will be added to the > Solrj libary under the package: *org.apache.solr.client.solrj.io.sql* > Initial implementation will include basic *Driver*, *Connection*, *Statement* > and *ResultSet* implementations. > Future releases can build on this implementation to support a wide range of > JDBC clients and tools. 
> *Syntax using parallel Map/Reduce for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "map_reduce");
> props.put("numWorkers", "10");
> Connection con = DriverManager.getConnection("jdbc:solr://<zkHost>?collection=<collection>", props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a having sum(b) > 100");
> while(rs.next()) {
>     String a = rs.getString("a");
>     double sumB = rs.getDouble("sum(b)");
> }
> {code}
> *Syntax using JSON facet API for aggregations*:
> {code}
> Properties props = new Properties();
> props.put("aggregationMode", "facet");
> Connection con = DriverManager.getConnection("jdbc:solr://<zkHost>?collection=<collection>", props);
> Statement stmt = con.createStatement();
> ResultSet rs = stmt.executeQuery("select a, sum(b) from tablex group by a having sum(b) > 100");
> while(rs.next()) {
>     String a = rs.getString("a");
>     double sumB = rs.getDouble("sum(b)");
> }
> {code}
[jira] [Commented] (SOLR-7986) JDBC Driver for SQL Interface
[ https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968382#comment-14968382 ] Susheel Kumar commented on SOLR-7986: - Hi Kevin, I have created SOLR-8184 for the negative tests, and I see some commonality between the tests in 8179 & 8184. Interestingly, the missing-ZooKeeper test does not pass with either patch. Thanks, Susheel On Wed, Oct 21, 2015 at 12:57 PM, Kevin Risden (JIRA) > JDBC Driver for SQL Interface > - > > Key: SOLR-7986 > URL: https://issues.apache.org/jira/browse/SOLR-7986 > Project: Solr > Issue Type: New Feature > Components: clients - java >Affects Versions: Trunk >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7986-SPI.patch, SOLR-7986.patch, SOLR-7986.patch, > SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch, > SOLR-7986.patch, SOLR-7986.patch, SOLR-7986.patch > > > This ticket is to create a JDBC Driver (thin client) for the new SQL > interface (SOLR-7560). As part of this ticket a driver will be added to the > Solrj library under the package: *org.apache.solr.client.solrj.io.sql* > Initial implementation will include basic *Driver*, *Connection*, *Statement* > and *ResultSet* implementations. > Future releases can build on this implementation to support a wide range of > JDBC clients and tools.
[jira] [Commented] (SOLR-8184) Negative tests for JDBC Connection String
[ https://issues.apache.org/jira/browse/SOLR-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14968377#comment-14968377 ] Susheel Kumar commented on SOLR-8184: - Some observations: the test "testConnectionStringWithMissingZKHost" throws SolrException / TimeoutException rather than SQLException, and similarly the test "testConnectionStringWithWrongCollection" goes through several retries before it fails. Is that the right behavior? > Negative tests for JDBC Connection String > - > > Key: SOLR-8184 > URL: https://issues.apache.org/jira/browse/SOLR-8184 > Project: Solr > Issue Type: Test > Environment: Trunk >Reporter: Susheel Kumar >Priority: Minor > Attachments: SOLR-8184.patch > > > Ticket to track negative tests for JDBC connection string SOLR-7986
[jira] [Updated] (SOLR-8184) Negative tests for JDBC Connection String
[ https://issues.apache.org/jira/browse/SOLR-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Susheel Kumar updated SOLR-8184: Attachment: SOLR-8184.patch Negative tests for JDBC Connection String > Negative tests for JDBC Connection String > - > > Key: SOLR-8184 > URL: https://issues.apache.org/jira/browse/SOLR-8184 > Project: Solr > Issue Type: Test > Environment: Trunk >Reporter: Susheel Kumar >Priority: Minor > Attachments: SOLR-8184.patch > > > Ticket to track negative tests for JDBC connection string SOLR-7986 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8184) Negative tests for JDBC Connection String
[ https://issues.apache.org/jira/browse/SOLR-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Susheel Kumar updated SOLR-8184: Flags: Patch > Negative tests for JDBC Connection String > - > > Key: SOLR-8184 > URL: https://issues.apache.org/jira/browse/SOLR-8184 > Project: Solr > Issue Type: Test > Environment: Trunk >Reporter: Susheel Kumar >Priority: Minor > > Ticket to track negative tests for JDBC connection string SOLR-7986 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Updated] (SOLR-8184) Negative tests for JDBC Connection String
[ https://issues.apache.org/jira/browse/SOLR-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Susheel Kumar updated SOLR-8184: Environment: Trunk (was: Trunl) > Negative tests for JDBC Connection String > - > > Key: SOLR-8184 > URL: https://issues.apache.org/jira/browse/SOLR-8184 > Project: Solr > Issue Type: Test > Environment: Trunk >Reporter: Susheel Kumar >Priority: Minor > > Ticket to track negative tests for JDBC connection string SOLR-7986 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8184) Negative tests for JDBC Connection String
Susheel Kumar created SOLR-8184: --- Summary: Negative tests for JDBC Connection String Key: SOLR-8184 URL: https://issues.apache.org/jira/browse/SOLR-8184 Project: Solr Issue Type: Test Environment: Trunl Reporter: Susheel Kumar Priority: Minor Ticket to track negative tests for JDBC connection string SOLR-7986 -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7986) JDBC Driver for SQL Interface
[ https://issues.apache.org/jira/browse/SOLR-7986?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14903665#comment-14903665 ] Susheel Kumar commented on SOLR-7986: - Hi Joel, I added 3 bad-connection-string tests locally. Please let me know your opinion before I provide a patch. a) The first test below, "testConnectionStringWithMissingZKHost", throws SolrException rather than SQLException. Is that the right behaviour? Just wanted to confirm. b) Is it okay to add multiple public test methods, OR to have one test method calling various private test methods inside it? Any preference? I see the latter is more common. c) The test "testConnectionStringWithWrongCollection" goes through several retries before it fails (console messages below). Is that the right behavior? Thanks, Susheel

@Test
public void testConnectionStringWithMissingZKHost() throws Exception {
    // should throw SolrException as per current design
    exception.expect(SolrException.class);
    String zkHost = zkServer.getZkAddress();
    Properties props = new Properties();
    // bad connection string: missing zkHost
    Connection con = DriverManager.getConnection("jdbc:solr://" + "?collection=collection1", props);
}

@Test
public void testConnectionStringJumbled() throws Exception {
    // should throw SQLException
    exception.expect(SQLException.class);
    Properties props = new Properties();
    String zkHost = zkServer.getZkAddress();
    // bad connection string: scheme jumbled
    Connection con = DriverManager.getConnection("solr:jdbc://" + zkHost + "?collection=collection1", props);
}

@Test
public void testConnectionStringWithWrongCollection() throws Exception {
    // should throw SQLException
    exception.expect(SQLException.class);
    Properties props = new Properties();
    String zkHost = zkServer.getZkAddress();
    // bad connection string: wrong collection name
    Connection con = DriverManager.getConnection("jdbc:solr://" + zkHost + "?collection=mycollection", props);
    Statement stmt = con.createStatement();
    ResultSet rs = stmt.executeQuery("select id, a_i, a_s, a_f from mycollection order by a_i desc limit 2");
}

testConnectionStringWithWrongCollection console messages:
Sep 22, 2015 5:26:28 PM com.carrotsearch.randomizedtesting.ThreadLeakControl checkThreadLeaks
WARNING: Will linger awaiting termination of 6 leaked thread(s).
95116 WARN (TEST-JdbcTest.testConnectionStringWithWrongCollection-seed#[8ED3B3162E3AB02D]-SendThread(127.0.0.1:55137)) [n:127.0.0.1:55092_cb_i%2Fof c:collection1 s:shard1 r:core_node4 x:collection1] o.a.z.ClientCnxn Session 0x14ff6f24fcf0012 for server null, unexpected error, closing socket connection and attempting reconnect
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
    at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
(the same WARN / ConnectException reconnect sequence then repeats at 96786, 98838, and 100696 ms before the test fails)
[jira] [Commented] (SOLR-8002) Field aliases support for SQL queries
[ https://issues.apache.org/jira/browse/SOLR-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738038#comment-14738038 ] Susheel Kumar commented on SOLR-8002: - Agreed, Joel, this is tricky to start with. I'll continue with more tests anyway and will look into SOLR-7986 as well. > Field aliases support for SQL queries > - > > Key: SOLR-8002 > URL: https://issues.apache.org/jira/browse/SOLR-8002 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: Trunk >Reporter: Susheel Kumar > > Currently field aliases are not supported for SQL queries against the SQL > Handler. E.g. the SQL query below > select id,name as product_name from techproducts limit 20 > currently fails, as the data returned still contains "name" as the field/column > key rather than product_name
[jira] [Commented] (SOLR-8002) Field aliases support for SQL queries
[ https://issues.apache.org/jira/browse/SOLR-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14738005#comment-14738005 ] Susheel Kumar commented on SOLR-8002: - Thanks, Dennis for the explanation & I agree with the approach that either SQL statement or String Expression would be converted to Stream Expression object. > Field aliases support for SQL queries > - > > Key: SOLR-8002 > URL: https://issues.apache.org/jira/browse/SOLR-8002 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: Trunk >Reporter: Susheel Kumar > > Currently field aliases are not supported for SQL queries against SQL > Handler. E.g. below SQL query > select id,name as product_name from techproducts limit 20 > currently fails as data returned contains still "name" as the field/column > key than product_name -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8002) Field aliases support for SQL queries
[ https://issues.apache.org/jira/browse/SOLR-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14736174#comment-14736174 ] Susheel Kumar edited comment on SOLR-8002 at 9/9/15 5:04 AM: - Hi Joel, Davis, Just to clarify that utilizing the SelectStream in SQL api (SQLHandler.java) would require transforming the SQL expression into SOLR streaming expressions for SelectStream to work. So for e.g. SQL expression select id, field_i, str_s from collection1 where text='' order by field_i desc would be transformed to Solr Streaming expression search(collection1, q="text:", fl="id,field_i,str_s", sort="field_i desc") Please let me know your thoughts & if that is the correct understanding. Thanks, Susheel was (Author: susheel2...@gmail.com): Hi Joel, Davis, Just to clarify that utilizing the SelectStream in SQL api (SQLHandler.java) would require transforming the SQL expression into SOLR streaming expressions for SelectStream to work. So for e.g. SQL expression select id, field_i, str_s from collection1 where text='' order by field_i desc would be transformed to Solr Streaming expression search(collection1, q="text:", fl="id,field_i,str_s", sort="field_i desc") Please let me know your thoughts on this. Thanks, Susheel > Field aliases support for SQL queries > - > > Key: SOLR-8002 > URL: https://issues.apache.org/jira/browse/SOLR-8002 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: Trunk >Reporter: Susheel Kumar > > Currently field aliases are not supported for SQL queries against SQL > Handler. E.g. below SQL query > select id,name as product_name from techproducts limit 20 > currently fails as data returned contains still "name" as the field/column > key than product_name -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Comment Edited] (SOLR-8002) Field aliases support for SQL queries
[ https://issues.apache.org/jira/browse/SOLR-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14736174#comment-14736174 ] Susheel Kumar edited comment on SOLR-8002 at 9/9/15 5:03 AM: - Hi Joel, Davis, Just to clarify that utilizing the SelectStream in SQL api (SQLHandler.java) would require transforming the SQL expression into SOLR streaming expressions for SelectStream to work. So for e.g. SQL expression select id, field_i, str_s from collection1 where text='' order by field_i desc would be transformed to Solr Streaming expression search(collection1, q="text:", fl="id,field_i,str_s", sort="field_i desc") Please let me know your thoughts on this. Thanks, Susheel was (Author: susheel2...@gmail.com): Hi Joel, Davis, Just to clarify that utilizing the SelectStream in SQL api (SQLHandler.java) would require transforming the SQL expression into SOLR streaming expressions for SelectStream to work. So for e.g. SQL expression select id, field_i, str_s from collection1 where text='' order by field_i desc would be transformed to search(collection1, q="text:", fl="id,field_i,str_s", sort="field_i desc") Please let me know your thoughts on this. Thanks, Susheel > Field aliases support for SQL queries > - > > Key: SOLR-8002 > URL: https://issues.apache.org/jira/browse/SOLR-8002 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: Trunk >Reporter: Susheel Kumar > > Currently field aliases are not supported for SQL queries against SQL > Handler. E.g. below SQL query > select id,name as product_name from techproducts limit 20 > currently fails as data returned contains still "name" as the field/column > key than product_name -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-8002) Field aliases support for SQL queries
[ https://issues.apache.org/jira/browse/SOLR-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14736174#comment-14736174 ] Susheel Kumar commented on SOLR-8002: - Hi Joel, Davis, Just to clarify that utilizing the SelectStream in SQL api (SQLHandler.java) would require transforming the SQL expression into SOLR streaming expressions for SelectStream to work. So for e.g. SQL expression select id, field_i, str_s from collection1 where text='' order by field_i desc would be transformed to search(collection1, q="text:", fl="id,field_i,str_s", sort="field_i desc") Please let me know your thoughts on this. Thanks, Susheel > Field aliases support for SQL queries > - > > Key: SOLR-8002 > URL: https://issues.apache.org/jira/browse/SOLR-8002 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: Trunk >Reporter: Susheel Kumar > > Currently field aliases are not supported for SQL queries against SQL > Handler. E.g. below SQL query > select id,name as product_name from techproducts limit 20 > currently fails as data returned contains still "name" as the field/column > key than product_name -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
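The SQL-to-streaming-expression transformation described in the comment above can be sketched as simple string assembly. This is only an illustration of the mapping: buildSearchExpr is a hypothetical helper, not code from SQLHandler, and the query values are made up.

```java
// Illustrative sketch: rewriting a simple SELECT into a Streaming API
// "search" expression, as discussed for SQLHandler.
public class SqlToStream {
    // Builds: search(<collection>, q="<q>", fl="<f1,f2,...>", sort="<field> <dir>")
    public static String buildSearchExpr(String collection, String q,
                                         String[] fields, String sortField,
                                         String sortDir) {
        String fl = String.join(",", fields);
        return String.format("search(%s, q=\"%s\", fl=\"%s\", sort=\"%s %s\")",
                collection, q, fl, sortField, sortDir);
    }

    public static void main(String[] args) {
        // select id, field_i, str_s from collection1 where text='hello' order by field_i desc
        String expr = buildSearchExpr("collection1", "text:hello",
                new String[]{"id", "field_i", "str_s"}, "field_i", "desc");
        System.out.println(expr);
        // search(collection1, q="text:hello", fl="id,field_i,str_s", sort="field_i desc")
    }
}
```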
[jira] [Commented] (SOLR-8002) Field aliases support for SQL queries
[ https://issues.apache.org/jira/browse/SOLR-8002?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14728447#comment-14728447 ] Susheel Kumar commented on SOLR-8002: - Sure, Joel. Let me start looking into the patch SOLR-7669. > Field aliases support for SQL queries > - > > Key: SOLR-8002 > URL: https://issues.apache.org/jira/browse/SOLR-8002 > Project: Solr > Issue Type: New Feature > Components: search >Affects Versions: Trunk >Reporter: Susheel Kumar > > Currently field aliases are not supported for SQL queries against SQL > Handler. E.g. below SQL query > select id,name as product_name from techproducts limit 20 > currently fails as data returned contains still "name" as the field/column > key than product_name -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14726659#comment-14726659 ] Susheel Kumar commented on SOLR-7560: - Hi Joel, I have created a JIRA SOLR-8002 for field aliases. Also noticed the JIRA SOLR-7986 you have created and design thoughts on creating Connection, Statement & ResultSet classes. Thanks, Susheel > Parallel SQL Support > > > Key: SOLR-7560 > URL: https://issues.apache.org/jira/browse/SOLR-7560 > Project: Solr > Issue Type: New Feature > Components: clients - java, search >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, > SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch > > > This ticket provides support for executing *Parallel SQL* queries across > SolrCloud collections. The SQL engine will be built on top of the Streaming > API (SOLR-7082), which provides support for *parallel relational algebra* and > *real-time map-reduce*. > Basic design: > 1) A new SQLHandler will be added to process SQL requests. The SQL statements > will be compiled to live Streaming API objects for parallel execution across > SolrCloud worker nodes. > 2) SolrCloud collections will be abstracted as *Relational Tables*. > 3) The Presto SQL parser will be used to parse the SQL statements. > 4) A JDBC thin client will be added as a Solrj client. > This ticket will focus on putting the framework in place and providing basic > SELECT support and GROUP BY aggregate support. > Future releases will build on this framework to provide additional SQL > features. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Created] (SOLR-8002) Field aliases support for SQL queries
Susheel Kumar created SOLR-8002: --- Summary: Field aliases support for SQL queries Key: SOLR-8002 URL: https://issues.apache.org/jira/browse/SOLR-8002 Project: Solr Issue Type: New Feature Components: search Affects Versions: Trunk Reporter: Susheel Kumar Currently field aliases are not supported for SQL queries against the SQL Handler. E.g. the SQL query below select id,name as product_name from techproducts limit 20 currently fails, as the data returned still contains "name" as the field/column key rather than product_name
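Conceptually, the alias support requested here amounts to renaming keys in each returned tuple according to the SELECT list. A minimal sketch of that post-processing step (applyAliases is a hypothetical helper, not actual SQLHandler code):

```java
// Hypothetical sketch: remap result-row keys per SELECT aliases,
// e.g. "name" -> "product_name" for: select id, name as product_name ...
import java.util.LinkedHashMap;
import java.util.Map;

public class AliasRemap {
    // aliases maps original field name -> alias; fields without an
    // alias keep their original key. Order is preserved.
    public static Map<String, Object> applyAliases(Map<String, Object> row,
                                                   Map<String, String> aliases) {
        Map<String, Object> out = new LinkedHashMap<>();
        for (Map.Entry<String, Object> e : row.entrySet()) {
            out.put(aliases.getOrDefault(e.getKey(), e.getKey()), e.getValue());
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new LinkedHashMap<>();
        row.put("id", "1");
        row.put("name", "Solr");
        // row comes back keyed as product_name instead of name
        System.out.println(applyAliases(row, Map.of("name", "product_name")));
    }
}
```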
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14712381#comment-14712381 ] Susheel Kumar commented on SOLR-7560: - Thanks, Joel. I'll try to write tests around the supported features. Do you have a list of items/tickets for future releases, or can I help to maintain one? Also, I will try to understand how Presto is being plugged into Solr, but if you have any pointers please let me know. > Parallel SQL Support > > > Key: SOLR-7560 > URL: https://issues.apache.org/jira/browse/SOLR-7560 > Project: Solr > Issue Type: New Feature > Components: clients - java, search >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, > SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch > > > This ticket provides support for executing *Parallel SQL* queries across > SolrCloud collections. The SQL engine will be built on top of the Streaming > API (SOLR-7082), which provides support for *parallel relational algebra* and > *real-time map-reduce*. > Basic design: > 1) A new SQLHandler will be added to process SQL requests. The SQL statements > will be compiled to live Streaming API objects for parallel execution across > SolrCloud worker nodes. > 2) SolrCloud collections will be abstracted as *Relational Tables*. > 3) The Presto SQL parser will be used to parse the SQL statements. > 4) A JDBC thin client will be added as a Solrj client. > This ticket will focus on putting the framework in place and providing basic > SELECT support and GROUP BY aggregate support. > Future releases will build on this framework to provide additional SQL > features.
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14708682#comment-14708682 ] Susheel Kumar commented on SOLR-7560: - Hi Joel, I started with two basic tests on my local box a) add field alias e.g. select id,name as product_name from techproducts limit 20 which currently fails as data returned contains still "name" as the field/column key than product_name b) I wanted to get additional field returned from SQL e.g select id,name,manu,mul(price,weight) from techproducts limit 20 which currently fails with error "Aggregate functions only supported with group by queries." while actually I just want to have additional calculated field based on some function/formula for every document. I checked SQLHandler.java which currently throws this out due to presence of parenthesis without any group by/aggregate function. Please let me know your suggestion on this. Thanks, Susheel > Parallel SQL Support > > > Key: SOLR-7560 > URL: https://issues.apache.org/jira/browse/SOLR-7560 > Project: Solr > Issue Type: New Feature > Components: clients - java, search >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, > SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch > > > This ticket provides support for executing *Parallel SQL* queries across > SolrCloud collections. The SQL engine will be built on top of the Streaming > API (SOLR-7082), which provides support for *parallel relational algebra* and > *real-time map-reduce*. > Basic design: > 1) A new SQLHandler will be added to process SQL requests. The SQL statements > will be compiled to live Streaming API objects for parallel execution across > SolrCloud worker nodes. > 2) SolrCloud collections will be abstracted as *Relational Tables*. > 3) The Presto SQL parser will be used to parse the SQL statements. > 4) A JDBC thin client will be added as a Solrj client. 
> This ticket will focus on putting the framework in place and providing basic > SELECT support and GROUP BY aggregate support. > Future releases will build on this framework to provide additional SQL > features. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14705552#comment-14705552 ] Susheel Kumar commented on SOLR-7560: - Thanks, Eric for pointing server dist target. Now I am able to run basic SQL. Will start looking into deeper. > Parallel SQL Support > > > Key: SOLR-7560 > URL: https://issues.apache.org/jira/browse/SOLR-7560 > Project: Solr > Issue Type: New Feature > Components: clients - java, search >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, > SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch > > > This ticket provides support for executing *Parallel SQL* queries across > SolrCloud collections. The SQL engine will be built on top of the Streaming > API (SOLR-7082), which provides support for *parallel relational algebra* and > *real-time map-reduce*. > Basic design: > 1) A new SQLHandler will be added to process SQL requests. The SQL statements > will be compiled to live Streaming API objects for parallel execution across > SolrCloud worker nodes. > 2) SolrCloud collections will be abstracted as *Relational Tables*. > 3) The Presto SQL parser will be used to parse the SQL statements. > 4) A JDBC thin client will be added as a Solrj client. > This ticket will focus on putting the framework in place and providing basic > SELECT support and GROUP BY aggregate support. > Future releases will build on this framework to provide additional SQL > features. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14704223#comment-14704223 ] Susheel Kumar commented on SOLR-7560: - Thanks, Joel. Found the SQLHandler config and I did checkout the trunk and compiled using 'ant compile' but getting error when starting solr. Will look into classpath which may be causing the issue. ./bin/solr status Found 1 Solr nodes: Solr process 11789 running on port 8983 Error: Could not find or load main class org.apache.solr.util.SolrCLI. > Parallel SQL Support > > > Key: SOLR-7560 > URL: https://issues.apache.org/jira/browse/SOLR-7560 > Project: Solr > Issue Type: New Feature > Components: clients - java, search >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, > SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch > > > This ticket provides support for executing *Parallel SQL* queries across > SolrCloud collections. The SQL engine will be built on top of the Streaming > API (SOLR-7082), which provides support for *parallel relational algebra* and > *real-time map-reduce*. > Basic design: > 1) A new SQLHandler will be added to process SQL requests. The SQL statements > will be compiled to live Streaming API objects for parallel execution across > SolrCloud worker nodes. > 2) SolrCloud collections will be abstracted as *Relational Tables*. > 3) The Presto SQL parser will be used to parse the SQL statements. > 4) A JDBC thin client will be added as a Solrj client. > This ticket will focus on putting the framework in place and providing basic > SELECT support and GROUP BY aggregate support. > Future releases will build on this framework to provide additional SQL > features. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org
[jira] [Commented] (SOLR-7560) Parallel SQL Support
[ https://issues.apache.org/jira/browse/SOLR-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14703667#comment-14703667 ] Susheel Kumar commented on SOLR-7560: - Hi, I can help to test the Parallel SQL Support feature which is very useful for analytical purpose. Can I get some info on setting up SQLHandler / some instructions to get started. Thanks, Susheel > Parallel SQL Support > > > Key: SOLR-7560 > URL: https://issues.apache.org/jira/browse/SOLR-7560 > Project: Solr > Issue Type: New Feature > Components: clients - java, search >Reporter: Joel Bernstein > Fix For: Trunk > > Attachments: SOLR-7560.calcite.patch, SOLR-7560.patch, > SOLR-7560.patch, SOLR-7560.patch, SOLR-7560.patch > > > This ticket provides support for executing *Parallel SQL* queries across > SolrCloud collections. The SQL engine will be built on top of the Streaming > API (SOLR-7082), which provides support for *parallel relational algebra* and > *real-time map-reduce*. > Basic design: > 1) A new SQLHandler will be added to process SQL requests. The SQL statements > will be compiled to live Streaming API objects for parallel execution across > SolrCloud worker nodes. > 2) SolrCloud collections will be abstracted as *Relational Tables*. > 3) The Presto SQL parser will be used to parse the SQL statements. > 4) A JDBC thin client will be added as a Solrj client. > This ticket will focus on putting the framework in place and providing basic > SELECT support and GROUP BY aggregate support. > Future releases will build on this framework to provide additional SQL > features. -- This message was sent by Atlassian JIRA (v6.3.4#6332) - To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For additional commands, e-mail: dev-h...@lucene.apache.org