[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-20 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: (was: SOLR-11724.patch)

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Major
> Fix For: 7.3.1, 7.4, 8.0
>
> Attachments: SOLR-11724.patch, SOLR-11724.patch, SOLR-11724.patch, 
> SOLR-11724.patch
>
>
> Please find the discussion at:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into the Source, stop indexing,
> and then start CDCR, bootstrapping only copies the index to the leader node of
> each shard of the collection; the followers never receive the documents/index
> until at least one more document is indexed on the Source, which propagates to
> the Target and makes the Target collection trigger index replication to the
> followers.
> This behavior needs to be addressed properly, either at the Target collection
> or during bootstrapping.
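The failure mode above suggests one possible fix: once bootstrap finishes on the Target leaders, explicitly ask each follower replica to recover from its leader instead of waiting for a new update to arrive. The sketch below is a simplified, self-contained illustration of that idea only; in real SolrJ code the replica information would come from `ClusterState`/`DocCollection` and the request would be a CoreAdmin REQUESTRECOVERY call sent via a client bound to the replica's own node. All node and core names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: given a shard's replicas, build the per-node CoreAdmin
// REQUESTRECOVERY URLs for every follower. The leader is skipped because it
// already holds the bootstrapped index. This models, with plain data
// structures, the lookup that SolrJ's ClusterState would normally provide.
public class RecoveryUrls {
  static final class Replica {
    final String coreName;
    final String baseUrl;
    final boolean leader;
    Replica(String coreName, String baseUrl, boolean leader) {
      this.coreName = coreName;
      this.baseUrl = baseUrl;
      this.leader = leader;
    }
  }

  // One REQUESTRECOVERY URL per follower replica in the shard.
  static List<String> followerRecoveryUrls(List<Replica> shard) {
    List<String> urls = new ArrayList<>();
    for (Replica r : shard) {
      if (r.leader) continue; // leader already has the index
      urls.add(r.baseUrl + "/admin/cores?action=REQUESTRECOVERY&core=" + r.coreName);
    }
    return urls;
  }

  public static void main(String[] args) {
    List<Replica> shard1 = List.of(
        new Replica("target_shard1_replica_n1", "http://node1:8983/solr", true),
        new Replica("target_shard1_replica_n2", "http://node2:8983/solr", false));
    System.out.println(followerRecoveryUrls(shard1));
  }
}
```

Each resulting URL must be requested against the follower's own node, not the collection endpoint, or the request would be re-routed to the leader.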



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-20 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: (was: SOLR-11724.patch)




[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-20 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: SOLR-11724.patch




[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-20 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: SOLR-11724.patch




[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-20 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: SOLR-11724.patch




[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-20 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843890#comment-16843890
 ] 

Amrit Sarkar commented on SOLR-11724:
-

Attached a test which runs smoothly with the proposed code.




[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-19 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16843554#comment-16843554
 ] 

Amrit Sarkar commented on SOLR-11724:
-

Thanks [~TimSolr] and the numerous others on the mailing list for reporting.

The correct way to solve this issue is to identify the correct base URL of the 
Solr node hosting the core we need to send REQUESTRECOVERY to, and to create a 
local HttpSolrClient instead of using the CloudSolrClient from 
CdcrReplicatorState (which would forward the request to the shard leader 
instead of the intended Solr node).

I baked a small patch a few weeks back; I still need to work on the tests to 
see why they are currently failing.
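The core of that fix is a lookup: find the base URL of the node that actually hosts the given core, so an HttpSolrClient can be built against that node directly. The stand-alone sketch below models that lookup with a plain map; in real code the mapping comes from ZkStateReader/ClusterState, and the core and node names used here are hypothetical.

```java
import java.util.Map;

// Sketch of the lookup described above: resolve the base URL of the Solr
// node hosting a given core, so REQUESTRECOVERY can be sent via an
// HttpSolrClient bound to that node rather than a CloudSolrClient (which
// would route the request to the shard leader).
public class CoreNodeResolver {
  static String resolveBaseUrl(Map<String, String> coreToBaseUrl, String core) {
    String baseUrl = coreToBaseUrl.get(core);
    if (baseUrl == null) {
      throw new IllegalArgumentException("unknown core: " + core);
    }
    return baseUrl;
  }

  public static void main(String[] args) {
    Map<String, String> state = Map.of(
        "target_shard1_replica_n2", "http://node2:8983/solr");
    // The result would feed something like:
    //   new HttpSolrClient.Builder(baseUrl).build()
    System.out.println(resolveBaseUrl(state, "target_shard1_replica_n2"));
  }
}
```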




[jira] [Updated] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2019-05-19 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11724:

Attachment: SOLR-11724.patch




[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16839160#comment-16839160
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Thanks [~janhoy],

bq. So it would be better to try to force CDCR into using a "solr thread pool" 
for its communication in such a way that the existing code in the path will 
classify it as a request that needs the header.
Probably, but I have not been able to figure out a way so far. Since the CDCR 
internal APIs are called at multiple levels, I am not sure where such a change 
would and would not fit; it should be straightforward once identified. For 
now, I moved the Cdcr check into the auth utils to clean it up.

I also tried to write meaningful tests, but some dangling references prevent 
the tests from completing successfully: some resources are being leaked by 
MiniSolrCloudCluster.
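What the quoted suggestion buys can be illustrated in isolation: Solr's MDC-aware executors capture per-thread context when a task is submitted and restore it inside the worker thread, which is what lets downstream code classify a request as internal (and attach the PKI header). The stand-alone sketch below shows that pattern with a plain ThreadLocal; it is a simplified illustration, not Solr's actual ExecutorUtil implementation, and the context value is hypothetical.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Pattern sketch: wrap a task so the submitting thread's context travels
// into the pool's worker thread, then is restored afterwards. This mirrors
// what an MDC-aware pool does for request classification.
public class ContextPropagatingPool {
  static final ThreadLocal<String> REQUEST_CONTEXT = new ThreadLocal<>();

  // Capture the context on the submitting thread; install it in the worker.
  static Runnable withContext(Runnable task) {
    String captured = REQUEST_CONTEXT.get();
    return () -> {
      String previous = REQUEST_CONTEXT.get();
      REQUEST_CONTEXT.set(captured);
      try {
        task.run();
      } finally {
        REQUEST_CONTEXT.set(previous); // don't leak context into pooled threads
      }
    };
  }

  public static void main(String[] args) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    REQUEST_CONTEXT.set("cdcr-internal"); // set by the caller
    StringBuilder seen = new StringBuilder();
    Future<?> f = pool.submit(withContext(() -> seen.append(REQUEST_CONTEXT.get())));
    f.get();
    pool.shutdown();
    System.out.println(seen); // the worker thread saw the submitter's context
  }
}
```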

> CDCR unauthorized to replicate to a target collection that is update 
> protected in security.json
> ---
>
> Key: SOLR-11959
> URL: https://issues.apache.org/jira/browse/SOLR-11959
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication, CDCR
>Affects Versions: 7.2
>Reporter: Donny Andrews
>Priority: Major
> Attachments: SOLR-11959.patch, SOLR-11959.patch, SOLR-11959.patch
>
>
> Steps to reproduce: 
>  # Create a source and a target collection in their respective clusters. 
>  # Update security.json to require a non-admin role to read and write. 
>  # Index to source collection 
> Expected: 
> The target collection should receive the update
> Actual:
> {code:java}
> org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
> from server at http://redacted/solr/redacted: Expected mime type 
> application/octet-stream but got text/html. 
>  
>  
>  Error 401 Unauthorized request, Response code: 401
>  
>  HTTP ERROR 401
>  Problem accessing /solr/redacted/update. Reason:
>   Unauthorized request, Response code: 401
>  
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:607)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
>  at 
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
>  at 
> org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1103)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:883)
>  at 
> org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:816)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
>  at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
>  at 
> org.apache.solr.handler.CdcrReplicator.sendRequest(CdcrReplicator.java:140)
>  at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:104)
>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: SOLR-11959.patch




[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836758#comment-16836758
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Thank you, Jan, for the guidance. I have cooked up a patch around the design 
you stated.
I could have done this more cleanly, and I intend to refactor accordingly. In 
particular, I don't like mentioning Cdcr in the *{{PKIAuthPlugin}}* code; I am 
looking for a better way to do that.

Looking forward to feedback.






[jira] [Updated] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Attachment: SOLR-11959.patch




[jira] [Issue Comment Deleted] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11959:

Comment: was deleted

(was: Hi Jan, I am looking at the Auth code and have finally become somewhat 
familiar with it.
I am able to build the logic whereby CDCR requests (LASTPROCESSEDVERSION, 
CHECKPOINTS, etc.) made within the same cluster get validated by PKIAuthPlugin 
even when they are not made from the local thread pool.
However, *I am still not able to locate where exactly PKIAuthPlugin whitelists 
nodes* (i.e. the live nodes listed under its own ZooKeeper); I have been 
debugging for a while but cannot find the relevant code.
Any help is appreciated.)




[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-05-09 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16836248#comment-16836248
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Hi Jan, I am looking at the Auth code and have finally become somewhat 
familiar with it.
I am able to build the logic whereby CDCR requests (LASTPROCESSEDVERSION, 
CHECKPOINTS, etc.) made within the same cluster get validated by PKIAuthPlugin 
even when they are not made from the local thread pool.
However, *I am still not able to locate where exactly PKIAuthPlugin whitelists 
nodes* (i.e. the live nodes listed under its own ZooKeeper); I have been 
debugging for a while but cannot find the relevant code.
Any help is appreciated.

>  at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>  at java.lang.Thread.run(Thread.java:748){code}






[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-29 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16829715#comment-16829715
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Thanks, Jan. I see the direction.

Can we utilize JWTAuthPlugin (JWT, basically) on its own to secure 
communication across clusters? (You wrote it, and I have been reading it to 
understand how auth works overall in Solr.) Everything needed outside of it 
would just be network access (open ports) to the respective ZooKeeper and Solr 
ports, which is already part of the instructions.

I am not yet done studying the Auth module of Solr, so I may suggest something 
that doesn't make sense.







[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826256#comment-16826256
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Thanks [~janhoy], I see.
I will read up on the PKI concept in detail then; it looks like the most viable 
solution for what we are trying to achieve here. I will try to bake a patch on 
the same.







[jira] [Commented] (SOLR-11959) CDCR unauthorized to replicate to a target collection that is update protected in security.json

2019-04-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11959?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16826114#comment-16826114
 ] 

Amrit Sarkar commented on SOLR-11959:
-

Since SOLR-8389 didn't get enough traction, I would like to complete this Jira 
with the existing design.

{{CdcrReplicator}} at the source internally creates a SolrClient for the target 
and issues an UpdateRequest. We can pass the Basic Auth details in the classic 
manner, as part of the request header.
For this to work --
1. We can put the Basic Auth username and password for the target at the 
source, but this can introduce more security issues, since a plain-text 
password would be kept in solrconfig.xml, which is exposed in multiple places, 
unlike security.json.
2. Read security.json of the target collection at the source (since the source 
cluster has full access to all the files at the target), extract the stored 
credential, and pass it in the UpdateRequest. At the solrconfig.xml level at 
the source we only need to name the user whose password will be fetched. This 
is a better security posture than the above, as reading the security doc of a 
cluster is restricted to one module, CDCR.

Looking forward to feedback on this.
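For reference, option 1 boils down to a standard HTTP Basic {{Authorization}} header, which SolrJ exposes as {{SolrRequest#setBasicAuthCredentials(user, pass)}}. A minimal plain-JDK sketch of what gets sent on the wire, with made-up credentials:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Sketch of what "passing Basic Auth in the request header" amounts to:
// a standard RFC 7617 Authorization header, "Basic " + base64(user:password).
public class BasicAuthHeaderSketch {

    static String basicAuthHeader(String user, String password) {
        String token = Base64.getEncoder().encodeToString(
                (user + ":" + password).getBytes(StandardCharsets.UTF_8));
        return "Basic " + token;
    }

    public static void main(String[] args) {
        // Hypothetical credentials, for illustration only.
        System.out.println(basicAuthHeader("cdcr-user", "cdcr-pass"));
    }
}
```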







[jira] [Created] (SOLR-13401) Metrics showing wrong 'spins' property after override

2019-04-15 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13401:
---

 Summary: Metrics showing wrong 'spins' property after override
 Key: SOLR-13401
 URL: https://issues.apache.org/jira/browse/SOLR-13401
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Affects Versions: 8.0
Reporter: Amrit Sarkar


Even after setting the required parameter to "false" according to the 
documentation --

https://lucene.apache.org/solr/guide/7_4/taking-solr-to-production.html#dynamic-defaults-for-concurrentmergescheduler

"Alternatively, the boolean system property {{lucene.cms.override_spins}} can be 
set in the SOLR_OPTS variable in the include file to override the auto-detected 
value. Similarly, the system property {{lucene.cms.override_core_count}} can be 
set to the number of CPU cores to override the auto-detected processor count."
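For reference, applying those overrides in the include file ({{solr.in.sh}} on *nix) would look like the following sketch, using the property names from the ref guide page above:

```shell
# solr.in.sh -- override Lucene's auto-detection: report non-spinning (SSD)
# storage and 8 CPU cores.
SOLR_OPTS="$SOLR_OPTS -Dlucene.cms.override_spins=false"
SOLR_OPTS="$SOLR_OPTS -Dlucene.cms.override_core_count=8"
```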
Container.fs.spins and Core.fs.spins are still calculated as "true":
{code}
{
  "responseHeader":{
"status":0,
"QTime":90},
  "metrics":{
"solr.jetty":{
  "org.eclipse.jetty.server.handler.DefaultHandler.1xx-responses":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.2xx-responses":{
"count":749,
"meanRate":0.26577854703424414,
"1minRate":0.20563648170407736,
"5minRate":0.2340577654572216,
"15minRate":0.24729247425529033},
  "org.eclipse.jetty.server.handler.DefaultHandler.3xx-responses":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.4xx-responses":{
"count":1,
"meanRate":3.548445163413177E-4,
"1minRate":6.64685036699106E-18,
"5minRate":2.7733971887474625E-6,
"15minRate":1.0450429716535352E-4},
  "org.eclipse.jetty.server.handler.DefaultHandler.5xx-responses":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.active-dispatches":0,
  "org.eclipse.jetty.server.handler.DefaultHandler.active-requests":0,
  "org.eclipse.jetty.server.handler.DefaultHandler.active-suspended":0,
  "org.eclipse.jetty.server.handler.DefaultHandler.async-dispatches":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.async-timeouts":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.connect-requests":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0,
"min_ms":0.0,
"max_ms":0.0,
"mean_ms":0.0,
"median_ms":0.0,
"stddev_ms":0.0,
"p75_ms":0.0,
"p95_ms":0.0,
"p99_ms":0.0,
"p999_ms":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.delete-requests":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0,
"min_ms":0.0,
"max_ms":0.0,
"mean_ms":0.0,
"median_ms":0.0,
"stddev_ms":0.0,
"p75_ms":0.0,
"p95_ms":0.0,
"p99_ms":0.0,
"p999_ms":0.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.dispatches":{
"count":750,
"meanRate":0.26613247165752213,
"1minRate":0.20563648170407736,
"5minRate":0.23406053885441036,
"15minRate":0.24739697855245568,
"min_ms":1.0,
"max_ms":8231.0,
"mean_ms":1.9138611068896936,
"median_ms":1.0,
"stddev_ms":4.227101567072412,
"p75_ms":2.0,
"p95_ms":2.0,
"p99_ms":34.0,
"p999_ms":48.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.get-requests":{
"count":427,
"meanRate":0.15151854520088134,
"1minRate":0.12770720959194207,
"5minRate":0.13538027663316224,
"15minRate":0.13928403821024407,
"min_ms":1.0,
"max_ms":1759.0,
"mean_ms":2.059088748457511,
"median_ms":1.0,
"stddev_ms":5.404623515051114,
"p75_ms":1.0,
"p95_ms":2.0,
"p99_ms":34.0,
"p999_ms":49.0},
  "org.eclipse.jetty.server.handler.DefaultHandler.head-requests":{
"count":0,
"meanRate":0.0,
"1minRate":0.0,
"5minRate":0.0,
"15minRate":0.0,
"min_ms":0.0,
"max_ms":0.0,
"mean_ms":0.0,
"median_ms":0.0,
"stddev_ms":0.0,
"p75_ms":0.0,
"p95_ms":0.0,
"p99_ms":0.0,
"p99

[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2019-04-05 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16811247#comment-16811247
 ] 

Amrit Sarkar commented on SOLR-9272:


[~janhoy], me too. I uploaded a fresh patch which can be applied against 
{{master}}. I am checking out the {{urlScheme}} param proposed by Steve.

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>Priority: Major
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already 
> running. We can optionally accept the {{-p}} parameter instead, and with that 
> use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's 
> easier to remember solr port than zk string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}






[jira] [Commented] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-03-18 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16795650#comment-16795650
 ] 

Amrit Sarkar commented on SOLR-13329:
-

Hi Noble, thank you for the response; I appreciate it.

This particular rule is applied as three separate rules, one per node, and 1 
replica of each shard is added on each of "solr-node-1" through "solr-node-3".

How can we make it mean "either of the three"? Is there a straightforward way 
of doing it?
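One possible workaround, assuming the {{sysprop}} attribute from the autoscaling guide linked in the description behaves as documented: start each node of the set with a marker system property (a hypothetical {{-Dnodegroup=setA}}) and write one rule against the group instead of three per-node rules:

{code}
{"replica": "1", "shard": "#EACH", "sysprop.nodegroup": "setA"}
{code}

If the attribute matches the tagged nodes collectively, this places exactly 1 replica of each shard somewhere within the set, i.e. the "either of the three" semantics asked for above.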

> Placing exact number of replicas on a set of solr nodes, instead of each solr 
> node.
> ---
>
> Key: SOLR-13329
> URL: https://issues.apache.org/jira/browse/SOLR-13329
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: master (9.0)
>Reporter: Amrit Sarkar
>Priority: Major
>
> Let's say we have a requirement where we would like to place:
> {code}
> exact X replica on a set of solr nodes comprises of solr-node-1, solr-node-2, 
> ... solr-node-N.
> {code}
> e.g. exact 1 replica on either of the respective 3 solr nodes, solr-node-1, 
> solr-node-2, solr-node-3, and rest of the replicas can be placed on 
> corresponding solr nodes.
> Right now we don't have a straightforward manner of doing the same. 
> Autoscaling cluster policy also doesn't support such behavior, but instead 
> takes an array of solr node names and treat them as separate rules as per 
> https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.






[jira] [Updated] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-03-18 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13329:

Description: 
Let's say we have a requirement where we would like to place:
{code}
exactly X replicas on a set of solr nodes comprising solr-node-1, solr-node-2, 
... solr-node-N.
{code}
e.g. exactly 1 replica on any one of the 3 solr nodes solr-node-1, 
solr-node-2, solr-node-3, with the rest of the replicas placed on the 
corresponding solr nodes.

Right now we don't have a straightforward way of doing this. The autoscaling 
cluster policy also doesn't support such behavior; instead it takes an array 
of solr node names and treats them as separate rules, as per 
https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.


  was:
Let's say we have a requirement where we would like to place:
{code}
exact X replica on a set of solr nodes comprises of solr-node-1, solr-node-2, 
... solr-node-N.
{code}
e.g. exact 1 replica on first 3 solr nodes, solr-node-1, solr-node-2, 
solr-node-3, and rest of the replicas can be placed on corresponding solr nodes.

Right now we don't have a straightforward manner of doing the same. Autoscaling 
cluster policy also doesn't support such behavior, but instead takes an array 
of solr node names and treat them as separate rules as per 
https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.









[jira] [Commented] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-03-18 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13329?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16794862#comment-16794862
 ] 

Amrit Sarkar commented on SOLR-13329:
-

Hi [~noble.paul], wondering if you have thoughts on this on how we can achieve 
this. Thanks in advance.







[jira] [Created] (SOLR-13329) Placing exact number of replicas on a set of solr nodes, instead of each solr node.

2019-03-18 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13329:
---

 Summary: Placing exact number of replicas on a set of solr nodes, 
instead of each solr node.
 Key: SOLR-13329
 URL: https://issues.apache.org/jira/browse/SOLR-13329
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Affects Versions: master (9.0)
Reporter: Amrit Sarkar


Let's say we have a requirement where we would like to place:
{code}
exact X replica on a set of solr nodes comprises of solr-node-1, solr-node-2, 
... solr-node-N.
{code}
e.g. exact 1 replica on first 3 solr nodes, solr-node-1, solr-node-2, 
solr-node-3, and rest of the replicas can be placed on corresponding solr nodes.

Right now we don't have a straightforward way of doing this. The autoscaling 
cluster policy also doesn't support such behavior; instead it takes an array 
of solr node names and treats them as separate rules, as per 
https://lucene.apache.org/solr/guide/7_7/solrcloud-autoscaling-policy-preferences.html#sysprop-attribute.







[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-03-11 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16789280#comment-16789280
 ] 

Amrit Sarkar commented on SOLR-11126:
-

[~janhoy] yes, I believe so; the patch is committed.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: 8.0
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-03-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16786298#comment-16786298
 ] 

Amrit Sarkar commented on SOLR-13166:
-

Hi Erick,

> It looks like we can override the checks with the "force" option, in which 
> case things are fine. Is that true? I like the idea that in that expert case 
> they need to do "other stuff" before making the change, so this is great.
That is true.

> Make sure to test the Admin UI's Schema Browser functionality too. That it 
> presents a good error message, and perhaps pops up a dialogue "Are you sure 
> you want to add more than 1000 fields to the schema? YES / NO", whereupon 
> clicking YES will add the force=true ??
Yes, I was thinking of doing something like that: a popup with a message in the 
Solr Admin UI, made generic for every endpoint. I will see what I can do.
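The "check unless force" pattern being discussed can be sketched as follows. The names ({{SchemaGuard}}, {{MAX_FIELDS}}) are illustrative, not from the attached patch:

```java
// Hypothetical sketch of the "smart check + force override" pattern: a change
// that trips a sanity limit is rejected with a warning, unless the expert
// escape hatch force=true is supplied.
public class SchemaGuard {

    static final int MAX_FIELDS = 1000; // illustrative limit, not Solr's

    /** Returns null if the change is allowed, otherwise a warning message. */
    static String checkAddFields(int currentFieldCount, int fieldsToAdd, boolean force) {
        if (force) {
            return null; // expert override: skip all checks
        }
        int total = currentFieldCount + fieldsToAdd;
        if (total > MAX_FIELDS) {
            return "Schema would grow to " + total + " fields (limit " + MAX_FIELDS
                    + "); pass force=true to override.";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(checkAddFields(990, 20, false)); // warning message
        System.out.println(checkAddFields(990, 20, true));  // null (allowed)
    }
}
```

The Admin UI dialogue described above would simply re-issue the request with {{force=true}} when the user clicks YES.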

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13166.patch, SOLR-13166.patch, SOLR-13166.patch
>
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.






[jira] [Commented] (SOLR-13226) Add note for Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16761974#comment-16761974
 ] 

Amrit Sarkar commented on SOLR-13226:
-

My understanding of non-strict rules is "follow the rule until there is no 
other way". That is not what happens here: the non-strict rules are simply 
ignored and not taken into consideration at all.

bq. Setting the rules as strict may mitigate the problem, but why is Solr doing 
that?
I believe that is how the trigger is designed, based on the source code and 
tests I have read: it performs the ADDREPLICA operation on any available 
node/shard/collection until a violation occurs. [~shalinmangar] may have 
something to add on this.

> Add note for Node Added Trigger documentation in Autoscaling
> 
>
> Key: SOLR-13226
> URL: https://issues.apache.org/jira/browse/SOLR-13226
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13226.patch, Screen Shot 2019-02-05 at 3.55.31 
> AM.png, Screen Shot 2019-02-05 at 4.02.00 AM.png
>
>
> The Node Added Trigger does not abide by soft rules ({{"strict": false}}), 
> which results in abnormal cluster operational behavior.
> Let's say we wish to do the following:
> 1. No more than 10 cores reside on a single node.
> 2. Cores and replicas are distributed equally across nodes.
> If we go by the following policy:
> No more than one replica of each shard on a node (not a strict rule):
> {code}
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
> {code}
> Distribute replicas equally across nodes (not a strict rule):
> {code}
>   {"replica": "#EQUAL", "node": "#ANY", "strict": false},
> {code}
> No more than 10 cores allowed on a single node (a strict rule):
> {code}
>   {"cores": "<10", "node": "#ANY"}
> {code}
> the cluster state ends up as shown in Screenshot 1.
> Only the strict rule is followed, and multiple replicas are added to a single 
> Solr node because the other rules are not strict – _{"replica":"<2", "shard": 
> "#EACH", "node": "#ANY", "strict": false}_
> In contrast, the following policy with all rules strict generates normal 
> operational behavior when adding a replica to each shard of collection 
> 'wiki' (Screenshot 2):
> {code}
> [
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
>   {"replica": "#EQUAL", "node": "#ANY"},
>   {"cores": "<10", "node": "#ANY"}
> ]
> {code}
> This behavior should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13226) Add note for Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13226:

Attachment: SOLR-13226.patch

> Add note for Node Added Trigger documentation in Autoscaling
> 
>
> Key: SOLR-13226
> URL: https://issues.apache.org/jira/browse/SOLR-13226
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13226.patch, Screen Shot 2019-02-05 at 3.55.31 
> AM.png, Screen Shot 2019-02-05 at 4.02.00 AM.png
>
>
> Node Added Trigger doesn't abide by SOFT rules [strict: false] and results in 
> abnormal cluster operational behavior.
> Let's say; we wish to do the following:
> 1. Not more than 10 cores reside on Single node.
> 2. Wish to distribute the cores, replicas equally to each Node.
> If we go by the following policy:
> not more than one replica for unique shard on a node. not a strict rule.
> {code}
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
> {code}
> distribute the replicas equally across the nodes, not a strict rule.
> {code}
>   {"replica": "#EQUAL", "node": "#ANY", "strict": false},
> {code}
> not more than 10 cores allowed on a single node, strict rule.
> {code}
>   {"cores": "<10", "node": "#ANY"}
> {code}
> cluster state ends up like:
> Screenshot -1
> Only the strict rule is followed and multiple replicas are added to single 
> Solr node, as rules are not strict – _{"replica":"<2", "shard": "#EACH", 
> "node": "#ANY", "strict": false}_
> While the following with all strict rule generate normal operational 
> behavior, add a replica to each shard of collection 'wiki' :
> {code}
> [
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
>   {"replica": "#EQUAL", "node": "#ANY"},
>   {"cores": "<10", "node": "#ANY"}
>   ]
> {code}
> Screenshot -2
> This behavior should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13226) Add note for Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16761700#comment-16761700
 ] 

Amrit Sarkar commented on SOLR-13226:
-

Attached a patch with a single-line note on the above-stated behavior.

> Add note for Node Added Trigger documentation in Autoscaling
> 
>
> Key: SOLR-13226
> URL: https://issues.apache.org/jira/browse/SOLR-13226
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13226.patch, Screen Shot 2019-02-05 at 3.55.31 
> AM.png, Screen Shot 2019-02-05 at 4.02.00 AM.png
>
>
> Node Added Trigger doesn't abide by SOFT rules [strict: false] and results in 
> abnormal cluster operational behavior.
> Let's say; we wish to do the following:
> 1. Not more than 10 cores reside on Single node.
> 2. Wish to distribute the cores, replicas equally to each Node.
> If we go by the following policy:
> not more than one replica for unique shard on a node. not a strict rule.
> {code}
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
> {code}
> distribute the replicas equally across the nodes, not a strict rule.
> {code}
>   {"replica": "#EQUAL", "node": "#ANY", "strict": false},
> {code}
> not more than 10 cores allowed on a single node, strict rule.
> {code}
>   {"cores": "<10", "node": "#ANY"}
> {code}
> cluster state ends up like:
> Screenshot -1
> Only the strict rule is followed and multiple replicas are added to single 
> Solr node, as rules are not strict – _{"replica":"<2", "shard": "#EACH", 
> "node": "#ANY", "strict": false}_
> While the following with all strict rule generate normal operational 
> behavior, add a replica to each shard of collection 'wiki' :
> {code}
> [
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
>   {"replica": "#EQUAL", "node": "#ANY"},
>   {"cores": "<10", "node": "#ANY"}
>   ]
> {code}
> Screenshot -2
> This behavior should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13226) Add note in Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13226:

Description: 
The Node Added Trigger does not abide by soft rules ({{"strict": false}}), 
which results in abnormal cluster operational behavior.

Let's say we wish to do the following:
1. No more than 10 cores reside on a single node.
2. Cores and replicas are distributed equally across nodes.

If we go by the following policy:

No more than one replica of each shard on a node (not a strict rule):
{code}
  {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
{code}
Distribute replicas equally across nodes (not a strict rule):
{code}
  {"replica": "#EQUAL", "node": "#ANY", "strict": false},
{code}
No more than 10 cores allowed on a single node (a strict rule):
{code}
  {"cores": "<10", "node": "#ANY"}
{code}
the cluster state ends up as shown in:

Screenshot 1

Only the strict rule is followed, and multiple replicas are added to a single 
Solr node because the other rules are not strict – _{"replica":"<2", "shard": 
"#EACH", "node": "#ANY", "strict": false}_

In contrast, the following policy with all rules strict generates normal 
operational behavior when adding a replica to each shard of collection 'wiki':
{code}
[
  {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
  {"replica": "#EQUAL", "node": "#ANY"},
  {"cores": "<10", "node": "#ANY"}
]
{code}

Screenshot 2

This behavior should be documented.

  was:
Node Added Trigger doesn't abide by SOFT rules [strict: false] and results in 
abnormal cluster operational behavior.

Let's say; we wish to do the following:
1. Not more than 10 cores reside on Single node.
2. Wish to distribute the cores, replicas equally to each Node.

If we go by the following policy:

not more than one replica for unique shard on a node. not a strict rule.
{code}
  {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
{code}
distribute the replicas equally across the nodes, not a strict rule.
{code}
  {"replica": "#EQUAL", "node": "#ANY", "strict": false},
{code}
not more than 10 cores allowed on a single node, strict rule.
{code}
  {"cores": "<10", "node": "#ANY"}
{code}
cluster state ends up like:

Screenshot -1

Only the strict rule is followed and multiple replicas are added to single Solr 
node, as rules are not strict – _{"replica":"<2", "shard": "#EACH", "node": 
"#ANY", "strict": false}_

While the following with all strict rule generate normal operational behavior, 
add a replica to each shard of collection 'wiki' :
{code}
[
  {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
  {"replica": "#EQUAL", "node": "#ANY"},
  {"cores": "<10", "node": "#ANY"}
  ]
{code}

This behavior should be documented.


> Add note in Node Added Trigger documentation in Autoscaling
> ---
>
> Key: SOLR-13226
> URL: https://issues.apache.org/jira/browse/SOLR-13226
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: Screen Shot 2019-02-05 at 3.55.31 AM.png, Screen Shot 
> 2019-02-05 at 4.02.00 AM.png
>
>
> Node Added Trigger doesn't abide by SOFT rules [strict: false] and results in 
> abnormal cluster operational behavior.
> Let's say; we wish to do the following:
> 1. Not more than 10 cores reside on Single node.
> 2. Wish to distribute the cores, replicas equally to each Node.
> If we go by the following policy:
> not more than one replica for unique shard on a node. not a strict rule.
> {code}
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
> {code}
> distribute the replicas equally across the nodes, not a strict rule.
> {code}
>   {"replica": "#EQUAL", "node": "#ANY", "strict": false},
> {code}
> not more than 10 cores allowed on a single node, strict rule.
> {code}
>   {"cores": "<10", "node": "#ANY"}
> {code}
> cluster state ends up like:
> Screenshot -1
> Only the strict rule is followed and multiple replicas are added to single 
> Solr node, as rules are not strict – _{"replica":"<2", "shard": "#EACH", 
> "node": "#ANY", "strict": false}_
> While the following with all strict rule generate normal operational 
> behavior, add a replica to each shard of collection 'wiki' :
> {code}
> [
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
>   {"replica": "#EQUAL", "node": "#ANY"},
>   {"cores": "<10", "node": "#ANY"}
>   ]
> {code}
> Screenshot -2
> This behavior should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13226) Add note for Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13226:

Summary: Add note for Node Added Trigger documentation in Autoscaling  
(was: Add note in Node Added Trigger documentation in Autoscaling)

> Add note for Node Added Trigger documentation in Autoscaling
> 
>
> Key: SOLR-13226
> URL: https://issues.apache.org/jira/browse/SOLR-13226
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: Screen Shot 2019-02-05 at 3.55.31 AM.png, Screen Shot 
> 2019-02-05 at 4.02.00 AM.png
>
>
> Node Added Trigger doesn't abide by SOFT rules [strict: false] and results in 
> abnormal cluster operational behavior.
> Let's say; we wish to do the following:
> 1. Not more than 10 cores reside on Single node.
> 2. Wish to distribute the cores, replicas equally to each Node.
> If we go by the following policy:
> not more than one replica for unique shard on a node. not a strict rule.
> {code}
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
> {code}
> distribute the replicas equally across the nodes, not a strict rule.
> {code}
>   {"replica": "#EQUAL", "node": "#ANY", "strict": false},
> {code}
> not more than 10 cores allowed on a single node, strict rule.
> {code}
>   {"cores": "<10", "node": "#ANY"}
> {code}
> cluster state ends up like:
> Screenshot -1
> Only the strict rule is followed and multiple replicas are added to single 
> Solr node, as rules are not strict – _{"replica":"<2", "shard": "#EACH", 
> "node": "#ANY", "strict": false}_
> While the following with all strict rule generate normal operational 
> behavior, add a replica to each shard of collection 'wiki' :
> {code}
> [
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
>   {"replica": "#EQUAL", "node": "#ANY"},
>   {"cores": "<10", "node": "#ANY"}
>   ]
> {code}
> Screenshot -2
> This behavior should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13226) Add note in Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13226:

Attachment: Screen Shot 2019-02-05 at 4.02.00 AM.png

> Add note in Node Added Trigger documentation in Autoscaling
> ---
>
> Key: SOLR-13226
> URL: https://issues.apache.org/jira/browse/SOLR-13226
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: Screen Shot 2019-02-05 at 3.55.31 AM.png, Screen Shot 
> 2019-02-05 at 4.02.00 AM.png
>
>
> Node Added Trigger doesn't abide by SOFT rules [strict: false] and results in 
> abnormal cluster operational behavior.
> Let's say; we wish to do the following:
> 1. Not more than 10 cores reside on Single node.
> 2. Wish to distribute the cores, replicas equally to each Node.
> If we go by the following policy:
> not more than one replica for unique shard on a node. not a strict rule.
> {code}
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
> {code}
> distribute the replicas equally across the nodes, not a strict rule.
> {code}
>   {"replica": "#EQUAL", "node": "#ANY", "strict": false},
> {code}
> not more than 10 cores allowed on a single node, strict rule.
> {code}
>   {"cores": "<10", "node": "#ANY"}
> {code}
> cluster state ends up like:
> Screenshot -1
> Only the strict rule is followed and multiple replicas are added to single 
> Solr node, as rules are not strict – _{"replica":"<2", "shard": "#EACH", 
> "node": "#ANY", "strict": false}_
> While the following with all strict rule generate normal operational 
> behavior, add a replica to each shard of collection 'wiki' :
> {code}
> [
>   {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
>   {"replica": "#EQUAL", "node": "#ANY"},
>   {"cores": "<10", "node": "#ANY"}
>   ]
> {code}
> This behavior should be documented.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-13226) Add note in Node Added Trigger documentation in Autoscaling

2019-02-06 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13226:
---

 Summary: Add note in Node Added Trigger documentation in 
Autoscaling
 Key: SOLR-13226
 URL: https://issues.apache.org/jira/browse/SOLR-13226
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Amrit Sarkar
 Attachments: Screen Shot 2019-02-05 at 3.55.31 AM.png

The Node Added Trigger does not abide by soft rules ({{"strict": false}}), 
which results in abnormal cluster operational behavior.

Let's say we wish to do the following:
1. No more than 10 cores reside on a single node.
2. Cores and replicas are distributed equally across nodes.

If we go by the following policy:

No more than one replica of each shard on a node (not a strict rule):
{code}
  {"replica":"<2", "shard": "#EACH", "node": "#ANY", "strict": false},
{code}
Distribute replicas equally across nodes (not a strict rule):
{code}
  {"replica": "#EQUAL", "node": "#ANY", "strict": false},
{code}
No more than 10 cores allowed on a single node (a strict rule):
{code}
  {"cores": "<10", "node": "#ANY"}
{code}
the cluster state ends up as shown in:

Screenshot 1

Only the strict rule is followed, and multiple replicas are added to a single 
Solr node because the other rules are not strict – _{"replica":"<2", "shard": 
"#EACH", "node": "#ANY", "strict": false}_

In contrast, the following policy with all rules strict generates normal 
operational behavior when adding a replica to each shard of collection 'wiki':
{code}
[
  {"replica":"<2", "shard": "#EACH", "node": "#ANY"},
  {"replica": "#EQUAL", "node": "#ANY"},
  {"cores": "<10", "node": "#ANY"}
]
{code}

This behavior should be documented.
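For reference, a cluster policy such as the strict variant above is installed by POSTing a {{set-cluster-policy}} command to the Autoscaling API (the endpoint shown is the Solr 7.x v2 path; host and port are illustrative):
{code}
curl -X POST -H 'Content-type:application/json' -d '{
  "set-cluster-policy": [
    {"replica": "<2", "shard": "#EACH", "node": "#ANY"},
    {"replica": "#EQUAL", "node": "#ANY"},
    {"cores": "<10", "node": "#ANY"}
  ]
}' http://localhost:8983/api/cluster/autoscaling
{code}
The policy currently in effect can be read back with a GET on the same endpoint.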



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-30 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13166:

Attachment: SOLR-13166.patch

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13166.patch, SOLR-13166.patch, SOLR-13166.patch
>
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-30 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16756308#comment-16756308
 ] 

Amrit Sarkar commented on SOLR-13166:
-

Attaching another patch with all of the above, plus:

support for default limits specified in cluster properties. For example, 
*"updateHandler.autoSoftCommit.maxTime.lower.limit"* sets the floor limit for 
*"updateHandler.autoSoftCommit.maxTime"*.

I intend to make this generic, with *"upper.limit"* and *"lower.limit"* as 
suffixes applicable to the various solrconfig.xml parameters.
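Assuming the patch lands as described, such a default could be set cluster-wide through the Collections API CLUSTERPROP action. Note that the property name below is the patch's proposal, not a released Solr property, and the value (in milliseconds) is illustrative:
{code}
curl "http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=updateHandler.autoSoftCommit.maxTime.lower.limit&val=1000"
{code}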

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13166.patch, SOLR-13166.patch, SOLR-13166.patch
>
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-28 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13166:

Attachment: SOLR-13166.patch

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13166.patch, SOLR-13166.patch
>
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752101#comment-16752101
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Requesting final feedback on the above. I am not able to test the Windows 
script (solr.cmd); it would be great if someone from the community could 
sanity-check it.

Sample startup command:
{code}
bin/solr start -c -p 8983 -w data
{code}

Thanks.
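Note that the {{-w}} option in the sample command is introduced by the attached patch. On an unpatched install, the closest existing equivalent (per SOLR-6671) is passing the system property directly; the data path below is illustrative:
{code}
bin/solr start -c -p 8983 -Dsolr.data.home=/var/solr/data
{code}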


> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13166:

Attachment: SOLR-13166.patch

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13166.patch
>
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16752042#comment-16752042
 ] 

Amrit Sarkar commented on SOLR-13166:
-

Attaching a patch with the following design:

1. SchemaChecksManager: performs a few hard-coded checks, such as flagging 
changes to docValues, indexed, multiValued etc. while documents are already 
indexed. The checks may or may not apply, but an error with a helpful, 
justifying message will be returned to the user.
2. SolrConfigChecksManager: performs a few hard-coded checks for autoCommit 
settings and cache sizes.

To bypass these checks and execute the command anyway, use the inline parameter 
*{{force=true}}*, e.g.:
{code}
curl http://localhost:8983/solr/wiki/config?force=true -H 
'Content-type:application/json' -d'
{
  "set-property": {
    "updateHandler.autoCommit.maxTime":15000,
    "updateHandler.autoCommit.openSearcher":false
  }
}'
{code}
{code}
curl -X POST -H 'Content-type:application/json' --data-binary '{
  "replace-field":{
     "name":"id",
     "type":"text_general",
     "stored":false }
}' http://localhost:8983/solr/wiki/schema?force=true
{code}

Requesting feedback, and suggestions for any other way we can tackle this issue.

> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13166?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13166:

Description: 
Schema and configuration changes made without understanding their implications 
can result in severe node failures, and considerable effort and time are then 
spent recovering from such situations.

A few such problematic situations:

* Too many fields in the schema
* Too many commits: auto commit interval set too short
* Spellchecker and suggester issues: building the suggester index on startup or 
on every commit causes memory pressure and latency issues
-- Schema mess-ups
* A text field commented out, so Solr refuses to reload the core
* Renaming the field type of the unique key or version field
* Changing a field from single-valued to multi-valued and vice versa
* Switching docValues on/off
* Changing a text field to string because the user wanted to facet on it

The intention is to add a layer above the Schema and Config APIs that performs 
such checks and lets the end user know the ramifications of the changes they 
intend to make.


  was:
While working with Solr, schema and configuration changes without understanding 
can result in severe node failures, and much effort and time get consumed to 
fix such situations.

Few such problematic situations can be:

* Too many fields in the schema
* Too many commits: too short auto commit
* Too high cache sizes set which bloats heap memory
* Spellchecker, suggester issues. Build suggester index on startup or on every 
commit causes memory pressure and latency issues
-- Schema mess-ups
* Text field commented out and Solr refuses to reload core
* Rename field type for unique key or version field
* Single-valued to multivalued and vice versa
* Switching between docvalues on/off
* Copy/pasting from old solr schema examples and trying them on new versions
* Changing text to string type because user wanted to facet on a text field
* CDCR: if user forgets turning off buffer and the target goes down, the tlog 
accumulates until node runs out of disk space or has huge recovery time.

The intention is to add a layer above Schema and Config API to have some checks 
and let the end user know the ramifications of the changes he/she intends to do.



> Add smart checks for Config and Schema API in Solr to avoid malicious updates
> -
>
> Key: SOLR-13166
> URL: https://issues.apache.org/jira/browse/SOLR-13166
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: config-api, Schema and Analysis
>Reporter: Amrit Sarkar
>Priority: Major
>
> While working with Solr, schema and configuration changes without 
> understanding can result in severe node failures, and much effort and time 
> get consumed to fix such situations.
> Few such problematic situations can be:
> * Too many fields in the schema
> * Too many commits: too short auto commit
> * Spellchecker, suggester issues. Build suggester index on startup or on 
> every commit causes memory pressure and latency issues
> -- Schema mess-ups
> * Text field commented out and Solr refuses to reload core
> * Rename field type for unique key or version field
> * Single-valued to multivalued and vice versa
> * Switching between docvalues on/off
> * Changing text to string type because user wanted to facet on a text field
> The intention is to add a layer above Schema and Config API to have some 
> checks and let the end user know the ramifications of the changes he/she 
> intends to do.






[jira] [Created] (SOLR-13166) Add smart checks for Config and Schema API in Solr to avoid malicious updates

2019-01-25 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13166:
---

 Summary: Add smart checks for Config and Schema API in Solr to 
avoid malicious updates
 Key: SOLR-13166
 URL: https://issues.apache.org/jira/browse/SOLR-13166
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: config-api, Schema and Analysis
Reporter: Amrit Sarkar


While working with Solr, schema and configuration changes made without understanding their implications can result in severe node failures, and much time and effort is then spent fixing such situations.

A few such problematic situations:

* Too many fields in the schema
* Too many commits: too short auto commit
* Cache sizes set too high, which bloat heap memory
* Spellchecker and suggester issues: building the suggester index on startup or on every 
commit causes memory pressure and latency issues
-- Schema mess-ups
* Text field commented out and Solr refuses to reload core
* Rename field type for unique key or version field
* Single-valued to multivalued and vice versa
* Switching between docvalues on/off
* Copy/pasting from old solr schema examples and trying them on new versions
* Changing text to string type because user wanted to facet on a text field
* CDCR: if the user forgets to turn off the buffer and the target goes down, the tlog 
accumulates until the node runs out of disk space or faces a huge recovery time.

The intention is to add a layer above the Schema and Config APIs that performs some checks 
and lets the end user know the ramifications of the changes they intend to make.
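As a rough illustration of what such a pre-flight check layer could look like (all class, field, and method names below are hypothetical, not existing Solr APIs), a validator might compare the old and new field definitions and report the ramifications before applying the change:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the proposed "smart check" layer: compare an old and
// a new field definition and collect warnings about risky schema changes.
// This is an illustration only, not Solr's actual Schema API code.
public class SchemaChangeChecker {

    // Minimal stand-in for a schema field definition.
    public static final class Field {
        final String type;
        final boolean multiValued;
        final boolean docValues;
        public Field(String type, boolean multiValued, boolean docValues) {
            this.type = type;
            this.multiValued = multiValued;
            this.docValues = docValues;
        }
    }

    // Returns human-readable warnings; an empty list means the change looks safe.
    public static List<String> warningsFor(Field oldF, Field newF) {
        List<String> warnings = new ArrayList<>();
        if (oldF.multiValued != newF.multiValued) {
            warnings.add("single-valued/multiValued flip requires a full reindex");
        }
        if (oldF.docValues != newF.docValues) {
            warnings.add("toggling docValues invalidates existing index data");
        }
        if (oldF.type.startsWith("text") && newF.type.equals("string")) {
            warnings.add("text -> string changes analysis; existing terms will not match");
        }
        return warnings;
    }

    public static void main(String[] args) {
        Field before = new Field("text_general", false, false);
        Field after = new Field("string", true, false);
        for (String w : warningsFor(before, after)) {
            System.out.println("WARNING: " + w);
        }
    }
}
```

Such a check could run inside the API handler and either reject the update or return the warnings alongside the response, leaving the final decision to the user.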







[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742776#comment-16742776
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 1/15/19 6:49 AM:
--

Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. {{SOLR_DATA_HOME}} will now be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. If the {{SOLR_DATA_HOME}} directory is not present on the server, an attempt 
will be made to create it.

I have only added tests for {{SOLR_DATA_HOME}}, not sure how to test startup 
script changes except manually. If everyone agrees with the above, I will add 
relevant documentation and finish this up.
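For clarity, the resolution rules above can be sketched as follows. This is a simplified model of the proposed defaults, not the actual bin/solr startup logic; in particular, the PID-dir fallback to {{SOLR_TIP}}/bin is an assumption based on point 4.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model of the proposed directory defaults; the map keys mirror
// the environment variables discussed above, but this is not Solr's code.
public class SolrDirDefaults {

    public static Map<String, String> resolve(Map<String, String> env) {
        Map<String, String> out = new HashMap<>(env);
        // 1. SOLR_VAR_ROOT defaults to SOLR_TIP (the installation dir).
        String varRoot = out.getOrDefault("SOLR_VAR_ROOT", out.get("SOLR_TIP"));
        out.put("SOLR_VAR_ROOT", varRoot);
        // 2./3. Data and logs default to subdirectories of SOLR_VAR_ROOT.
        out.putIfAbsent("SOLR_DATA_HOME", varRoot + "/data");
        out.putIfAbsent("SOLR_LOGS_DIR", varRoot + "/logs");
        // 4. PID dir: assumed fallback to the pre-change default, SOLR_TIP/bin.
        out.putIfAbsent("SOLR_PID_DIR", out.get("SOLR_TIP") + "/bin");
        return out;
    }

    public static void main(String[] args) {
        Map<String, String> env = new HashMap<>();
        env.put("SOLR_TIP", "/opt/solr");
        env.put("SOLR_VAR_ROOT", "/var/solr");
        // With SOLR_VAR_ROOT set explicitly, data lands under it:
        System.out.println(resolve(env).get("SOLR_DATA_HOME")); // /var/solr/data
    }
}
```

When {{SOLR_VAR_ROOT}} is not set, everything falls back under the installation directory, which matches the current behavior.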


was (Author: sarkaramr...@gmail.com):
Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. {{SOLR_DATA_HOME}} now will be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. {{SOLR_DATA_HOME}} is not present in the server, an attempt will be made to 
create the specified directory.

I have only added tests for {{SOLR_DATA_HOME}}, not sure how to test startup 
script changes except manually. If everyone agrees with the above, I will add 
relevant documentation and finish this up.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: (was: SOLR-13035.patch)

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: (was: SOLR-13035.patch)

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742776#comment-16742776
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 1/15/19 6:06 AM:
--

Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. {{SOLR_DATA_HOME}} now will be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. {{SOLR_DATA_HOME}} is not present in the server, an attempt will be made to 
create the specified directory.

I have only added tests for {{SOLR_DATA_HOME}}, not sure how to test startup 
script changes except manually. If everyone agrees with the above, I will add 
relevant documentation and finish this up.


was (Author: sarkaramr...@gmail.com):
Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. {{SOLR_DATA_HOME}} now will be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. {{SOLR_DATA_HOME}} is not present in the server, an attempt will be made to 
create the specified directory.

I have only added tests for {{SOLR_DATA_HOME}}, not sure how to test startup 
script changes except manually. If everyone agrees with all good above, I will 
add relevant documentation and finish this up.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742776#comment-16742776
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 1/15/19 6:05 AM:
--

Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. {{SOLR_DATA_HOME}} now will be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. {{SOLR_DATA_HOME}} is not present in the server, an attempt will be made to 
create the specified directory.

I have only added tests for {{SOLR_DATA_HOME}}, not sure how to test startup 
script changes except manually. If everyone agrees with all good above, I will 
add relevant documentation and finish this up.


was (Author: sarkaramr...@gmail.com):
Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. SOLR_DATA_HOME now will be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. SOLR_DATA_HOME is not present in the server, an attempt will be made to 
create the specified directory.

I have only added tests for SOLR_DATA_HOME, not sure how to test startup script 
changes except manually. If everyone agrees with all good above, I will add 
relevant documentation and finish this up.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16742776#comment-16742776
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Fresh patch uploaded, the design:

1. {{SOLR_VAR_ROOT}} introduced, defaults to {{SOLR_TIP}} (installation dir)
2. {{SOLR_DATA_HOME}} will be resolved to {{SOLR_VAR_ROOT}}/data if not 
passed explicitly
3. {{SOLR_LOGS_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/logs if not passed 
explicitly
4. {{SOLR_PID_DIR}} will be resolved to {{SOLR_VAR_ROOT}}/, if not passed 
then as before, {{SOLR_TIP}}/bin

a. SOLR_DATA_HOME now will be resolved to both instancePath and dataDir for 
cores, unlike just dataDir before.
b. SOLR_DATA_HOME is not present in the server, an attempt will be made to 
create the specified directory.

I have only added tests for SOLR_DATA_HOME, not sure how to test startup script 
changes except manually. If everyone agrees with all good above, I will add 
relevant documentation and finish this up.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: (was: SOLR-13035.patch)

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-13 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch, 
> SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-09 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-09 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16738678#comment-16738678
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Made limited progress:
 * SOLR_VAR_ROOT (defaults to SOLR_TIP) will exist only in the startup 
scripts, and SOLR_DATA_HOME and SOLR_LOGS_DIR will be set relative to it. I 
haven't yet configured SOLR_PID_DIR; it seems less straightforward.
 
Reason: {{solr.data.home}} and the other parameters are set in the startup scripts 
themselves, so they are resolved before being passed on to SolrCLI.java; the 
Solr admin UI dashboard etc. can then show the absolute data directory path.
{code}
  SOLR_START_OPTS=('-server' "${JAVA_MEM_OPTS[@]}" "${GC_TUNE[@]}" "${GC_LOG_OPTS[@]}" \
    "${REMOTE_JMX_OPTS[@]}" "${CLOUD_MODE_OPTS[@]}" $SOLR_LOG_LEVEL_OPT -Dsolr.log.dir="$SOLR_LOGS_DIR" \
    "-Djetty.port=$SOLR_PORT" "-DSTOP.PORT=$stop_port" "-DSTOP.KEY=$STOP_KEY" \
    "${SOLR_HOST_ARG[@]}" "-Duser.timezone=$SOLR_TIMEZONE" \
    "-Djetty.home=$SOLR_SERVER_DIR" "-Dsolr.solr.home=$SOLR_HOME" "-Dsolr.data.home=$SOLR_DATA_HOME" \
    "-Dsolr.install.dir=$SOLR_TIP" "-Dsolr.default.confdir=$DEFAULT_CONFDIR" \
    "${LOG4J_CONFIG[@]}" "${SOLR_OPTS[@]}")
{code}
Note: I haven't tried overriding the parameters in SolrCLI.java yet.

* Added {{solrVarRoot}} in SolrXmlConfig.

Working on adding documentation.

I will add the property to {{solr.cmd}}, plus tests, once we have agreement on 
the above usage of SOLR_VAR_ROOT. Hope this is OK; otherwise I can move modules 
around per suggestions.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-04 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16734037#comment-16734037
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Ah :( Right. It didn't fail to start for me, but it does fail to create the 
collection due to the missing constructor. Patch uploaded, with a new test 
covering collection creation. {{ant test}}, {{beasts}}, and {{precommit}} all 
pass; checked thoroughly.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Updated] (SOLR-11126) Node-level health check handler

2019-01-04 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: SOLR-11126.patch

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Updated] (SOLR-11126) Node-level health check handler

2019-01-04 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: (was: SOLR-11126.patch)

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Updated] (SOLR-11126) Node-level health check handler

2019-01-04 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: SOLR-11126.patch

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-04 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733958#comment-16733958
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Thanks, Shalin; apologies you had to review multiple times. I didn't think this 
point through before moving forward with the changes:

bq. The CommonParams used to have /admin/health but now has /admin/info/health. 
It is okay to change the path because this API has never been released but 
there is some inconsistency because ImplicitPlugins.json still has 
"/admin/health"

The following is mentioned in the {{ImplicitPlugins}} documentation: 

{code}
System Settings
Return server statistics and settings.

Documentation:
https://wiki.apache.org/solr/SystemInformationRequestHandlers#SystemInfoHandler

API Endpoints:
  v1: solr/admin/info/system
  v2: api/node/system
Class & Javadocs:
  {solr-javadocs}/solr-core/org/apache/solr/handler/admin/SystemInfoHandler.html[SystemInfoHandler]
Paramset:
  _ADMIN_SYSTEM

This endpoint can also take the collection or core name in the path
(solr//admin/system or solr//admin/system), which will include
all of the system-level information and additional information about the
specific core that served the request.
{code}

All the InfoHandlers are available at 
{{/solr//admin/}}, which is essentially what 
SolrClient uses. Thus, all info endpoints are listed in ImplicitPlugins.json:

{code}
"/admin/plugins": {
  "class": "solr.PluginInfoHandler"
},
"/admin/threads": {
  "class": "solr.ThreadDumpHandler",
  "useParams":"_ADMIN_THREADS"
},
"/admin/properties": {
  "class": "solr.PropertiesRequestHandler",
  "useParams":"_ADMIN_PROPERTIES"
},
"/admin/logging": {
  "class": "solr.LoggingHandler",
  "useParams":"_ADMIN_LOGGING"
},
{code}

Since this endpoint is available only in SolrCloud mode, I have added a note to 
that effect in the documentation. 100 rounds of beasting run successfully, and precommit passes.
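For comparison, a hypothetical ImplicitPlugins.json entry for the health endpoint, following the pattern of the handlers quoted above (the class name and the absence of a paramset here are assumptions for illustration, not taken from the patch):

```json
"/admin/health": {
  "class": "solr.HealthCheckHandler"
}
```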

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Updated] (SOLR-11126) Node-level health check handler

2019-01-04 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: SOLR-11126.patch

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Shalin Shekhar Mangar
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733214#comment-16733214
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Thank you Jan and Shalin for the fruitful discussion above; we do have 
consensus on SOLR_VAR_ROOT, with its default pointing to SOLR_TIP.

Working on it.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733214#comment-16733214
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 1/3/19 4:48 PM:
-

Thank you Jan and Shalin for the fruitful discussion above. So we have 
consensus on SOLR_VAR_ROOT, with its default pointing to SOLR_TIP.

Working on it.


was (Author: sarkaramr...@gmail.com):
Thank you Jan and Shalin for the fruitful discussion above and we do have 
consensus on SOLR_VAR_ROOT with it default pointing to SOLR_TIP. 

Working on it.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: SOLR-11126.patch

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16733211#comment-16733211
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Fresh patch uploaded, incorporating all suggestions made above. Thank you.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch, 
> SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-11126) Node-level health check handler

2019-01-03 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16732987#comment-16732987
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Thanks Shalin for the feedback. I see there are some details left to clean up 
and add. On it.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Created] (SOLR-13094) NPE while doing regular Facet

2018-12-24 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13094:
---

 Summary: NPE while doing regular Facet
 Key: SOLR-13094
 URL: https://issues.apache.org/jira/browse/SOLR-13094
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Facet Module
Affects Versions: 7.5.0
Reporter: Amrit Sarkar


I am issuing a regular facet query:
{code}
ModifiableSolrParams params = new ModifiableSolrParams()
.add("q", query.trim())
.add("rows", "0")
.add("facet", "true")
.add("facet.field", "description")
.add("facet.limit", "200");
{code}

Exception:
{code}
2018-12-24 15:50:20.843 ERROR (qtp690521419-130) [c:wiki s:shard2 r:core_node4 x:wiki_shard2_replica_n2] o.a.s.s.HttpSolrCall null:org.apache.solr.common.SolrException: Exception during facet.field: description
    at org.apache.solr.request.SimpleFacets.lambda$getFacetFieldCounts$0(SimpleFacets.java:832)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.solr.request.SimpleFacets$3.execute(SimpleFacets.java:765)
    at org.apache.solr.request.SimpleFacets.getFacetFieldCounts(SimpleFacets.java:841)
    at org.apache.solr.handler.component.FacetComponent.getFacetCounts(FacetComponent.java:329)
    at org.apache.solr.handler.component.FacetComponent.process(FacetComponent.java:273)
    at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:298)
    at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:199)
    at org.apache.solr.core.SolrCore.execute(SolrCore.java:2541)
    at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:709)
    at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:515)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:377)
    at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:323)
    at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1634)
    at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:533)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:146)
    at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:257)
    at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1595)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:255)
    at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1317)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:203)
    at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:473)
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1564)
    at org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:201)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1219)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:144)
    at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:219)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:126)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:132)
    at org.eclipse.jetty.server.Server.handle(Server.java:531)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:352)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:260)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:281)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:102)
    at org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:118)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(EatWhatYouKill.java:333)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.doProduce(EatWhatYouKill.java:310)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.tryProduce(EatWhatYouKill.java:168)
    at org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.run(EatWhatYouKill.java:126)
    at org.eclipse.jetty.util.thread.ReservedThreadExecutor$ReservedThread.

[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-18 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16724134#comment-16724134
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Hi [~janhoy],

SOLR_TIP is the system property defined in the Solr startup script that points 
to the Solr installation directory. If we unzip solr-7.5.0.tgz, 
{{[PATH-TO]/solr-7.5.0}} would be the SOLR_TIP. So yes, a default Linux script 
install will point SOLR_TIP to {{/opt/solr}}.

SOLR_VAR_ROOT would be a great addition operationally, but do we need another 
variable among the many already available? It is already quite confusing with 
{{instancePath}}, {{dataDir}}, and the other core-related properties.
One can define absolute paths for the DIR properties (SOLR_LOGS_DIR, SOLR_PID_DIR, 
etc.) if desired, and supporting relative paths will still work relative to the 
current directory / SOLR_HOME / SOLR_TIP, etc.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711533#comment-16711533
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 12/14/18 4:10 PM:
---


Thanks [~janhoy] for the feedback;

bq. Looks scary to do SOLR_LOGS_DIR="$(pwd)/$SOLR_LOGS_DIR". It would be better 
to require the user to configure absolute path here.
Sure. I felt supporting relative paths makes it a bit easier to deploy 
multiple Solr nodes on one machine/server. If {{$(pwd)/$SOLR_LOGS_DIR}} is not 
the right way to do it, then probably {{SOLR_TIP/$SOLR_LOGS_DIR}} is. Absolute 
paths are supported as in the original design.

bq. In general, I'd prefer of much of the logic regarding folder resolution etc 
was done in the common SolrCLI.java so as little logic as possible needs to be 
duplicated in bin/solr and bin/solr.cmd
Yeah, makes sense. I followed the same resolution criteria implemented for 
SOLR_HOME; I will move that logic to SolrCLI.java.

bq. While I believe solr.xml could be created if not exist, I'm not so sure we 
should go create SOLR_DATA_HOME if it does not exist?
Sure. Creating SOLR_DATA_HOME if it does not exist was again a minor 
convenience for the end user, just like $SOLR_LOGS_DIR, though not necessary.

bq. I agree on the goal of making Solr more container friendly and clearly 
separate what paths can be made R/O and what paths typically need a separate 
volume mounted. Can you perhaps create a diagram that clearly shows the various 
directory paths we have today with their defaults and how they will be changed?
We do not intend to change the respective default directory paths, but rather 
to make it easy to run Solr in a container environment.

SOLR_TIP -> solr installation directory
Default state: 

{code}
WRITE-specific dirs from a Solr/ZK node:
SOLR_TIP -> server -> solr -> [data_dir]     Index files
SOLR_TIP -> server -> solr -> [instance_dir] Core properties
SOLR_TIP -> server -> solr -> [zoo_data]     Embedded ZK data
SOLR_TIP -> server -> [logs]

READ-specific contents in the same directory [server/solr]:
SOLR_TIP -> server -> solr -> solr.xml       [changes require NODE restart]
SOLR_TIP -> server -> solr -> [configsets]   Default configsets
{code}

For the startup command below, with the current patch, the R/W directories 
will look like the following:
{code}
cd $SOLR_TIP
bin/solr start -c -p 8983 -t data -l logs
{code}

{code}
WRITE-specific dirs from a Solr/ZK node:
SOLR_TIP -> data -> [data_dir]     Index files
                    [instance_dir] Core properties
                    [zoo_data]     Embedded ZK data
SOLR_TIP -> logs

All other respective dirs would be READ-specific, and changes
to them require a NODE restart.
{code}

Adding support for creating solr.xml if it does not exist would be helpful. 
That way, as mentioned above, pointing SOLR_HOME to an empty directory would be 
enough to achieve the defined R/W directories. The intention of the patch, 
though, was to separate out the directories that are created / modified after a 
node restart. The default {{SOLR_HOME}} can still point to 
[SOLR_TIP]/server/solr and pick up the default solr.xml.

Looking forward to feedback.


was (Author: sarkaramr...@gmail.com):
host: ftp.lucidworks.com
port: 22



  @Override
  public Collection getApis() {
return singletonList(new ApiBag.ReqHandlerToApi(this, 
getSpec("node.Health")));
  }

  @Override
  public Boolean registerV1() {
return Boolean.FALSE;
  }

  @Override
  public Boolean registerV2() {
return Boolean.TRUE;
  }


{
  "description": "Provides information about system health for a node.",
  "methods": ["GET"],
  "url": {
"paths": [
  "/node/health"
]
  }
}




Thanks [~janhoy] for the feedback;

bq. Looks scary to do SOLR_LOGS_DIR="$(pwd)/$SOLR_LOGS_DIR". It would be better 
to require the user to configure absolute path here.
Sure. I felt supporting relative path makes it a bit easy for deploying 
multiple Solr nodes on the machine/server. If  {{$(pwd)/$SOLR_LOGS_DIR}} is not 
the right way to do; probably {{SOLR_TIP/$SOLR_LOGS_DIR}}. The absolute path is 
supported as the original design.

bq. In general, I'd prefer of much of the logic regarding folder resolution etc 
was done in the common SolrCLI.java so as little logic as possible needs to be 
duplicated in bin/solr and bin/solr.cmd
Yeah makes sense. I followed the same resolution criteria implemented for 
SOLR_HOME, will move that logic to SolrCLI.java.

bq. While I believe solr.xml could be created if not exist, I'm not so sure we 
should go create SOLR_DATA_HOME if it does not exist?
Sure. Creating SOLR_DATA_HOME if not exist was again minor improvement for 
end-user, just like $SOLR_LOGS_DIR, though not necessary.

bq. I agree on the goal of making Solr more container friendly and cl

[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711533#comment-16711533
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 12/14/18 4:05 PM:
---

host: ftp.lucidworks.com
port: 22



  @Override
  public Collection getApis() {
return singletonList(new ApiBag.ReqHandlerToApi(this, 
getSpec("node.Health")));
  }

  @Override
  public Boolean registerV1() {
return Boolean.FALSE;
  }

  @Override
  public Boolean registerV2() {
return Boolean.TRUE;
  }


{
  "description": "Provides information about system health for a node.",
  "methods": ["GET"],
  "url": {
"paths": [
  "/node/health"
]
  }
}




Thanks [~janhoy] for the feedback;

bq. Looks scary to do SOLR_LOGS_DIR="$(pwd)/$SOLR_LOGS_DIR". It would be better 
to require the user to configure absolute path here.
Sure. I felt supporting relative path makes it a bit easy for deploying 
multiple Solr nodes on the machine/server. If  {{$(pwd)/$SOLR_LOGS_DIR}} is not 
the right way to do; probably {{SOLR_TIP/$SOLR_LOGS_DIR}}. The absolute path is 
supported as the original design.

bq. In general, I'd prefer of much of the logic regarding folder resolution etc 
was done in the common SolrCLI.java so as little logic as possible needs to be 
duplicated in bin/solr and bin/solr.cmd
Yeah makes sense. I followed the same resolution criteria implemented for 
SOLR_HOME, will move that logic to SolrCLI.java.

bq. While I believe solr.xml could be created if not exist, I'm not so sure we 
should go create SOLR_DATA_HOME if it does not exist?
Sure. Creating SOLR_DATA_HOME if not exist was again minor improvement for 
end-user, just like $SOLR_LOGS_DIR, though not necessary.

bq. I agree on the goal of making Solr more container friendly and clearly 
separate what paths can be made R/O and what paths typically need a separate 
volume mounted. Can you perhaps create a diagram that clearly shows the various 
directory paths we have today with their defaults and how they will be changed?
We intend not to change the default respective directory paths but instead 
makes it easy to run in a container environment. 

SOLR_TIP -> solr installation directory
Default state: 

{code}
WRITE specific dirs from Solr/Zk node.
SOLR_TIP -> server --> solr --> [data_dir] Index files
SOLR_TIP -> server --> solr --> [instance_dir] Core properties
SOLR_TIP -> server --> solr --> [zoo_data] Embedded ZK data
SOLR_TIP -> server --> [logs]

READ specific contents in the same directory [server/solr]
SOLR_TIP -> server --> solr --> solr.xml [changes requires NODE 
restart]
SOLR_TIP -> server --> solr --> [configsets] Default config sets
{code}

For the below-stated startup command with the current patch; R/W directories 
will look like following;
{code}
cd $SOLR_TIP
bin/solr start -c -p 8983 -t data -l logs
{code}

{code}
WRITE specific dirs from Solr/Zk node.
SOLR_TIP -> data -> [data_dir] Index files
                    [instance_dir] Core properties
                    [zoo_data] Embedded ZK data
SOLR_TIP -> logs

All other respective dirs would be READ specific, and changes
in them require NODE restart.
{code}

Adding support for creating solr.xml if it does not exist would be helpful. That 
way, as mentioned above, pointing SOLR_HOME to an empty directory would be enough 
to achieve the defined R/W directories. Though the intention of the patch was to 
separate the directories which are created / modified after node restart, the 
default {{SOLR_HOME}} can still point to [SOLR_TIP]/server/solr and pick up the 
default solr.xml.
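As a rough illustration, "create solr.xml if it does not exist" could look like the sketch below. The default XML body and the class/method names are placeholders for illustration, not Solr's shipped solr.xml or actual startup code:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SolrXmlBootstrap {
    // Minimal placeholder content; the real default solr.xml ships with Solr.
    private static final String DEFAULT_SOLR_XML = "<solr>\n</solr>\n";

    // Returns true when a new solr.xml was created, false when one already existed.
    public static boolean ensureSolrXml(Path solrHome) throws IOException {
        Path solrXml = solrHome.resolve("solr.xml");
        if (Files.exists(solrXml)) {
            return false;
        }
        Files.createDirectories(solrHome);
        Files.write(solrXml, DEFAULT_SOLR_XML.getBytes(StandardCharsets.UTF_8));
        return true;
    }

    public static void main(String[] args) throws IOException {
        Path home = Files.createTempDirectory("solr-home");
        System.out.println(ensureSolrXml(home)); // created on first call
        System.out.println(ensureSolrXml(home)); // already present afterwards
    }
}
```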

Looking forward to feedback.



[jira] [Updated] (SOLR-11126) Node-level health check handler

2018-12-14 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11126:

Attachment: SOLR-11126.patch

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11126) Node-level health check handler

2018-12-14 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16721546#comment-16721546
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Integrated SOLR-11126-v2.patch into a fresh patch, SOLR-11126.patch, against the 
*{{master}}* branch; the V1 API path is changed to *{{/admin/info/health}}*, and 
the V2 API *{{/api/node/health}}* is now supported.

The V1 API path was moved under InfoHandler, in line with the already supported 
V2 APIs listed in 
https://lucene.apache.org/solr/guide/7_5/implicit-requesthandlers.html.

*  added another negative test for when CoreContainer is not available (null or 
shut down)
*  removed redundant overridden methods
*  added relevant documentation

Tests and Precommit pass. Requesting review and feedback.







[jira] [Commented] (SOLR-11126) Node-level health check handler

2018-12-13 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11126?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16721029#comment-16721029
 ] 

Amrit Sarkar commented on SOLR-11126:
-

Hi [~anshumg], thank you for adding this handler; it can be used in cloud-native 
environments for liveliness checks.

I don't see the handler documented in the official documentation. Is there any 
specific reason for that, e.g. the tests are not stable or more checks need to be 
added? I am willing to work on completing what's remaining.

> Node-level health check handler
> ---
>
> Key: SOLR-11126
> URL: https://issues.apache.org/jira/browse/SOLR-11126
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
>Priority: Major
> Fix For: master (8.0)
>
> Attachments: SOLR-11126-v2.patch, SOLR-11126.patch
>
>
> Solr used to have the PING handler at core level, but with SolrCloud, we are 
> missing a node level health check handler. It would be good to have. The API 
> would look like:
> * solr/admin/health (v1 API)
> * solr/node/health (v2 API)






[jira] [Commented] (SOLR-13055) Introduce check to determine "liveliness" of a Solr node

2018-12-11 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16717687#comment-16717687
 ] 

Amrit Sarkar commented on SOLR-13055:
-

SOLR-11126 already introduced the above-stated checks as part of 
_HealthCheckHandler_ with the endpoint *{{/admin/health}}*.

That handler is not mentioned in the documentation, which is why I missed it. I 
will comment on that Jira to add the appropriate docs.

Closing this JIRA as a duplicate; we can open a new one for enhancements.

> Introduce check to determine "liveliness" of a Solr node
> 
>
> Key: SOLR-13055
> URL: https://issues.apache.org/jira/browse/SOLR-13055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Minor
>  Labels: cloud, native
>
> As the applications are becoming cloud-friendly; there are multiple probes 
> which are required to verify availability of a node.
> Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
>  determine if a node is live and ready to serve live traffic.
> Solr should also support such probes out of the box as an API or otherwise to 
> make things easier. In this JIRA, we are tracking the necessary checks we 
> need to determine if a node is  'liveliness', in all modes, standalone and 
> cloud.






[jira] [Resolved] (SOLR-13055) Introduce check to determine "liveliness" of a Solr node

2018-12-11 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar resolved SOLR-13055.
-
Resolution: Duplicate







[jira] [Commented] (SOLR-13055) Introduce check to determine "liveliness" of a Solr node

2018-12-10 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16716357#comment-16716357
 ] 

Amrit Sarkar commented on SOLR-13055:
-

This can be achieved in multiple ways; in the following I suggest introducing an 
"*isLive*" parameter in *SystemInfoHandler*.

Command: {code}http://localhost:8983/solr/admin/info/system?isLive=true{code}
Responses:
{code}
{
  "responseHeader":{
    "status":0,
    "QTime":1524}}
{code}
{code}
{
  "responseHeader":{
    "status":503,
    "QTime":2},
  "error":{
    "metadata":[
      "error-class","org.apache.solr.common.SolrException",
      "root-error-class","org.apache.solr.common.SolrException"],
    "msg":"Connection to Zookeeper is lost. Node unhealthy.",
    "code":503}}
{code}
note: the response metadata is added in every response in the base class.

Three checks for liveliness are in place:
1. Node upTime is greater than 0 milliseconds.
2. CoreContainer is not null and is active, not shut down.
3. Able to connect to ZooKeeper, when the node is started in SolrCloud mode.
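These three checks can be sketched as follows. The NodeState interface here is a hypothetical stand-in for illustration; the real implementation reads the uptime, CoreContainer, and ZooKeeper connection state from Solr's own classes, none of whose APIs are reproduced here:

```java
public class LivenessCheck {
    // Hypothetical view of the node's state; stands in for Solr internals.
    interface NodeState {
        long upTimeMillis();
        boolean coreContainerActive();   // non-null and not shut down
        boolean zkConnected();           // only meaningful in SolrCloud mode
        boolean isCloudMode();
    }

    // Returns true only when every applicable check passes.
    public static boolean isLive(NodeState node) {
        if (node.upTimeMillis() <= 0) return false;
        if (!node.coreContainerActive()) return false;
        if (node.isCloudMode() && !node.zkConnected()) return false;
        return true;
    }

    public static void main(String[] args) {
        NodeState standalone = new NodeState() {
            public long upTimeMillis() { return 5000; }
            public boolean coreContainerActive() { return true; }
            public boolean zkConnected() { return false; }
            public boolean isCloudMode() { return false; }
        };
        // true: the ZooKeeper check is skipped in standalone mode.
        System.out.println(isLive(standalone));
    }
}
```

Note that the ZooKeeper check only applies in SolrCloud mode, which is why it is gated on the mode flag.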

I am quite happy with these basic node-level checks and the design, and am 
requesting feedback and suggestions.









[jira] [Created] (SOLR-13055) Introduce check to determine "liveliness" of a Solr node

2018-12-10 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13055:
---

 Summary: Introduce check to determine "liveliness" of a Solr node
 Key: SOLR-13055
 URL: https://issues.apache.org/jira/browse/SOLR-13055
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Amrit Sarkar


As the applications becoming cloud friendly; there are multiple probes which 
are required to verify availability of a node.

Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
 determine if a node is live and ready to serve live traffic.

Solr should also support such probes out of the box as an API or otherwise to 
make things easier. In this JIRA, we are tracking the necessary checks we need 
to determine if a node is  'liveliness', in all modes, standalone and cloud.






[jira] [Updated] (SOLR-13055) Introduce check to determine "liveliness" of a Solr node

2018-12-10 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13055:

Description: 
As the applications are becoming cloud-friendly; there are multiple probes 
which are required to verify availability of a node.

Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
 determine if a node is live and ready to serve live traffic.

Solr should also support such probes out of the box as an API or otherwise to 
make things easier. In this JIRA, we are tracking the necessary checks we need 
to determine if a node is  'liveliness', in all modes, standalone and cloud.

  was:
As the applications EW becoming cloud friendly; there are multiple probes which 
are required to verify availability of a node.

Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
 determine if a node is live and ready to serve live traffic.

Solr should also support such probes out of the box as an API or otherwise to 
make things easier. In this JIRA, we are tracking the necessary checks we need 
to determine if a node is  'liveliness', in all modes, standalone and cloud.








[jira] [Updated] (SOLR-13055) Introduce check to determine "liveliness" of a Solr node

2018-12-10 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13055:

Description: 
As the applications EW becoming cloud friendly; there are multiple probes which 
are required to verify availability of a node.

Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
 determine if a node is live and ready to serve live traffic.

Solr should also support such probes out of the box as an API or otherwise to 
make things easier. In this JIRA, we are tracking the necessary checks we need 
to determine if a node is  'liveliness', in all modes, standalone and cloud.

  was:
As the applications becoming cloud friendly; there are multiple probes which 
are required to verify availability of a node.

Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
 determine if a node is live and ready to serve live traffic.

Solr should also support such probes out of the box as an API or otherwise to 
make things easier. In this JIRA, we are tracking the necessary checks we need 
to determine if a node is  'liveliness', in all modes, standalone and cloud.


> Introduce check to determine "liveliness" of a Solr node
> 
>
> Key: SOLR-13055
> URL: https://issues.apache.org/jira/browse/SOLR-13055
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Minor
>  Labels: cloud, native
>
> As the applications EW becoming cloud friendly; there are multiple probes 
> which are required to verify availability of a node.
> Like in Kubernetes we need 'liveliness' and 'readiness' probe explained in 
> https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-probes/n
>  determine if a node is live and ready to serve live traffic.
> Solr should also support such probes out of the box as an API or otherwise to 
> make things easier. In this JIRA, we are tracking the necessary checks we 
> need to determine if a node is  'liveliness', in all modes, standalone and 
> cloud.






[jira] [Commented] (SOLR-12934) Make Update Request Processors CDCR aware (i.e. skip process if CDCR forwarded update)

2018-12-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12934?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16712248#comment-16712248
 ] 

Amrit Sarkar commented on SOLR-12934:
-

Looking at the code: for each URP, the base methods processAdd, processDelete, 
etc. are called at the end of each overridden method implementation, which 
triggers the next URP in the chain, like:
{code}
@Override
public void processAdd(AddUpdateCommand cmd) throws IOException {
  final SolrInputDocument doc = cmd.getSolrInputDocument();
  if (!doc.containsKey(fieldName)) {
    doc.addField(fieldName, getDefaultValue());
  }
  super.processAdd(cmd);
}
{code}
which means we have to add the CDCR-aware logic to each overridden method; it is 
cumbersome, but it works fine. I updated CdcrBidirectionalTest so that the 
clusters have their own default URP chains, and it works.
We still need to update the documentation to make clear that for custom URPs you 
need to implement the logic yourself, so this is a half-baked solution:
{code}
  if (cmd.getReq().getParams().get(CDCR_UPDATE) != null) {
    super.processAdd(cmd);
    return;
  }
{code}
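The skip-if-forwarded guard can be sketched end-to-end as below. The Update class and the chain wiring are simplified stand-ins for illustration, not Solr's real AddUpdateCommand and UpdateRequestProcessor base classes:

```java
import java.util.HashSet;
import java.util.Set;

public class CdcrAwareProcessor {
    // Marker parameter carried by CDCR-forwarded updates (illustrative name).
    static final String CDCR_UPDATE = "cdcr.update";

    // Simplified stand-in for an add command: request params plus doc fields.
    static class Update {
        final Set<String> params = new HashSet<>();
        final Set<String> fields = new HashSet<>();
    }

    private final String fieldName;
    private final CdcrAwareProcessor next; // next URP in the chain, may be null

    CdcrAwareProcessor(String fieldName, CdcrAwareProcessor next) {
        this.fieldName = fieldName;
        this.next = next;
    }

    void processAdd(Update cmd) {
        // CDCR-forwarded update: skip our own transformation, just forward.
        if (cmd.params.contains(CDCR_UPDATE)) {
            if (next != null) next.processAdd(cmd);
            return;
        }
        // Normal update: apply this processor's logic, then forward.
        if (!cmd.fields.contains(fieldName)) {
            cmd.fields.add(fieldName); // stand-in for addField(name, default)
        }
        if (next != null) next.processAdd(cmd);
    }

    public static void main(String[] args) {
        CdcrAwareProcessor chain = new CdcrAwareProcessor("timestamp", null);

        Update regular = new Update();
        chain.processAdd(regular);
        System.out.println(regular.fields); // [timestamp]

        Update forwarded = new Update();
        forwarded.params.add(CDCR_UPDATE);
        chain.processAdd(forwarded);
        System.out.println(forwarded.fields); // [] — transformation skipped
    }
}
```

This mirrors the point above: the guard has to appear in every overridden method, since each method is an independent entry point into the chain.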

> Make Update Request Processors CDCR aware (i.e. skip process if CDCR 
> forwarded update)
> --
>
> Key: SOLR-12934
> URL: https://issues.apache.org/jira/browse/SOLR-12934
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, UpdateRequestProcessors
>Reporter: Amrit Sarkar
>Priority: Major
>
> While [setting up 
> CDCR|https://lucene.apache.org/solr/guide/7_5/cdcr-config.html] in 
> {{solrconfig.xml}} at {{target}} cluster, we need to make default update 
> processor chain with {{CdcrUpdateProcessorFactory}} like:
> {code}
> 
>   
>   
> 
> {code}
> {code}
> 
>   
> cdcr-processor-chain
>   
> 
> {code}
> The motivation having a default update processor chain with no other but 
> {{CdcrUpdateProcessorFactory}} is to NOT MODIFY already processed and 
> transformed data at source. And it works perfectly.
> In {{Bidirectional}} scenario, we need to set this default chain at both 
> clusters, source & target. And while sending documents from application side; 
> we need to EXPLICITLY SET 
> [update.chain|https://lucene.apache.org/solr/guide/6_6/update-request-processors.html#UpdateRequestProcessors-CustomUpdateRequestProcessorChain]
>  with each batch at the primary/source cluster. This introduces an extra 
> activity/effort at the application end.
> It would be great if we can make Update Request Processors CDCR aware; i.e. 
> skip and don't process the doc batches which are CDCR forwarded ones and 
> treat the others as default. 






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711533#comment-16711533
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Thanks [~janhoy] for the feedback;

> Looks scary to do SOLR_LOGS_DIR="$(pwd)/$SOLR_LOGS_DIR". It would be better 
> to require the user to configure absolute path here.
Sure. I felt supporting a relative path makes it a bit easier when deploying 
multiple Solr nodes on the same machine/server. If {{$(pwd)/$SOLR_LOGS_DIR}} is 
not the right way to do it, {{SOLR_TIP/$SOLR_LOGS_DIR}} is probably better. 
Absolute paths remain supported, as in the original design.

>In general, I'd prefer of much of the logic regarding folder resolution etc 
>was done in the common SolrCLI.java so as little logic as possible needs to be 
>duplicated in bin/solr and bin/solr.cmd
Yeah, that makes sense. I followed the same resolution criteria implemented for 
SOLR_HOME; I will move that logic to SolrCLI.java.

>While I believe solr.xml could be created if not exist, I'm not so sure we 
>should go create SOLR_DATA_HOME if it does not exist?
Sure. Creating SOLR_DATA_HOME if it does not exist was again a minor 
convenience for the end user, just like $SOLR_LOGS_DIR, though not necessary.

> I agree on the goal of making Solr more container friendly and clearly 
> separate what paths can be made R/O and what paths typically need a separate 
> volume mounted. Can you perhaps create a diagram that clearly shows the 
> various directory paths we have today with their defaults and how they will 
> be changed?
We do not intend to change the respective default directory paths, but rather 
to make it easy to run in a container environment. 

SOLR_TIP -> solr installation directory
Default state: 

{code}
WRITE specific dirs from Solr/Zk node.
SOLR_TIP -> server --> solr --> [data_dir] Index files
SOLR_TIP -> server --> solr --> [instance_dir] Core properties
SOLR_TIP -> server --> solr --> [zoo_data] Embedded ZK data
SOLR_TIP -> server --> [logs]

READ specific contents in the same directory [server/solr]
SOLR_TIP -> server --> solr --> solr.xml [changes require NODE restart]
SOLR_TIP -> server --> solr --> [configsets] Default config sets
{code}

With the current patch, for the startup command below, the R/W directories 
will look like the following:
{code}
cd $SOLR_TIP
bin/solr start -c -p 8983 -t data -l logs
{code}

{code}
WRITE specific dirs from Solr/Zk node.
SOLR_TIP -> [data_dir] Index files
SOLR_TIP -> [instance_dir] Core properties
SOLR_TIP -> [zoo_data] Embedded ZK data
SOLR_TIP -> [logs]

All other respective dirs would be READ specific; changes in them
require a NODE restart.
{code}

Adding support for creating solr.xml if it does not exist will be helpful. 
That way, as mentioned above, pointing SOLR_HOME to an empty directory would be 
enough to achieve the defined R/W directories. The intention of the patch, 
though, was to separate out the directories that are created/modified after a 
node restart. The default {{SOLR_HOME}} can still point to 
[SOLR_TIP]/server/solr and pick up the default solr.xml.

Looking forward to feedback.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711533#comment-16711533
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 12/6/18 2:49 PM:
--

Thanks [~janhoy] for the feedback;

bq. Looks scary to do SOLR_LOGS_DIR="$(pwd)/$SOLR_LOGS_DIR". It would be better 
to require the user to configure absolute path here.
Sure. I felt supporting a relative path makes it a bit easier to deploy 
multiple Solr nodes on the same machine/server. If {{$(pwd)/$SOLR_LOGS_DIR}} is 
not the right way to do it, then perhaps {{SOLR_TIP/$SOLR_LOGS_DIR}}. Absolute 
paths are supported as in the original design.

bq. In general, I'd prefer of much of the logic regarding folder resolution etc 
was done in the common SolrCLI.java so as little logic as possible needs to be 
duplicated in bin/solr and bin/solr.cmd
Yeah, that makes sense. I followed the same resolution criteria implemented for 
SOLR_HOME; I will move that logic to SolrCLI.java.

bq. While I believe solr.xml could be created if not exist, I'm not so sure we 
should go create SOLR_DATA_HOME if it does not exist?
Sure. Creating SOLR_DATA_HOME if it does not exist was again a minor 
convenience for the end user, just like $SOLR_LOGS_DIR, though not necessary.

bq. I agree on the goal of making Solr more container friendly and clearly 
separate what paths can be made R/O and what paths typically need a separate 
volume mounted. Can you perhaps create a diagram that clearly shows the various 
directory paths we have today with their defaults and how they will be changed?
We do not intend to change the respective default directory paths, but rather 
to make it easy to run in a container environment. 

SOLR_TIP -> solr installation directory
Default state: 

{code}
WRITE specific dirs from Solr/Zk node.
SOLR_TIP -> server --> solr --> [data_dir] Index files
SOLR_TIP -> server --> solr --> [instance_dir] Core properties
SOLR_TIP -> server --> solr --> [zoo_data] Embedded ZK data
SOLR_TIP -> server --> [logs]

READ specific contents in the same directory [server/solr]
SOLR_TIP -> server --> solr --> solr.xml [changes requires NODE 
restart]
SOLR_TIP -> server --> solr --> [configsets] Default config sets
{code}

With the current patch, for the startup command below, the R/W directories 
will look like the following:
{code}
cd $SOLR_TIP
bin/solr start -c -p 8983 -t data -l logs
{code}

{code}
WRITE specific dirs from Solr/Zk node.
SOLR_TIP -> [data_dir] Index files
SOLR_TIP -> [instance_dir] Core properties
SOLR_TIP -> [zoo_data] Embedded ZK data
SOLR_TIP -> [logs]

All other respective dirs would be READ specific; and changes 
in them requires NODE restart.
{code}

Adding support for creating solr.xml if it does not exist will be helpful. 
That way, as mentioned above, pointing SOLR_HOME to an empty directory would be 
enough to achieve the defined R/W directories. The intention of the patch, 
though, was to separate out the directories that are created/modified after a 
node restart. The default {{SOLR_HOME}} can still point to 
[SOLR_TIP]/server/solr and pick up the default solr.xml.
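The "create solr.xml if missing" idea could look roughly like this; the stub file content and paths are illustrative, not Solr's shipped defaults:

```python
# Sketch: ensure a data home exists and seed it with a default solr.xml when
# missing, as proposed above. The stub content and paths are illustrative.
import os
import tempfile

def ensure_data_home(path):
    os.makedirs(path, exist_ok=True)
    solr_xml = os.path.join(path, "solr.xml")
    if not os.path.exists(solr_xml):     # only seed when the file is absent
        with open(solr_xml, "w") as f:
            f.write("<solr>\n  <!-- minimal stub; a real default ships with Solr -->\n</solr>\n")
    return solr_xml

data_home = os.path.join(tempfile.mkdtemp(), "data")
print(os.path.exists(ensure_data_home(data_home)))  # -> True
```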

Looking forward to feedback.




[jira] [Comment Edited] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710768#comment-16710768
 ] 

Amrit Sarkar edited comment on SOLR-13035 at 12/6/18 2:03 PM:
--

Thanks [~elyograg] and [~janhoy],

I see in SOLR-6671 the motivation behind {{solr.data.home}} and understand why 
we don't wish to have core properties under the same directory. I strongly 
agree with [~shalinmangar] on making Solr easy to use with Docker/containers 
without out-of-the-ordinary configuration or workarounds.

With the current patch, the entire {{server/solr}} tree can be READ-ONLY, and 
the directories specified at Solr startup:
{code}
cd $SOLR_TIP
bin/solr start -c -p 8983 -t data -l logs
{code}
would be WRITE-ONLY.

I can work on a design we have consensus on. Looking forward to feedback; I can 
start right away, and as mentioned we can get it into version 8.0.







> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-06 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16711466#comment-16711466
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Feedback on the below statement:

bq. Why are you changing the default location of logs from SOLR_TIP/server/logs 
to SOLR_HOME/logs, i.e. SOLR_TIP/server/solr/logs? Also, the installer script 
explicitly configures SOLR_LOGS_DIR to /var/solr/logs, and not to 
/var/solr/data/logs as would be the equivalent if it belongs inside SOLR_HOME?

I apologize, I uploaded the patch with a typo; I didn't intend to change the 
default {{SOLR_LOGS_DIR}}. I have uploaded the corrected patch.

Reviewing other comments and will share thoughts shortly.



> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch, SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-05 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16710768#comment-16710768
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Thanks [~elyograg] and [~janhoy],

I see in SOLR-6671 the motivation behind {{solr.data.home}} and understand why 
we don't wish to have core properties under the same directory. I strongly 
agree with [~shalinmangar] on making Solr easy to use with Docker/containers 
without out-of-the-ordinary configuration or workarounds.

With the current patch, the entire {{server/solr}} tree can be READ-ONLY, and 
the directories specified at Solr startup:
{code}
bin/solr start -c -p 8983 -t data -l logs
{code}
would be WRITE-ONLY.

I can work on a design we have consensus on. Looking forward to feedback; I can 
start right away, and as mentioned we can get it into version 8.0.
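Under that split, a container setup would mount only the writable paths as volumes. A rough sanity check of such a layout can be sketched as follows; all paths here are illustrative assumptions:

```python
# Sketch: check that every path Solr writes to falls under a mounted volume,
# matching the READ-ONLY install / WRITE-ONLY data split described above.
# All paths here are illustrative assumptions.
WRITABLE_MOUNTS = ["/var/solr/data", "/var/solr/logs"]

def is_writable(path):
    # A path is writable iff it equals, or sits under, a mounted volume.
    return any(path == m or path.startswith(m + "/") for m in WRITABLE_MOUNTS)

print(is_writable("/var/solr/data/index"))   # -> True
print(is_writable("/opt/solr/server/solr"))  # -> False
```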



> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-03 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Description: 
{{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is already 
available as per SOLR-6671.

The writable content in Solr are index files, core properties, and ZK data if 
embedded zookeeper is started in SolrCloud mode. It would be great if all 
writable content can come under the same directory to have separate READ-ONLY 
and WRITE-ONLY directories.

It can then also solve official docker Solr image issues:
https://github.com/docker-solr/docker-solr/issues/74
https://github.com/docker-solr/docker-solr/issues/133

  was:
{{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is already 
available as per SOLR-6671.

The writable content in Solr are index files, core properties, and ZK data if 
embedded zookeeper is started in SolrCloud mode. It would be great if all 
writable content can come under the same directory to have seperate READ-ONLY 
and WRITE-ONLY directories.

It can then also solve official docker Solr image issues:
https://github.com/docker-solr/docker-solr/issues/74
https://github.com/docker-solr/docker-solr/issues/133


> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have separate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-03 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Description: 
{{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is already 
available as per SOLR-6671.

The writable content in Solr are index files, core properties, and ZK data if 
embedded zookeeper is started in SolrCloud mode. It would be great if all 
writable content can come under the same directory to have seperate READ-ONLY 
and WRITE-ONLY directories.

It can then also solve official docker Solr image issues:
https://github.com/docker-solr/docker-solr/issues/74
https://github.com/docker-solr/docker-solr/issues/133

  was:
{{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is already 
available as per SOLR-6671.

The writable content in Solr are index files, core properties, and ZK data if 
embedded zookeeper is also started in SolrCloud mode. It would be great if all 
writable content can come under the same directory.

It can then also solve official docker Solr image issues:
https://github.com/docker-solr/docker-solr/issues/74
https://github.com/docker-solr/docker-solr/issues/133


> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is started in SolrCloud mode. It would be great if all 
> writable content can come under the same directory to have seperate READ-ONLY 
> and WRITE-ONLY directories.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Updated] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-02 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-13035:

Attachment: SOLR-13035.patch

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
> Attachments: SOLR-13035.patch
>
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is also started in SolrCloud mode. It would be great if 
> all writable content can come under the same directory.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Commented] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-02 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-13035?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16706470#comment-16706470
 ] 

Amrit Sarkar commented on SOLR-13035:
-

Attaching a first draft patch. More hardened tests and doc changes remain.

> Utilize solr.data.home / solrDataHome in solr.xml to set all writable files 
> in single directory
> ---
>
> Key: SOLR-13035
> URL: https://issues.apache.org/jira/browse/SOLR-13035
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>Priority: Major
>
> {{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is 
> already available as per SOLR-6671.
> The writable content in Solr are index files, core properties, and ZK data if 
> embedded zookeeper is also started in SolrCloud mode. It would be great if 
> all writable content can come under the same directory.
> It can then also solve official docker Solr image issues:
> https://github.com/docker-solr/docker-solr/issues/74
> https://github.com/docker-solr/docker-solr/issues/133






[jira] [Created] (SOLR-13035) Utilize solr.data.home / solrDataHome in solr.xml to set all writable files in single directory

2018-12-02 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-13035:
---

 Summary: Utilize solr.data.home / solrDataHome in solr.xml to set 
all writable files in single directory
 Key: SOLR-13035
 URL: https://issues.apache.org/jira/browse/SOLR-13035
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Amrit Sarkar


{{solr.data.home}} system property or {{solrDataHome}} in _solr.xml_ is already 
available as per SOLR-6671.

The writable content in Solr are index files, core properties, and ZK data if 
embedded zookeeper is also started in SolrCloud mode. It would be great if all 
writable content can come under the same directory.

It can then also solve official docker Solr image issues:
https://github.com/docker-solr/docker-solr/issues/74
https://github.com/docker-solr/docker-solr/issues/133






[jira] [Comment Edited] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-11-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676442#comment-16676442
 ] 

Amrit Sarkar edited comment on SOLR-12524 at 11/6/18 9:44 AM:
--

Another set of exceptions occurring:

{code}
  [beaster]   2> 22099 ERROR (cdcr-replicator-61-thread-1) [] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.AssertionError thrown by 
thread: cdcr-replicator-61-thread-1
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:184)
 ~[java/:?]
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
 ~[java/:?]
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
  [beaster]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
  [beaster]   2> nov. 06, 2018 11:37:34 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-61-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([E87F434F86998C33]:0)
  [beaster]   2>at 
org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:677)
  [beaster]   2>at 
org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:304)
  [beaster]   2>at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.next(CdcrUpdateLog.java:630)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:77)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  [beaster]   2>at java.lang.Thread.run(Thread.java:748)
  [beaster]   2> 
{code}


was (Author: sarkaramr...@gmail.com):
Another set of exceptions occurring:

  [beaster]   2> 22099 ERROR (cdcr-replicator-61-thread-1) [] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.AssertionError thrown by 
thread: cdcr-replicator-61-thread-1
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:184)
 ~[java/:?]
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
 ~[java/:?]
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
  [beaster]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
  [beaster]   2> nov. 06, 2018 11:37:34 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-61-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([E87F434F86998C33]:0)
  [beaster]   2>at 
or

[jira] [Commented] (SOLR-12524) CdcrBidirectionalTest.testBiDir() regularly fails

2018-11-06 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16676442#comment-16676442
 ] 

Amrit Sarkar commented on SOLR-12524:
-

Another set of exceptions occurring:

  [beaster]   2> 22099 ERROR (cdcr-replicator-61-thread-1) [] 
o.a.s.c.u.ExecutorUtil Uncaught exception java.lang.AssertionError thrown by 
thread: cdcr-replicator-61-thread-1
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:184)
 ~[java/:?]
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
 ~[java/:?]
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) 
~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
 ~[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) 
[?:1.8.0_181]
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) 
[?:1.8.0_181]
  [beaster]   2>at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]
  [beaster]   2> nov. 06, 2018 11:37:34 AM 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-61-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([E87F434F86998C33]:0)
  [beaster]   2>at 
org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:677)
  [beaster]   2>at 
org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:304)
  [beaster]   2>at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.next(CdcrUpdateLog.java:630)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:77)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  [beaster]   2>at java.lang.Thread.run(Thread.java:748)
  [beaster]   2> 

> CdcrBidirectionalTest.testBiDir() regularly fails
> -
>
> Key: SOLR-12524
> URL: https://issues.apache.org/jira/browse/SOLR-12524
> Project: Solr
>  Issue Type: Test
>  Components: CDCR, Tests
>Reporter: Christine Poerschke
>Priority: Major
> Attachments: SOLR-12524.patch, SOLR-12524.patch, SOLR-12524.patch, 
> SOLR-12524.patch, SOLR-12524.patch, SOLR-12524.patch, beast-test-run
>
>
> e.g. from 
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/4701/consoleText
> {code}
> [junit4] ERROR   20.4s J0 | CdcrBidirectionalTest.testBiDir <<<
> [junit4]> Throwable #1: 
> com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
> uncaught exception in thread: Thread[id=28371, 
> name=cdcr-replicator-11775-thread-1, state=RUNNABLE, 
> group=TGRP-CdcrBidirectionalTest]
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50:8F8E744E68278112]:0)
> [junit4]> Caused by: java.lang.AssertionError
> [junit4]> at 
> __randomizedtesting.SeedInfo.seed([CA5584AC7009CD50]:0)
> [junit4]> at 
> org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
> [junit4]> at 
> org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
> [junit4]> at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> [junit4]> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> [junit4]> at java.lang.Thread.run(Thread.java:748)
> 

[jira] [Comment Edited] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675378#comment-16675378
 ] 

Amrit Sarkar edited comment on SOLR-12955 at 11/5/18 4:15 PM:
--

Thanks [~brot] for looking into refactoring DUP. 

Now, regarding CdcrURP:

I am working on SOLR-12057 to deprecate CdcrURP altogether, as it does not serve 
any strong purpose; the plan is to phase it out, potentially in 8.0 or later. 
Only a single protected method from DURP, {{filterParams(SolrParams...)}}, needs 
to be overridden in CdcrURP. I am attaching the potential code for CdcrURP for 
reference:
{code}
/**
 * Extends {@link org.apache.solr.update.processor.DistributedUpdateProcessor}
 * and attaches the _version_ from the update to the doc,
 * for synchronizing checkpoints between clusters.
 * This URP is to be added on the target cluster in uni-directional setups,
 * and on all clusters involved in bi-directional sync.
 */
public class CdcrUpdateProcessor extends DistributedUpdateProcessor {

  public static final String CDCR_UPDATE = "cdcr.update";

  public CdcrUpdateProcessor(SolrQueryRequest req, SolrQueryResponse rsp,
      UpdateRequestProcessor next) {
    super(req, rsp, next);
  }

  /**
   * Checks whether this is a cdcr-forwarded update.
   * If so, attaches the _version_ from the update to the doc,
   * for synchronizing checkpoints between clusters.
   */
  @Override
  protected ModifiableSolrParams filterParams(SolrParams params) {
    ModifiableSolrParams result = super.filterParams(params);
    if (params.get(CDCR_UPDATE) != null) {
      result.set(CDCR_UPDATE, "");
      result.set(CommonParams.VERSION_FIELD, params.get(CommonParams.VERSION_FIELD));
    }
    return result;
  }
}
{code}

So whichever of the two classes ends up with the protected {{filterParams(...)}}, 
CdcrURP will need to extend that particular class, I believe, since CDCR is 
strictly a SolrCloud feature.

The patch for SOLR-12057 is almost ready and waiting for final review. Hope 
this helps.
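For context, a sketch of how this URP is typically wired into the target cluster's _solrconfig.xml_ (chain and factory names as in the CDCR reference docs):

```xml
<updateRequestProcessorChain name="cdcr-processor-chain">
  <!-- attaches the source cluster's _version_ to forwarded docs -->
  <processor class="solr.CdcrUpdateProcessorFactory"/>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>

<requestHandler name="/update" class="solr.UpdateRequestHandler">
  <lst name="defaults">
    <str name="update.chain">cdcr-processor-chain</str>
  </lst>
</requestHandler>
```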



was (Author: sarkaramr...@gmail.com):
Thanks [~brot] for looking into refactoring DUP. 

Now with CdcrURP;

I am working on SOLR-12057 to deprecate CdcrURP altogether as it doesn't serve 
any strong purpose and eventually plans to phase out in potentially 8.0 or 
later. A single protected method is required to be extended in CdcrURP from 
DURP i.e. {{filterParams(SolrParams..)}}. I am attaching the potential code for 
CdcrURP for reference:
{code}
/**
 * 
 * Extends {@link org.apache.solr.update.processor.DistributedUpdateProcessor},
 * and attach the _version_ from the update to the doc,
 * for synchronizing checkpoints b/w clusters.
 * This URP to be added at target cluster in uni-directional
 * and all clusters involved in bi-directional sync.
 * 
 */
public class CdcrUpdateProcessor extends DistributedUpdateProcessor {

  public static final String CDCR_UPDATE = "cdcr.update";

  public CdcrUpdateProcessor(SolrQueryRequest req, SolrQueryResponse rsp, 
UpdateRequestProcessor next) {
super(req, rsp, next);
  }

  /**
   * 
   * Method to check if cdcr forwarded update.
   * If yes, attach the _version_ from the update to the doc,
   * for synchronizing checkpoint b/w clusters
   * 
   */
  protected ModifiableSolrParams filterParams(SolrParams params) {
ModifiableSolrParams result = super.filterParams(params);
if (params.get(CDCR_UPDATE) != null) {
  result.set(CDCR_UPDATE, "");
  result.set(CommonParams.VERSION_FIELD, 
params.get(CommonParams.VERSION_FIELD));
}
return result;
  }
}
{code}

So whichever class out of the two has the protected {{filterParams(...)}}, we 
need to extend the CdcrURP with that particular class.

The patch for SOLR-12057 is almost ready and waiting for final review. Hope 
this helps.


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately As I was skimming through Solr's code base I noticed that 
> DistributedUpdateProcessor has a lot of nested if else statements, which 
> hampers code readability.






[jira] [Commented] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675378#comment-16675378
 ] 

Amrit Sarkar commented on SOLR-12955:
-

Thanks [~brot] for looking into refactoring DUP. 

Now, regarding CdcrURP:

I am working on SOLR-12057 to deprecate CdcrURP altogether, as it does not serve 
any strong purpose; the plan is to phase it out, potentially in 8.0 or later. 
Only a single protected method from DURP, {{filterParams(SolrParams...)}}, needs 
to be overridden in CdcrURP. I am attaching the potential code for CdcrURP for 
reference:
{code}
/**
 * Extends {@link org.apache.solr.update.processor.DistributedUpdateProcessor}
 * and attaches the _version_ from the update to the doc,
 * for synchronizing checkpoints between clusters.
 * This URP is to be added on the target cluster in uni-directional setups,
 * and on all clusters involved in bi-directional sync.
 */
public class CdcrUpdateProcessor extends DistributedUpdateProcessor {

  public static final String CDCR_UPDATE = "cdcr.update";

  public CdcrUpdateProcessor(SolrQueryRequest req, SolrQueryResponse rsp,
      UpdateRequestProcessor next) {
    super(req, rsp, next);
  }

  /**
   * Checks whether this is a cdcr-forwarded update.
   * If so, attaches the _version_ from the update to the doc,
   * for synchronizing checkpoints between clusters.
   */
  @Override
  protected ModifiableSolrParams filterParams(SolrParams params) {
    ModifiableSolrParams result = super.filterParams(params);
    if (params.get(CDCR_UPDATE) != null) {
      result.set(CDCR_UPDATE, "");
      result.set(CommonParams.VERSION_FIELD, params.get(CommonParams.VERSION_FIELD));
    }
    return result;
  }
}
{code}

So whichever of the two classes ends up with the protected {{filterParams(...)}}, 
CdcrURP will need to extend that particular class.

The patch for SOLR-12057 is almost ready and waiting for final review. Hope 
this helps.


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately As I was skimming through Solr's code base I noticed that 
> DistributedUpdateProcessor has a lot of nested if else statements, which 
> hampers code readability.






[jira] [Comment Edited] (SOLR-12955) Refactor DistributedUpdateProcessor

2018-11-05 Thread Amrit Sarkar (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12955?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16675378#comment-16675378
 ] 

Amrit Sarkar edited comment on SOLR-12955 at 11/5/18 4:11 PM:
--

Thanks [~brot] for looking into refactoring DUP. 

Now, regarding CdcrURP:

I am working on SOLR-12057 to deprecate CdcrURP altogether, as it does not serve 
any strong purpose; the plan is to phase it out, potentially in 8.0 or later. 
Only a single protected method from DURP, {{filterParams(SolrParams...)}}, needs 
to be overridden in CdcrURP. I am attaching the potential code for CdcrURP for 
reference:
{code}
/**
 * Extends {@link org.apache.solr.update.processor.DistributedUpdateProcessor}
 * and attaches the _version_ from the update to the doc,
 * for synchronizing checkpoints between clusters.
 * This URP is to be added on the target cluster in uni-directional setups,
 * and on all clusters involved in bi-directional sync.
 */
public class CdcrUpdateProcessor extends DistributedUpdateProcessor {

  public static final String CDCR_UPDATE = "cdcr.update";

  public CdcrUpdateProcessor(SolrQueryRequest req, SolrQueryResponse rsp,
      UpdateRequestProcessor next) {
    super(req, rsp, next);
  }

  /**
   * Checks whether this is a cdcr-forwarded update.
   * If so, attaches the _version_ from the update to the doc,
   * for synchronizing checkpoints between clusters.
   */
  @Override
  protected ModifiableSolrParams filterParams(SolrParams params) {
    ModifiableSolrParams result = super.filterParams(params);
    if (params.get(CDCR_UPDATE) != null) {
      result.set(CDCR_UPDATE, "");
      result.set(CommonParams.VERSION_FIELD, params.get(CommonParams.VERSION_FIELD));
    }
    return result;
  }
}
{code}

So whichever of the two classes ends up with the protected {{filterParams(...)}}, 
CdcrURP will need to extend that particular class.

The patch for SOLR-12057 is almost ready and waiting for final review. Hope 
this helps.



was (Author: sarkaramr...@gmail.com):
Thanks [~brot] for looking into refactoring DUP. 

Now as far as with CdcrURP;

I am working on SOLR-12057 to deprecate CdcrURP altogether as it doesn't serve 
any strong purpose and eventually plans to phase out in potentially 8.0 or 
later. A single protected method is required to be extended in CdcrURP from 
DURP i.e. {{filterParams(SolrParams..)}}. I am attaching the potential code for 
CdcrURP for reference:
{code}
/**
 * 
 * Extends {@link org.apache.solr.update.processor.DistributedUpdateProcessor},
 * and attach the _version_ from the update to the doc,
 * for synchronizing checkpoints b/w clusters.
 * This URP to be added at target cluster in uni-directional
 * and all clusters involved in bi-directional sync.
 * 
 */
public class CdcrUpdateProcessor extends DistributedUpdateProcessor {

  public static final String CDCR_UPDATE = "cdcr.update";

  public CdcrUpdateProcessor(SolrQueryRequest req, SolrQueryResponse rsp, 
UpdateRequestProcessor next) {
super(req, rsp, next);
  }

  /**
   * 
   * Method to check if cdcr forwarded update.
   * If yes, attach the _version_ from the update to the doc,
   * for synchronizing checkpoint b/w clusters
   * 
   */
  protected ModifiableSolrParams filterParams(SolrParams params) {
ModifiableSolrParams result = super.filterParams(params);
if (params.get(CDCR_UPDATE) != null) {
  result.set(CDCR_UPDATE, "");
  result.set(CommonParams.VERSION_FIELD, 
params.get(CommonParams.VERSION_FIELD));
}
return result;
  }
}
{code}

So whichever class out of the two has the protected {{filterParams(...)}}, we 
need to extend the CdcrURP with that particular class.

The patch for SOLR-12057 is almost ready and waiting for final review. Hope 
this helps.


> Refactor DistributedUpdateProcessor
> ---
>
> Key: SOLR-12955
> URL: https://issues.apache.org/jira/browse/SOLR-12955
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Bar Rotstein
>Priority: Major
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> Lately As I was skimming through Solr's code base I noticed that 
> DistributedUpdateProcessor has a lot of nested if else statements, which 
> hampers code readability.






[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas

2018-11-04 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12057:

Attachment: SOLR-12057.patch

> CDCR does not replicate to Collections with TLOG Replicas
> -
>
> Key: SOLR-12057
> URL: https://issues.apache.org/jira/browse/SOLR-12057
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Webster Homer
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, 
> SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, 
> cdcr-fail-with-tlog-pull.patch, cdcr-fail-with-tlog-pull.patch
>
>
> We created a collection using TLOG replicas in our QA clouds.
> We have a locally hosted solrcloud with 2 nodes, all our collections have 2 
> shards. We use CDCR to replicate the collections from this environment to 2 
> data centers hosted in Google cloud. This seems to work fairly well for our 
> collections with NRT replicas. However the new TLOG collection has problems.
>  
> The Google Cloud SolrClouds have 4 nodes each (3 separate ZooKeepers), with 2 
> shards per collection and 2 replicas per shard.
>  
> We never see data show up in the cloud collections, but we do see tlog files 
> show up on the cloud servers. I can see that all of the servers have cdcr 
> started, buffers are disabled.
> The cdcr source configuration is:
>  
> "requestHandler":{"/cdcr":{
>       "name":"/cdcr",
>       "class":"solr.CdcrRequestHandler",
>       "replica":[
>         {
>           "zkHost":"xxx-mzk01.sial.com:2181,xxx-mzk02.sial.com:2181,xxx-mzk03.sial.com:2181/solr",
>           "source":"b2b-catalog-material-180124T",
>           "target":"b2b-catalog-material-180124T"},
>         {
>           "zkHost":"-mzk01.sial.com:2181,-mzk02.sial.com:2181,-mzk03.sial.com:2181/solr",
>           "source":"b2b-catalog-material-180124T",
>           "target":"b2b-catalog-material-180124T"}],
>       "replicator":{
>         "threadPoolSize":4,
>         "schedule":500,
>         "batchSize":250},
>       "updateLogSynchronizer":{"schedule":6
>  
> The target configurations in the 2 clouds are the same:
> "requestHandler":{"/cdcr":{ "name":"/cdcr", 
> "class":"solr.CdcrRequestHandler", "buffer":{"defaultState":"disabled"}}} 
>  
> All of our collections have a timestamp field, index_date. In the source 
> collection all the records have a date of 2/28/2018 but the target 
> collections have a latest date of 1/26/2018
>  
> I don't see cdcr errors in the logs, but we use logstash to search them, and 
> we're still perfecting that. 
>  
> We have a number of similar collections that behave correctly. This is the 
> only collection that is a TLOG collection. It appears that CDCR doesn't 
> support TLOG collections.
>  
> It looks like the data is getting to the target servers. I see tlog files 
> with the right timestamps. Looking at the timestamps on the documents in the 
> collection, none of the data appears to have been loaded. In the solr.log I see 
> lots of /cdcr messages  action=LASTPROCESSEDVERSION,  
> action=COLLECTIONCHECKPOINT, and  action=SHARDCHECKPOINT 
>  
> no errors
>  
> Target collections' autoCommit is set to  6. I tried sending a commit 
> explicitly; no difference. CDCR is uploading data, but no new data appears in 
> the collection.
>  
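(For reference, a typical commit configuration in _solrconfig.xml_; the values below are illustrative, not the reporter's actual settings. A hard commit with {{openSearcher=false}} only makes data durable; documents become searchable via a soft commit or a commit that opens a new searcher, which matches the symptom of data arriving but never appearing:)

```xml
<autoCommit>
  <maxTime>60000</maxTime>           <!-- illustrative: hard commit every 60s -->
  <openSearcher>false</openSearcher> <!-- durability only; no new searcher -->
</autoCommit>
<autoSoftCommit>
  <maxTime>15000</maxTime>           <!-- illustrative: makes docs visible -->
</autoSoftCommit>
```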






[jira] [Updated] (SOLR-12057) CDCR does not replicate to Collections with TLOG Replicas

2018-11-04 Thread Amrit Sarkar (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12057?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-12057:

Attachment: SOLR-12057.patch

> CDCR does not replicate to Collections with TLOG Replicas
> -
>
> Key: SOLR-12057
> URL: https://issues.apache.org/jira/browse/SOLR-12057
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Webster Homer
>Assignee: Varun Thacker
>Priority: Major
> Attachments: SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, 
> SOLR-12057.patch, SOLR-12057.patch, SOLR-12057.patch, 
> cdcr-fail-with-tlog-pull.patch, cdcr-fail-with-tlog-pull.patch
>
>
> We created a collection using TLOG replicas in our QA clouds.
> We have a locally hosted solrcloud with 2 nodes, all our collections have 2 
> shards. We use CDCR to replicate the collections from this environment to 2 
> data centers hosted in Google cloud. This seems to work fairly well for our 
> collections with NRT replicas. However the new TLOG collection has problems.
>  
> The Google Cloud SolrClouds have 4 nodes each (3 separate ZooKeepers), with 2 
> shards per collection and 2 replicas per shard.
>  
> We never see data show up in the cloud collections, but we do see tlog files 
> show up on the cloud servers. I can see that all of the servers have cdcr 
> started, buffers are disabled.
> The cdcr source configuration is:
>  
> "requestHandler":{"/cdcr":{
>       "name":"/cdcr",
>       "class":"solr.CdcrRequestHandler",
>       "replica":[
>         {
>           "zkHost":"xxx-mzk01.sial.com:2181,xxx-mzk02.sial.com:2181,xxx-mzk03.sial.com:2181/solr",
>           "source":"b2b-catalog-material-180124T",
>           "target":"b2b-catalog-material-180124T"},
>         {
>           "zkHost":"-mzk01.sial.com:2181,-mzk02.sial.com:2181,-mzk03.sial.com:2181/solr",
>           "source":"b2b-catalog-material-180124T",
>           "target":"b2b-catalog-material-180124T"}],
>       "replicator":{
>         "threadPoolSize":4,
>         "schedule":500,
>         "batchSize":250},
>       "updateLogSynchronizer":{"schedule":6
>  
> The target configurations in the 2 clouds are the same:
> "requestHandler":{"/cdcr":{ "name":"/cdcr", 
> "class":"solr.CdcrRequestHandler", "buffer":{"defaultState":"disabled"}}} 
>  
> All of our collections have a timestamp field, index_date. In the source 
> collection all the records have a date of 2/28/2018 but the target 
> collections have a latest date of 1/26/2018
>  
> I don't see cdcr errors in the logs, but we use logstash to search them, and 
> we're still perfecting that. 
>  
> We have a number of similar collections that behave correctly. This is the 
> only collection that is a TLOG collection. It appears that CDCR doesn't 
> support TLOG collections.
>  
> It looks like the data is getting to the target servers. I see tlog files 
> with the right timestamps. Looking at the timestamps on the documents in the 
> collection, none of the data appears to have been loaded. In the solr.log I see 
> lots of /cdcr messages  action=LASTPROCESSEDVERSION,  
> action=COLLECTIONCHECKPOINT, and  action=SHARDCHECKPOINT 
>  
> no errors
>  
> Target collections' autoCommit is set to  6. I tried sending a commit 
> explicitly; no difference. CDCR is uploading data, but no new data appears in 
> the collection.
>  





