[jira] [Commented] (SOLR-10146) Admin UI: Button to delete a shard

2017-03-03 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10146?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894581#comment-15894581
 ] 

Amrit Sarkar commented on SOLR-10146:
-

Clean! Thanks for considering the patch, giving feedback, and making the necessary 
changes; it makes much more sense.

> Admin UI: Button to delete a shard
> --
>
> Key: SOLR-10146
> URL: https://issues.apache.org/jira/browse/SOLR-10146
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5, master (7.0)
>
> Attachments: Screenshot-1.png, Screenshot-2.png, SOLR-10146.patch, 
> SOLR-10146.patch
>
>
> Currently you can delete a replica through a small red X in the Admin UI 
> Collections tab. So you can delete all the replicas inside a shard, but you 
> cannot delete the whole shard, i.e. call the DELETESHARD Collection API.
> Add a button for this. This is useful for cleaning up e.g. after calling 
> SPLITSHARD.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-10147) Admin UI -> Cloud -> Graph: Impossible to see shard state

2017-03-03 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894649#comment-15894649
 ] 

Amrit Sarkar commented on SOLR-10147:
-

{quote}The strike-through effect is a bit hard to see on the shard level since 
there is (often) another line also cutting through the text{quote}

That is true; I tried hard to make the path lines as light as possible. Let me 
play around with _line-through_ and see if there is an alternative.

> Admin UI -> Cloud -> Graph: Impossible to see shard state
> -
>
> Key: SOLR-10147
> URL: https://issues.apache.org/jira/browse/SOLR-10147
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
> Fix For: 6.5
>
> Attachments: color_and_style.png, screenshot-1.png, screenshot-2.png, 
> screenshot-3.png, screenshot-4.png, screenshot-5.png, screenshot-6.png, 
> SOLR-10147.patch, SOLR-10147.patch, SOLR-10147-v1.patch
>
>
> Currently in the Cloud -> Graph view there is a legend with color codes, but 
> that is for replicas only.
> We need a way to quickly see the state of the shard, in particular if it is 
> active or inactive. For testing, create a collection, then call SPLITSHARD on 
> shard1, and you'll end up with shards {{shard1}}, {{shard1_0}} and 
> {{shard1_1}}. It is not possible to see which one is active or inactive.
> Also, the replicas belonging to the inactive shard are still marked with 
> green "Active", while in reality they are "Inactive".
> The simplest would be to add a new state "Inactive" with color e.g. blue, 
> which would be used on both shard and replica level. But since an inactive 
> replica could also be "Gone" or "Down", there should be some way to indicate 
> both at the same time...






[jira] [Commented] (SOLR-8173) CLONE - Leader recovery process can select the wrong leader if all replicas for a shard are down and trying to recover as well as lose updates that should have been recovered

2017-03-03 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8173?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15894789#comment-15894789
 ] 

Amrit Sarkar commented on SOLR-8173:


Are we planning to resolve this any time soon?

> CLONE - Leader recovery process can select the wrong leader if all replicas 
> for a shard are down and trying to recover as well as lose updates that 
> should have been recovered.
> ---
>
> Key: SOLR-8173
> URL: https://issues.apache.org/jira/browse/SOLR-8173
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Reporter: Matteo Grolla
>Assignee: Mark Miller
>Priority: Critical
>  Labels: leader, recovery
> Attachments: solr_8983.log, solr_8984.log
>
>
> I'm doing this test:
> collection "test" is replicated on two Solr nodes running on 8983 and 8984,
> using an external ZooKeeper;
> initially both nodes are empty
> 1) turn on Solr 8983
> 2) add and commit a doc x on Solr 8983
> 3) turn off Solr 8983
> 4) turn on Solr 8984
> 5) shortly after (leader still not elected) turn on Solr 8983
> 6) 8984 is elected as leader
> 7) doc x is present on 8983 but not on 8984 (check by issuing a query)
> Attached are the solr.log files of both instances






[jira] [Commented] (SOLR-9838) atomic "inc" when adding doc doesn't respect field "default" from schema

2017-03-03 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15895218#comment-15895218
 ] 

Amrit Sarkar commented on SOLR-9838:


The troubled code is in AtomicUpdateDocumentMerger.java::doInc(..):

{noformat}
  protected void doInc(SolrInputDocument toDoc, SolrInputField sif, Object fieldVal) {
    SolrInputField numericField = toDoc.get(sif.getName());
    if (numericField == null) {
      // need to check the default in schema here, instead of just putting whatever is coming
      toDoc.setField(sif.getName(), fieldVal);
    } else {
      ...
    }
  }
{noformat}

I will try to cook up a patch for the same with relevant test cases.
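A minimal, self-contained sketch of what such a fix could look like, using plain maps rather than the actual Solr classes (all names here are hypothetical, not the real patch): when the target document has no value for the field yet, seed it from the schema default before applying the increment, instead of storing the increment as if the default were 0.

```java
import java.util.HashMap;
import java.util.Map;

public class IncWithDefaultSketch {
    // Hypothetical stand-in for the schema's per-field default values.
    static final Map<String, Long> SCHEMA_DEFAULTS = new HashMap<>();

    // Simplified doInc: doc maps field name -> current numeric value.
    static void doInc(Map<String, Long> doc, String field, long increment) {
        Long current = doc.get(field);
        if (current == null) {
            // Seed from the schema default (falling back to 0) before
            // incrementing, instead of just storing the incoming increment.
            current = SCHEMA_DEFAULTS.getOrDefault(field, 0L);
        }
        doc.put(field, current + increment);
    }

    public static void main(String[] args) {
        SCHEMA_DEFAULTS.put("count", 42L);
        Map<String, Long> doc = new HashMap<>();
        doInc(doc, "count", 3);               // new doc: default 42 + 3 = 45
        doInc(doc, "count", 5);               // existing value: 45 + 5 = 50
        System.out.println(doc.get("count")); // prints 50
    }
}
```

With this shape of logic, an "inc" on a brand-new document respects {{default="42"}} from the schema rather than acting as if the default were 0.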

> atomic "inc" when adding doc doesn't respect field "default" from schema
> 
>
> Key: SOLR-9838
> URL: https://issues.apache.org/jira/browse/SOLR-9838
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> If you do an "atomic update" when adding a document for the first time, then 
> the "inc" operator acts as if the field has a default of 0.
> But if the field definition has an *actual* default in the schema.xml (example: 
> {{default="42"}}) then that default is ignored by the atomic update code path.






[jira] [Updated] (SOLR-9838) atomic "inc" when adding doc doesn't respect field "default" from schema

2017-03-03 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9838:
---
Attachment: SOLR-9838.patch

Hoss,

SOLR-9838.patch uploaded; it incorporates the schema default value while performing 
the "inc" operation. Fixed a single test method too.

{code}
modified:   solr/core/src/java/org/apache/solr/update/processor/AtomicUpdateDocumentMerger.java
modified:   solr/core/src/test/org/apache/solr/update/processor/AtomicUpdatesTest.java
{code}

Feedback will be appreciated.

> atomic "inc" when adding doc doesn't respect field "default" from schema
> 
>
> Key: SOLR-9838
> URL: https://issues.apache.org/jira/browse/SOLR-9838
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9838.patch
>
>
> If you do an "atomic update" when adding a document for the first time, then 
> the "inc" operator acts as if the field has a default of 0.
> But if the field definition has an *actual* default in the schema.xml (example: 
> {{default="42"}}) then that default is ignored by the atomic update code path.






[jira] [Commented] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-06 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898867#comment-15898867
 ] 

Amrit Sarkar commented on SOLR-10209:
-

Need advice on the following:

We were solving two problems in this:
1. Indefinite retries of the API calls when the server goes down without 
completing the request.
2. Don't report the connection as lost if the API takes more than 10 seconds.

(2) is done and good to go; I am working on an elegant progress bar so that it can 
accommodate more than one call at a time.
For (1), we are heading toward bigger problems: earlier only the original API 
call was repeated, but now the REQUESTSTATUS API clings on with it as well, so 
two APIs are filling the network call list.

There is no way to fix it other than changing the base JS file, i.e. app.js. 
That means changing how the API calls are made on other pages, e.g. cloud, 
core, mbeans, etc. I intend not to change the base JS file, and suggestions on 
this will be deeply appreciated.
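The submit-then-poll flow under discussion can be sketched independently of the admin UI code. This is a hedged illustration, not the actual JavaScript: the status supplier stands in for a REQUESTSTATUS call, and all names are made up. The key point it shows is the bail-out behaviour with no retry of the original request.

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class AsyncStatusPollSketch {
    // Poll a status source until a terminal state. A failed or not-found
    // status bails out immediately: no retries, no attempt to drive on.
    static String awaitFinalStatus(Supplier<String> requestStatus, int maxPolls) {
        for (int i = 0; i < maxPolls; i++) {
            String state = requestStatus.get(); // stands in for a REQUESTSTATUS call
            if (state.equals("failed") || state.equals("notfound")) {
                return "error: check the system before resubmitting";
            }
            if (state.equals("completed")) {
                return "completed";
            }
            // "submitted"/"running": keep the progress indicator spinning
        }
        return "timed out";
    }

    public static void main(String[] args) {
        // Simulate a request that completes on the third status check.
        Iterator<String> states =
            List.of("submitted", "running", "completed").iterator();
        System.out.println(awaitFinalStatus(states::next, 10)); // prints completed
    }
}
```

In a real implementation the polling would be on a timer rather than a tight loop, but the terminal-state handling is the same.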

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about requests for Collections APIs 
> from the UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections api calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> Continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Updated] (SOLR-9838) atomic "inc" when adding doc doesn't respect field "default" from schema

2017-03-08 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9838:
---
Attachment: SOLR-9838.patch

> atomic "inc" when adding doc doesn't respect field "default" from schema
> 
>
> Key: SOLR-9838
> URL: https://issues.apache.org/jira/browse/SOLR-9838
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9838.patch
>
>
> If you do an "atomic update" when adding a document for the first time, then 
> the "inc" operator acts as if the field has a default of 0.
> But if the field definition has an *actual* default in the schema.xml (example: 
> {{default="42"}}) then that default is ignored by the atomic update code path.






[jira] [Updated] (SOLR-9838) atomic "inc" when adding doc doesn't respect field "default" from schema

2017-03-08 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9838?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9838:
---
Attachment: (was: SOLR-9838.patch)

> atomic "inc" when adding doc doesn't respect field "default" from schema
> 
>
> Key: SOLR-9838
> URL: https://issues.apache.org/jira/browse/SOLR-9838
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
> Attachments: SOLR-9838.patch
>
>
> If you do an "atomic update" when adding a document for the first time, then 
> the "inc" operator acts as if the field has a default of 0.
> But if the field definition has an *actual* default in the schema.xml (example: 
> {{default="42"}}) then that default is ignored by the atomic update code path.






[jira] [Updated] (SOLR-9530) Add an Atomic Update Processor

2017-03-08 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9530:
---
Attachment: SOLR-9530.patch

As per the discussion with Noble, refactored the code to optimise it and remove 
unwanted elements.

> Add an Atomic Update Processor 
> ---
>
> Key: SOLR-9530
> URL: https://issues.apache.org/jira/browse/SOLR-9530
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-9530.patch, SOLR-9530.patch, SOLR-9530.patch, 
> SOLR-9530.patch, SOLR-9530.patch
>
>
> I'd like to explore the idea of adding a new update processor to help ingest 
> partial updates.
> Example use-case - There are two datasets with a common id field. How can I 
> merge both of them at index time?
> Proposed Solution: 
> {code}
> <updateRequestProcessorChain name="atomic">
>   <processor class="solr.AtomicUpdateProcessorFactory">
>     <str name="operation">add</str>
>   </processor>
>   <processor class="solr.LogUpdateProcessorFactory"/>
>   <processor class="solr.RunUpdateProcessorFactory"/>
> </updateRequestProcessorChain>
> {code}
> So the first JSON dump could be ingested against 
> {{http://localhost:8983/solr/gettingstarted/update/json}}
> And then the second JSON could be ingested against
> {{http://localhost:8983/solr/gettingstarted/update/json?processor=atomic}}
> The Atomic Update Processor could support all the atomic update operations 
> currently supported.






[jira] [Comment Edited] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-08 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15898867#comment-15898867
 ] 

Amrit Sarkar edited comment on SOLR-10209 at 3/8/17 5:29 PM:
-

Need advice on the following:

We were solving two problems in this:
1. Indefinite retries of the API calls when the server goes down without 
completing the request.
2. Don't report the connection as lost if the API takes more than 10 seconds.

(2) is done and good to go; I am working on an elegant progress bar so that it can 
accommodate more than one call at a time.
For (1), we are heading toward bigger problems: earlier only the original API 
call was repeated, but now the REQUESTSTATUS API clings on with it as well, so 
two APIs are filling the network call list.

There is no way to fix it other than changing the base JS file, i.e. app.js. 
That means changing how the API calls are made on other pages, e.g. cloud, 
core, mbeans, etc. I intend not to change the base JS file, and suggestions on 
this will be deeply appreciated.


was (Author: sarkaramr...@gmail.com):
Need advice on the following:

We were solving two problems in this:
1. Indefinite retires of the API calls when the server goes down without 
completing the request
2. Don't say the connection is list if the API is taking more than 10 sec.

(2) is done and good to go, I am working on elegant progress bar so that it can 
accommodate more than one call at single time.
For (1), we are heading towards greater problems as earlier the original API 
call was replicated, now in addition REQUESTSTATUS api is clinging on with it 
and now two APIs are filling the network call list.

There is no way to fix it other than we change the base js file i.e. app.js. 
This means we will change how the API calls are made in other pages e.g. cloud, 
core, mbeans etc. I intend not to change the base js file, and suggestions will 
be deeply appreciated on this.

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about requests for Collections APIs 
> from the UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections api calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> Continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Updated] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2017-03-14 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-9516:
---
Attachment: SOLR-9516.patch

Ishan, sorry I didn't respond earlier; I didn't notice the mention.

http://host:port/solr/libs was inaccessible because it was not listed in the exclusion 
pattern for SolrDispatchFilter; hence it required authentication, and the UI failed 
to fetch the content from that path of the webapp folder.

SOLR-9516.patch uploaded with a one-line change in web.xml in the webapp.
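For reference, the kind of change involved is adding the static UI path to SolrDispatchFilter's {{excludePatterns}} in web.xml so that requests for those assets skip authentication. The snippet below is a hedged reconstruction, not the actual patch; the exact pattern list in the stock web.xml may differ.

```xml
<filter>
  <filter-name>SolrRequestFilter</filter-name>
  <filter-class>org.apache.solr.servlet.SolrDispatchFilter</filter-class>
  <init-param>
    <param-name>excludePatterns</param-name>
    <!-- "/libs/.+" added so UI assets under /solr/libs bypass the filter -->
    <param-value>/libs/.+,/css/.+,/js/.+,/img/.+,/tpl/.+</param-value>
  </init-param>
</filter>
```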

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 
> 07-36-29.png, SOLR-9516.patch
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.






[jira] [Comment Edited] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2017-03-14 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924775#comment-15924775
 ] 

Amrit Sarkar edited comment on SOLR-9516 at 3/14/17 6:59 PM:
-

Ishan, sorry I didn't respond earlier; I didn't notice the mention.

http://host:port/solr/libs was inaccessible because it was not listed in the exclusion 
pattern for SolrDispatchFilter; hence it required authentication, and the UI failed 
to fetch the content from that path of the webapp folder.

SOLR-9516.patch uploaded with a one-line change in web.xml in the webapp.


was (Author: sarkaramr...@gmail.com):
Ishan, sorry I didn't respond earlier, didn't notice the mention.

http://host:port/solr/libs was inaccessible as it was not listed in exclusion 
pattern for SolrDispatchFilter, hence it required authentication and UI failed 
to fetch the content from that part from webapp folder.

SOLR-9516.patch uploaded with one line change in web.xml in webapp.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 
> 07-36-29.png, SOLR-9516.patch
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.






[jira] [Comment Edited] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2017-03-14 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924775#comment-15924775
 ] 

Amrit Sarkar edited comment on SOLR-9516 at 3/14/17 7:18 PM:
-

Ishan, sorry I didn't respond earlier; I didn't notice the mention.

http://host:port/solr/libs was inaccessible because it was not listed in the exclusion 
pattern for SolrDispatchFilter; hence it required authentication, and the UI failed 
to fetch the content from that path of the webapp folder. Thank you [~ctargett] 
for pinpointing the above and suggesting the changes.

We faced a similar Kerberos 34 _Request is a Replay_ error for the MBeans Request 
Handler:
{code}http://localhost:8983/solr/[collection_name]/admin/mbeans?cat=CACHE{code}
and the changes listed below rectified that; I am not sure why it was broken and 
thus how it got fixed.

SOLR-9516.patch uploaded with a one-line change in web.xml in the webapp.


was (Author: sarkaramr...@gmail.com):
Ishan, sorry I didn't respond earlier, didn't notice the mention.

http://host:port/solr/libs was inaccessible as it was not listed in exclusion 
pattern for SolrDispatchFilter, hence it required authentication and UI failed 
to fetch the content from that path from webapp folder.

SOLR-9516.patch uploaded with one line change in web.xml in webapp.

> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 
> 07-36-29.png, SOLR-9516.patch
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.






[jira] [Commented] (SOLR-9516) New UI doesn't work when Kerberos is enabled

2017-03-14 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15924839#comment-15924839
 ] 

Amrit Sarkar commented on SOLR-9516:


Ishan, all the buttons, commands, stats, tree, cloud, thread info, and dashboard 
are working as expected.



> New UI doesn't work when Kerberos is enabled
> 
>
> Key: SOLR-9516
> URL: https://issues.apache.org/jira/browse/SOLR-9516
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Ishan Chattopadhyaya
>  Labels: javascript, newdev, security
> Attachments: QQ20161012-0.png, Screenshot from 2016-09-15 
> 07-36-29.png, SOLR-9516.patch
>
>
> It seems resources like http://solr1:8983/solr/libs/chosen.jquery.js 
> encounter 403 error:
> {code}
> 2016-09-15 02:01:45.272 WARN  (qtp611437735-18) [   ] 
> o.a.h.s.a.s.AuthenticationFilter Authentication exception: GSSException: 
> Failure unspecified at GSS-API level (Mechanism level: Request is a replay 
> (34))
> {code}
> The old UI is fine.






[jira] [Commented] (SOLR-10263) Different SpellcheckComponents should have their own options

2017-03-15 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10263?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15925948#comment-15925948
 ] 

Amrit Sarkar commented on SOLR-10263:
-

[~abhidemon],

Which version of Solr are we talking about here? The latest, 6.4.x?

> Different SpellcheckComponents should have their own options
> 
>
> Key: SOLR-10263
> URL: https://issues.apache.org/jira/browse/SOLR-10263
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: spellchecker
>Reporter: Abhishek Kumar Singh
>Priority: Minor
>
> As of now, common spellcheck options are applied to all the 
> SpellCheckComponents.
> This can create a problem in the following case:
> it may be the case that we want *DirectSolrSpellChecker* to ALWAYS_SUGGEST 
> spellcheck suggestions, 
> but we may want *WordBreakSpellChecker* to suggest only if the token is not 
> in the index (SUGGEST_WHEN_NOT_IN_INDEX).
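For context, both checkers are typically configured as dictionaries inside a single SpellCheckComponent, which is why request-level options (e.g. {{spellcheck.alternativeTermCount}}, which drives suggest-even-when-in-index behaviour) apply to all of them at once. An illustrative config sketch, with made-up field and dictionary names:

```xml
<searchComponent name="spellcheck" class="solr.SpellCheckComponent">
  <lst name="spellchecker">
    <str name="name">default</str>
    <str name="classname">solr.DirectSolrSpellChecker</str>
    <str name="field">text</str>
  </lst>
  <lst name="spellchecker">
    <str name="name">wordbreak</str>
    <str name="classname">solr.WordBreakSolrSpellChecker</str>
    <str name="field">text</str>
  </lst>
</searchComponent>
```

The wish here is effectively per-dictionary overrides of such options instead of one shared set.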






[jira] [Commented] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-15 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15925965#comment-15925965
 ] 

Amrit Sarkar commented on SOLR-10209:
-

Going forward with:

bq. Don't say the connection is lost if the API is taking more than 10 sec

for now.

Will see what to do with the indefinite retries.

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about requests for Collections APIs 
> from the UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections api calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> Continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Commented] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-15 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15925966#comment-15925966
 ] 

Amrit Sarkar commented on SOLR-10209:
-

Going forward with:

bq. Don't say the connection is lost if the API is taking more than 10 sec

for now.

Will see what to do with the indefinite retries.

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about requests for Collections APIs 
> from the UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections api calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> Continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedbacks on this.
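The Phase 1 submit-then-poll flow quoted above can be sketched in a few lines. This is a minimal Python sketch of the control flow only; `submit_async` and `check_status` are hypothetical stand-ins for the UI's actual HTTP calls, not part of Solr's API.

```python
import time

def run_async_request(submit_async, check_status, poll_interval=1.0, max_polls=60):
    """Sketch of the Phase 1 flow: submit, then poll REQUESTSTATUS until done.

    `submit_async` and `check_status` are hypothetical callables standing in
    for the UI's HTTP requests; each returns a status string or None on failure.
    """
    request_id = submit_async()
    if request_id is None or check_status(request_id) is None:
        # Initial call failed or no status found: report and bail out.
        # No retries, no attempt to drive on.
        return "error: check your system before resubmitting"
    for _ in range(max_polls):
        status = check_status(request_id)
        if status in ("completed", "failed"):
            return status          # final status reached: stop the progress bar
        time.sleep(poll_interval)  # keep the progress indicator spinning
    return "running"               # still in flight after max_polls
```

A real implementation would live in the AngularJS admin UI, but the bail-out-versus-spin decision is the same.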



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Issue Comment Deleted] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-15 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10209:

Comment: was deleted

(was: Going forward with:

bq. Don't say the connection is lost if the API is taking more than 10 sec

for now.

Will see what to do with the indefinite retries.)

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about Collections API requests 
> from the UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Updated] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10209:

Attachment: SOLR-10209.patch

> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about Collections API requests 
> from the UI and how to improve them:
> SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> b) "Delete" shard
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Commented] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927692#comment-15927692
 ] 

Amrit Sarkar commented on SOLR-10229:
-

Thank you Erick for the opportunity,

Here are the key points on which we are designing the framework:

1. We will have a "mother"/"master" schema, which can be imported by the 
individual tests, and tests can add/modify fieldTypes and field definitions 
through custom code.

2. Besides the "mother" schema, a set of schema files with limited 
features/definitions will be available: mostly basic ones, though one or two 
can be complex; we can discuss that later.

3. Provide reasonable and understandable pre-defined functions to add/modify 
fieldTypes and fields (custom code):
Erick suggested something like:
{code}
// an illustrative fieldType definition:
static String newFieldType = 
    "<fieldType name=\"myNewType\" class=\"solr.TextField\">" +
    "  <analyzer>" +
    "    <tokenizer class=\"solr.WhitespaceTokenizerFactory\"/>" +
    "  </analyzer>" +
    "</fieldType>";
{code}
and then have a utility method like:
{code}
Utility.addFieldType(newFieldType);
{code}

It is human-readable for coders, and it will require straightforward string 
parsing to invoke the relevant methods on the schema.

David suggested avoiding _XML syntax/schema_ and building in JSON; does that 
mean the entire managed-schema will be transformed into JSON format, or are we 
just discussing the intake parameters for the framework's utility methods?

I will start working on the framework first, before we discuss what to include 
(or not) in the mother and other basic schemas.

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straight-forward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.






[jira] [Updated] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10209:

Description: 
We are having discussions on multiple JIRAs about Collections API requests 
from the UI and how to improve them:

-SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses connection 
with the server-
SOLR-10146: Admin UI: Button to delete a shard
SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
when replicationFactor>1

Proposal =>

*Phase 1:*

Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
to fetch the information. There will be a performance hit, but the requests 
will be safe and sound. A progress bar will be added for request status.
{noformat}
> submit the async request
if (the initial call failed or there was no status to be found)
{ report an error and suggest the user check their system before 
resubmitting the request. Bail out in this case, no retries, no attempt to 
drive on. }
else
{ put up a progress indicator while periodically checking the status, continue 
spinning until we can report the final status. }
{noformat}

*Phase 2:*

Add new buttons/features to collections.html

a) "Split" shard
-b) "Delete" shard-
c) "Backup" collection
d) "Restore" collection

Open to suggestions and feedback on this.

  was:
We are having discussions on multiple JIRAs about Collections API requests 
from the UI and how to improve them:

SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses connection 
with the server
SOLR-10146: Admin UI: Button to delete a shard
SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
when replicationFactor>1

Proposal =>

*Phase 1:*

Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
to fetch the information. There will be a performance hit, but the requests 
will be safe and sound. A progress bar will be added for request status.
{noformat}
> submit the async request
if (the initial call failed or there was no status to be found)
{ report an error and suggest the user check their system before 
resubmitting the request. Bail out in this case, no retries, no attempt to 
drive on. }
else
{ put up a progress indicator while periodically checking the status, continue 
spinning until we can report the final status. }
{noformat}

*Phase 2:*

Add new buttons/features to collections.html

a) "Split" shard
b) "Delete" shard
c) "Backup" collection
d) "Restore" collection

Open to suggestions and feedback on this.


> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about Collections API requests 
> from the UI and how to improve them:
> -SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server-
> SOLR-10146: Admin UI: Button to delete a shard
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> -b) "Delete" shard-
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Updated] (SOLR-10209) UI: Convert all Collections api calls to async requests, add new features/buttons

2017-03-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10209?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10209:

Description: 
We are having discussions on multiple JIRAs about Collections API requests 
from the UI and how to improve them:

-SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses connection 
with the server-
-SOLR-10146: Admin UI: Button to delete a shard-
SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
when replicationFactor>1

Proposal =>

*Phase 1:*

Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
to fetch the information. There will be a performance hit, but the requests 
will be safe and sound. A progress bar will be added for request status.
{noformat}
> submit the async request
if (the initial call failed or there was no status to be found)
{ report an error and suggest the user check their system before 
resubmitting the request. Bail out in this case, no retries, no attempt to 
drive on. }
else
{ put up a progress indicator while periodically checking the status, continue 
spinning until we can report the final status. }
{noformat}

*Phase 2:*

Add new buttons/features to collections.html

a) "Split" shard
-b) "Delete" shard-
c) "Backup" collection
d) "Restore" collection

Open to suggestions and feedback on this.

  was:
We are having discussions on multiple JIRAs about Collections API requests 
from the UI and how to improve them:

-SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses connection 
with the server-
SOLR-10146: Admin UI: Button to delete a shard
SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
when replicationFactor>1

Proposal =>

*Phase 1:*

Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
to fetch the information. There will be a performance hit, but the requests 
will be safe and sound. A progress bar will be added for request status.
{noformat}
> submit the async request
if (the initial call failed or there was no status to be found)
{ report an error and suggest the user check their system before 
resubmitting the request. Bail out in this case, no retries, no attempt to 
drive on. }
else
{ put up a progress indicator while periodically checking the status, continue 
spinning until we can report the final status. }
{noformat}

*Phase 2:*

Add new buttons/features to collections.html

a) "Split" shard
-b) "Delete" shard-
c) "Backup" collection
d) "Restore" collection

Open to suggestions and feedback on this.


> UI: Convert all Collections api calls to async requests, add new 
> features/buttons
> -
>
> Key: SOLR-10209
> URL: https://issues.apache.org/jira/browse/SOLR-10209
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI
>Reporter: Amrit Sarkar
> Attachments: SOLR-10209.patch, SOLR-10209.patch, SOLR-10209-v1.patch
>
>
> We are having discussions on multiple JIRAs about Collections API requests 
> from the UI and how to improve them:
> -SOLR-9818: Solr admin UI rapidly retries any request(s) if it loses 
> connection with the server-
> -SOLR-10146: Admin UI: Button to delete a shard-
> SOLR-10201: Add Collection "creates collection", "Connection to Solr lost", 
> when replicationFactor>1
> Proposal =>
> *Phase 1:*
> Convert all Collections API calls to async requests and utilise REQUESTSTATUS 
> to fetch the information. There will be a performance hit, but the requests 
> will be safe and sound. A progress bar will be added for request status.
> {noformat}
> > submit the async request
> if (the initial call failed or there was no status to be found)
> { report an error and suggest the user check their system before 
> resubmitting the request. Bail out in this case, no retries, no attempt to 
> drive on. }
> else
> { put up a progress indicator while periodically checking the status, 
> continue spinning until we can report the final status. }
> {noformat}
> *Phase 2:*
> Add new buttons/features to collections.html
> a) "Split" shard
> -b) "Delete" shard-
> c) "Backup" collection
> d) "Restore" collection
> Open to suggestions and feedback on this.






[jira] [Commented] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927875#comment-15927875
 ] 

Amrit Sarkar commented on SOLR-10229:
-

Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, 
will be loaded in the test framework once, independent of any individual test.
2. The individual tests can then pull relevant/required field and fieldType 
definitions from the mother schema via the Schema API ([Retrieve Schema 
Information|https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-RetrieveSchemaInformation])
 and post them to their own miniature schemas for tests. The utility method can 
be named "copyFieldAndDefinition" as suggested above.
3. For custom fields and fieldTypes that are not available in the mother 
schema, the framework will provide utility methods to pass them on to the 
Schema API.

All the framework's endpoints will take JSON-format parameters. Is there 
anything I am missing or misunderstanding?
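The proposed "copyFieldAndDefinition" utility could be sketched as follows. This is a hypothetical sketch: the function name comes from the discussion above, but the mother-schema dict layout (mirroring the Schema API's JSON) and the returned command shape are assumptions, not the final framework API.

```python
def copy_field_and_definition(mother_schema, field_name):
    """Pull a field and its fieldType definition from an already-parsed
    mother schema (a dict mirroring the Schema API's JSON) so they can be
    posted into a test's miniature schema. Hypothetical sketch only."""
    field = next(f for f in mother_schema["fields"] if f["name"] == field_name)
    field_type = next(t for t in mother_schema["fieldTypes"]
                      if t["name"] == field["type"])
    # Commands in the shape the Schema API accepts for the test schema.
    return {"add-field-type": field_type, "add-field": field}

# Minimal mother schema for illustration:
mother = {
    "fields": [{"name": "title", "type": "text_general"}],
    "fieldTypes": [{"name": "text_general", "class": "solr.TextField"}],
}
commands = copy_field_and_definition(mother, "title")
```

A test would then POST each command to its own collection's `/schema` endpoint before indexing.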

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straight-forward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.






[jira] [Comment Edited] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927875#comment-15927875
 ] 

Amrit Sarkar edited comment on SOLR-10229 at 3/16/17 11:45 AM:
---

Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, 
will be loaded in the test framework once, independent of any individual test.
2. The individual tests can then pull relevant/required field and fieldType 
definitions from the mother schema via the Schema API ([Retrieve Schema 
Information|https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-RetrieveSchemaInformation])
 and post them to their own miniature schemas for tests. The utility method can 
be named "copyFieldAndDefinition" as suggested above.
3. For custom fields and fieldTypes that are not available in the mother 
schema, the framework will provide utility methods to pass them on to the 
Schema API.
4. Apart from the above, Global Similarity and Default Query Operator will be 
configurable; defaults will be provided.

All the framework's endpoints will take JSON-format parameters. Is there 
anything I am missing or misunderstanding?


was (Author: sarkaramr...@gmail.com):
Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, 
will be loaded in the test framework once, independent of any individual test.
2. The individual tests can then pull relevant/required field and fieldType 
definitions from the mother schema via the Schema API ([Retrieve Schema 
Information|https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-RetrieveSchemaInformation])
 and post them to their own miniature schemas for tests. The utility method can 
be named "copyFieldAndDefinition" as suggested above.
3. For custom fields and fieldTypes that are not available in the mother 
schema, the framework will provide utility methods to pass them on to the 
Schema API.

All the framework's endpoints will take JSON-format parameters. Is there 
anything I am missing or misunderstanding?

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straight-forward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.




[jira] [Comment Edited] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927875#comment-15927875
 ] 

Amrit Sarkar edited comment on SOLR-10229 at 3/16/17 11:53 AM:
---

Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, 
will be loaded in the test framework once, independent of any individual test.
2. The individual tests can then pull relevant/required field and fieldType 
definitions from the mother schema via parsing, and post them to their own 
miniature schemas for tests via the Schema API. The utility method can be 
named "copyFieldAndDefinition" as suggested above.
3. For custom fields and fieldTypes that are not available in the mother 
schema, the framework will provide utility methods to pass them on to the 
Schema API.
4. Apart from the above, Global Similarity and Default Query Operator will be 
configurable; defaults will be provided.

All the framework's endpoints will take JSON-format parameters. Is there 
anything I am missing or misunderstanding?


was (Author: sarkaramr...@gmail.com):
Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, 
will be loaded in the test framework once, independent of any individual test.
2. The individual tests can then pull relevant/required field and fieldType 
definitions from the mother schema via the Schema API ([Retrieve Schema 
Information|https://cwiki.apache.org/confluence/display/solr/Schema+API#SchemaAPI-RetrieveSchemaInformation])
 and post them to their own miniature schemas for tests. The utility method can 
be named "copyFieldAndDefinition" as suggested above.
3. For custom fields and fieldTypes that are not available in the mother 
schema, the framework will provide utility methods to pass them on to the 
Schema API.
4. Apart from the above, Global Similarity and Default Query Operator will be 
configurable; defaults will be provided.

All the framework's endpoints will take JSON-format parameters. Is there 
anything I am missing or misunderstanding?

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straight-forward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.



[jira] [Comment Edited] (SOLR-10229) See what it would take to shift many of our one-off schemas used for testing to managed schema and construct them as part of the tests

2017-03-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15927875#comment-15927875
 ] 

Amrit Sarkar edited comment on SOLR-10229 at 3/16/17 11:54 AM:
---

Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, will be 
loaded/parsed in the test framework once, independent of any individual test.
2. The individual tests can then pull the relevant/required field and fieldType 
definitions from the mother schema's already-parsed content and post them to their 
own miniature schemas via the Schema API. The utility method can be named 
"copyFieldAndDefinition", as suggested above.
3. For custom fields and fieldTypes not available in the mother schema, the 
framework will provide utility methods to pass them on to the Schema API.
4. Apart from the above, the global Similarity and the default query operator will 
be configurable, with defaults provided.

All the framework's endpoints will take JSON-format parameters. Am I missing 
anything or misunderstanding something?


was (Author: sarkaramr...@gmail.com):
Alexandre,

Thank you for pointing out both corrections. Sorry, I mixed two different 
conversations (online and offline) together. Rephrasing:

1. A "mother" schema, with the most common field and fieldType definitions, will be 
loaded in the test framework once, independent of any individual test.
2. The individual tests can then pull the relevant/required field and fieldType 
definitions from the mother schema by parsing it and post them to their own 
miniature schemas via the Schema API. The utility method can be named 
"copyFieldAndDefinition", as suggested above.
3. For custom fields and fieldTypes not available in the mother schema, the 
framework will provide utility methods to pass them on to the Schema API.
4. Apart from the above, the global Similarity and the default query operator will 
be configurable, with defaults provided.

All the framework's endpoints will take JSON-format parameters. Am I missing 
anything or misunderstanding something?
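As a concrete illustration of step 2, a test utility could build Schema API payloads from definitions pulled out of the parsed mother schema. The sketch below only shows the payload construction; the class and method names are illustrative assumptions, not existing Solr test-framework API:

```java
public class SchemaApiPayloadSketch {

    // Builds a Schema API "add-field" JSON body for a definition pulled from
    // a parsed "mother" schema. A test would POST this to /solr/<collection>/schema.
    static String addFieldPayload(String name, String type,
                                  boolean stored, boolean multiValued) {
        return "{\"add-field\":{"
                + "\"name\":\"" + name + "\","
                + "\"type\":\"" + type + "\","
                + "\"stored\":" + stored + ","
                + "\"multiValued\":" + multiValued + "}}";
    }

    public static void main(String[] args) {
        System.out.println(addFieldPayload("title", "text_general", true, false));
    }
}
```

A "copyFieldAndDefinition"-style helper would look a field up in the parsed mother schema and delegate to a builder like this.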

> See what it would take to shift many of our one-off schemas used for testing 
> to managed schema and construct them as part of the tests
> --
>
> Key: SOLR-10229
> URL: https://issues.apache.org/jira/browse/SOLR-10229
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
>
> The test schema files are intimidating. There are about a zillion of them, 
> and making a change in any of them risks breaking some _other_ test. That 
> leaves people three choices:
> 1> add what they need to some existing schema. Which makes schemas bigger and 
> bigger and bigger.
> 2> create a new schema file, adding to the proliferation thereof.
> 3> Look through all the existing tests to see if they have something that 
> works.
> The recent work on LUCENE-7705 is a case in point. We're adding a maxLen 
> parameter to some tokenizers. Putting those parameters into any of the 
> existing schemas, especially to test < 255 char tokens is virtually 
> guaranteed to break other tests, so the only safe thing to do is make another 
> schema file. Adding to the multiplication of files.
> As part of SOLR-5260 I tried creating the schema on the fly rather than 
> creating a new static schema file and it's not hard. WDYT about making this 
> into some better thought-out utility? 
> At present, this is pretty fuzzy, I wanted to get some reactions before 
> putting much effort into it. I expect that the utility methods would 
> eventually get a bunch of canned types. It's reasonably straightforward for 
> primitive types, if lengthy. But when you get into solr.TextField-based types 
> it gets less straight-forward.
> We could manage to just move the "intimidation" from the plethora of schema 
> files to a zillion fieldTypes in the utility to choose from...
> Also, forcing every test to define the fields up-front is arguably less 
> convenient than just having _some_ canned schemas we can use. And erroneous 
> schemas to test failure modes are probably not very good fits for any such 
> framework.
> [~steve_rowe] and [~hossman_luc...@fucit.org] in particular might have 
> something to say.






[jira] [Commented] (SOLR-11267) Add support for "add-distinct" atomic update operation

2017-10-17 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16207118#comment-16207118
 ] 

Amrit Sarkar commented on SOLR-11267:
-

[~ichattopadhyaya],

Let me know if you get a chance to go over the patch and suggest some changes 
on it. Thanks.

> Add support for "add-distinct" atomic update operation
> --
>
> Key: SOLR-11267
> URL: https://issues.apache.org/jira/browse/SOLR-11267
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: SOLR-11267.patch, SOLR-11267.patch
>
>
> Often, a multivalued field is used as a set of values. Since multivalued 
> fields are more like lists than sets, users do two consecutive operations, 
> remove and add, to insert an element into the field and also maintain the 
> set's property of only having unique elements.
> Proposing a new single operation, called "add-distinct" (which essentially 
> means "add-if-doesn't exist") for this.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-10533) Improve checks for which fields can be returned

2017-10-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10533:

Attachment: SOLR-10533.patch

Putting up a first rough draft to get things going.

Kindly note: no tests are written yet, nor have I verified that I covered every 
place where the check on {{stored()}} applies or where a field can be returned; 
there may very well be such instances I missed.

I also need to confirm whether the new check I introduced makes sense in each 
place I added it.

> Improve checks for which fields can be returned
> ---
>
> Key: SOLR-10533
> URL: https://issues.apache.org/jira/browse/SOLR-10533
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-10533.patch
>
>
> I tried using {{DocBasedVersionConstraintsProcessorFactory}} on a field which 
> was defined as :
> {code}
> 
> {code}
> The long fieldType has docValues enabled and since useDocValuesAsStored is 
> true by default in the latest schema I can retrieve this field.
> But when I start Solr with this update processor I get the following error
> {code}
>  Caused by: field myVersionField must be defined in schema, be stored, and be 
> single valued.
> {code}
> Here's the following check in the update processor where the error originates 
> from:
> {code}
> if (userVersionField == null || !userVersionField.stored() || 
> userVersionField.multiValued()) {
>   throw new SolrException(SERVER_ERROR,
>   "field " + versionField + " must be defined in schema, be stored, 
> and be single valued.");
> }
> {code}
> We should improve the condition to also check whether docValues and 
> useDocValuesAsStored are both true for the field, and if so not throw this error.
> Hoss pointed out in an offline discussion that this issue could be there in 
> other places in the codebase so keep this issue broad and not just tackle 
> DocBasedVersionConstraintsProcessorFactory.
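For illustration, the relaxed guard the quoted issue asks for could look like the sketch below. A stand-in {{FieldInfo}} class is used in place of Solr's actual {{SchemaField}}; all names here are illustrative assumptions, not the actual patch:

```java
public class FieldReturnableCheck {

    // Stand-in for the relevant flags on a schema field; not the real Solr class.
    static class FieldInfo {
        final boolean stored, multiValued, docValues, useDocValuesAsStored;
        FieldInfo(boolean stored, boolean multiValued,
                  boolean docValues, boolean useDocValuesAsStored) {
            this.stored = stored; this.multiValued = multiValued;
            this.docValues = docValues; this.useDocValuesAsStored = useDocValuesAsStored;
        }
    }

    // A field's value can be returned either because it is stored, or because
    // it has docValues with useDocValuesAsStored enabled.
    static boolean retrievable(FieldInfo f) {
        return f.stored || (f.docValues && f.useDocValuesAsStored);
    }

    // Relaxed version of the processor's guard: reject only fields that are
    // multivalued or not retrievable at all.
    static boolean valid(FieldInfo f) {
        return f != null && retrievable(f) && !f.multiValued;
    }

    public static void main(String[] args) {
        // docValues-only single-valued field: previously rejected, now accepted.
        System.out.println(valid(new FieldInfo(false, false, true, true)));  // true
        System.out.println(valid(new FieldInfo(false, false, true, false))); // false
    }
}
```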






[jira] [Comment Edited] (SOLR-11267) Add support for "add-distinct" atomic update operation

2017-10-17 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16185528#comment-16185528
 ] 

Amrit Sarkar edited comment on SOLR-11267 at 10/17/17 9:42 PM:
---

[~ichattopadhyaya],

I cooked up a little patch to support "add-distinct". I also believe this will 
be a very valuable addition to atomic updates, since today users have to derive 
the set from the list in their application code.

Design: if the field is not present, do the conventional "add" atomic operation; otherwise:
  if the passed values are a list, check whether each value is already present, then add it;
  else (a single value), check whether the value is already present, then add it.

Included a small test to verify this. Looking forward to your review and feedback.


was (Author: sarkaramr...@gmail.com):
[~ichattopadhyaya],

I cooked up a little patch to support "add-distinct". I also believe this will 
be a very valuable addition to atomic updates, since today users have to derive 
the set from the list in their application code.

Design: if the field is not present, do the conventional "add" atomic operation; otherwise:
  if the passed values are a list, check whether each value is already present, then add it;
  else (a single value), check whether each value is already present, then add it.

Included a small test to verify this. Looking forward to your review and feedback.
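The branching described in the comment above amounts to set semantics over a list-backed multivalued field. A minimal plain-Java sketch of that logic (the {{addDistinct}} helper and its signature are illustrative, not the actual method in the patch):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

public class AddDistinctSketch {

    // Minimal sketch: add each incoming value to the field's value list only
    // if it is not already present, mirroring "add-if-doesn't-exist".
    public static List<Object> addDistinct(List<Object> current, Object incoming) {
        List<Object> result = new ArrayList<>(current);
        // A passed list is expanded; a singular value is treated as one element.
        Collection<?> values = (incoming instanceof Collection)
                ? (Collection<?>) incoming
                : Arrays.asList(incoming);
        for (Object v : values) {
            if (!result.contains(v)) {   // the "distinct" check
                result.add(v);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Object> field = new ArrayList<>(Arrays.asList("a", "b"));
        System.out.println(addDistinct(field, Arrays.asList("b", "c"))); // [a, b, c]
        System.out.println(addDistinct(field, "a"));                     // [a, b]
    }
}
```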

> Add support for "add-distinct" atomic update operation
> --
>
> Key: SOLR-11267
> URL: https://issues.apache.org/jira/browse/SOLR-11267
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Assignee: Ishan Chattopadhyaya
> Attachments: SOLR-11267.patch, SOLR-11267.patch
>
>
> Often, a multivalued field is used as a set of values. Since multivalued 
> fields are more like lists than sets, users do two consecutive operations, 
> remove and add, to insert an element into the field and also maintain the 
> set's property of only having unique elements.
> Proposing a new single operation, called "add-distinct" (which essentially 
> means "add-if-doesn't exist") for this.






[jira] [Updated] (SOLR-11326) CDCR bootstrap should not download tlog's from source

2017-10-18 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11326:

Attachment: SOLR-11326.patch
WITHOUT-FIX.patch

> CDCR bootstrap should not download tlog's from source
> -
>
> Key: SOLR-11326
> URL: https://issues.apache.org/jira/browse/SOLR-11326
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Attachments: SOLR-11326.patch, SOLR-11326.patch, SOLR-11326.patch, 
> WITHOUT-FIX.patch
>
>
> While analyzing two separate failures on SOLR-11278, I see that during bootstrap 
> the tlogs from the source are getting downloaded.
> snippet1:
> {code}
>[junit4]   2> 42931 INFO  (qtp1525032019-69) [n:127.0.0.1:53178_solr 
> c:cdcr-source s:shard1 r:core_node1 x:cdcr-source_shard1_replica1] 
> o.a.s.h.CdcrReplicatorManager Submitting bootstrap task to executor
>[junit4]   2> 42934 INFO  
> (cdcr-bootstrap-status-32-thread-1-processing-n:127.0.0.1:53178_solr 
> x:cdcr-source_shard1_replica1 s:shard1 c:cdcr-source r:core_node1) 
> [n:127.0.0.1:53178_solr c:cdcr-source s:shard1 r:core_node1 
> x:cdcr-source_shard1_replica1] o.a.s.h.CdcrReplicatorManager Attempting to 
> bootstrap target collection: cdcr-target shard: shard1 leader: 
> http://127.0.0.1:53170/solr/cdcr-target_shard1_replica1/
>[junit4]   2> 43003 INFO  (qtp1525032019-69) [n:127.0.0.1:53178_solr 
> c:cdcr-source s:shard1 r:core_node1 x:cdcr-source_shard1_replica1] 
> o.a.s.c.S.Request [cdcr-source_shard1_replica1]  webapp=/solr 
> path=/replication 
> params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 
> QTime=0
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Master's generation: 12
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Master's version: 
> 1503514968639
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Slave's generation: 1
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Slave's version: 0
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Starting replication 
> process
>[junit4]   2> 43041 INFO  (qtp1525032019-71) [n:127.0.0.1:53178_solr 
> c:cdcr-source s:shard1 r:core_node1 x:cdcr-source_shard1_replica1] 
> o.a.s.h.ReplicationHandler Adding tlog files to list: [{size=4649, 
> name=tlog.000.1576549701811961856}, {size=4770, 
> name=tlog.001.1576549702515556352}, {size=4770, 
> name=tlog.002.1576549702628802560}, {size=4770, 
> name=tlog.003.1576549702720028672}, {size=4770, 
> name=tlog.004.1576549702799720448}, {size=4770, 
> name=tlog.005.1576549702894092288}, {size=4770, 
> name=tlog.006.1576549703029358592}, {size=4770, 
> name=tlog.007.1576549703126876160}, {size=4770, 
> name=tlog.008.1576549703208665088}, {size=4770, 
> name=tlog.009.1576549703295696896}
> {code}
> snippet2:
> {code}
>  17070[junit4]   2> 677606 INFO  (qtp22544544-5725) [] 
> o.a.s.h.CdcrReplicatorManager Attempting to bootstrap target collection: 
> cdcr-target, shard: shard1^M
>  17071[junit4]   2> 677608 INFO  (qtp22544544-5725) [] 
> o.a.s.h.CdcrReplicatorManager Submitting bootstrap task to executor^M
> 17091[junit4]   2> 677627 INFO  (qtp22544544-5724) [] 
> o.a.s.c.S.Request [cdcr-source_shard1_replica_n1]  webapp=/solr 
> path=/replication 
> params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 
> QTime=0^M
>  17092[junit4]   2> 677628 INFO  (recoveryExecutor-1024-thread-1) [ 

[jira] [Commented] (SOLR-11326) CDCR bootstrap should not download tlog's from source

2017-10-18 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11326?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16209219#comment-16209219
 ] 

Amrit Sarkar commented on SOLR-11326:
-

[~varunthacker],

Roger that!

I have uploaded two patches:
a) WITHOUT FIX, with a test method in {{CdcrBootstrapTest}} proving that tlogs DO get 
copied over to {{target}} when bootstrap completes.
b) WITH FIX, with a test method in {{CdcrBootstrapTest}} proving that tlogs DON'T get 
copied over to {{target}} when bootstrap completes.

Apart from these changes, I fixed some misleading log lines in {{CdcrBootstrapTest}}.

PLEASE NOTE: I wrote the test on top of the {{master}} branch, which currently contains 
{{debugging}} code from SOLR-11467. If we remove the debugging code first, this patch 
won't apply; if we commit this first and then try to remove the SOLR-11467 debugging 
code, that removal won't apply either. Please let me know if we should handle this in a 
better way, or if I should rewrite the patch without the SOLR-11467 debugging code.

> CDCR bootstrap should not download tlog's from source
> -
>
> Key: SOLR-11326
> URL: https://issues.apache.org/jira/browse/SOLR-11326
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Varun Thacker
> Attachments: SOLR-11326.patch, SOLR-11326.patch, SOLR-11326.patch, 
> WITHOUT-FIX.patch
>
>
> While analyzing two separate failures on SOLR-11278, I see that during bootstrap 
> the tlogs from the source are getting downloaded.
> snippet1:
> {code}
>[junit4]   2> 42931 INFO  (qtp1525032019-69) [n:127.0.0.1:53178_solr 
> c:cdcr-source s:shard1 r:core_node1 x:cdcr-source_shard1_replica1] 
> o.a.s.h.CdcrReplicatorManager Submitting bootstrap task to executor
>[junit4]   2> 42934 INFO  
> (cdcr-bootstrap-status-32-thread-1-processing-n:127.0.0.1:53178_solr 
> x:cdcr-source_shard1_replica1 s:shard1 c:cdcr-source r:core_node1) 
> [n:127.0.0.1:53178_solr c:cdcr-source s:shard1 r:core_node1 
> x:cdcr-source_shard1_replica1] o.a.s.h.CdcrReplicatorManager Attempting to 
> bootstrap target collection: cdcr-target shard: shard1 leader: 
> http://127.0.0.1:53170/solr/cdcr-target_shard1_replica1/
>[junit4]   2> 43003 INFO  (qtp1525032019-69) [n:127.0.0.1:53178_solr 
> c:cdcr-source s:shard1 r:core_node1 x:cdcr-source_shard1_replica1] 
> o.a.s.c.S.Request [cdcr-source_shard1_replica1]  webapp=/solr 
> path=/replication 
> params={qt=/replication&wt=javabin&version=2&command=indexversion} status=0 
> QTime=0
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Master's generation: 12
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Master's version: 
> 1503514968639
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Slave's generation: 1
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Slave's version: 0
>[junit4]   2> 43004 INFO  
> (recoveryExecutor-6-thread-1-processing-n:127.0.0.1:53170_solr 
> x:cdcr-target_shard1_replica1 s:shard1 c:cdcr-target r:core_node1) 
> [n:127.0.0.1:53170_solr c:cdcr-target s:shard1 r:core_node1 
> x:cdcr-target_shard1_replica1] o.a.s.h.IndexFetcher Starting replication 
> process
>[junit4]   2> 43041 INFO  (qtp1525032019-71) [n:127.0.0.1:53178_solr 
> c:cdcr-source s:shard1 r:core_node1 x:cdcr-source_shard1_replica1] 
> o.a.s.h.ReplicationHandler Adding tlog files to list: [{size=4649, 
> name=tlog.000.1576549701811961856}, {size=4770, 
> name=tlog.001.1576549702515556352}, {size=4770, 
> name=tlog.002.1576549702628802560}, {size=4770, 
> name=tlog.003.1576549702720028672}, {size=4770, 
> name=tlog.004.1576549702799720448}, {size=4770, 
> name=tlog.005.1576549702894092288}, {size=4770, 
> name=tlog.006.1576549703029358592}, {size=4770, 
> name=tlog.0

[jira] [Updated] (SOLR-10533) Improve checks for which fields can be returned

2017-10-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10533:

Attachment: SOLR-10533.patch

Removed all the schema-level stored changes, since I am not sure what changes 
need to be made at the time of creating Field objects.

Kept the relevant ones and am writing test cases for them.

> Improve checks for which fields can be returned
> ---
>
> Key: SOLR-10533
> URL: https://issues.apache.org/jira/browse/SOLR-10533
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-10533.patch, SOLR-10533.patch
>
>
> I tried using {{DocBasedVersionConstraintsProcessorFactory}} on a field which 
> was defined as :
> {code}
> 
> {code}
> The long fieldType has docValues enabled and since useDocValuesAsStored is 
> true by default in the latest schema I can retrieve this field.
> But when I start Solr with this update processor I get the following error
> {code}
>  Caused by: field myVersionField must be defined in schema, be stored, and be 
> single valued.
> {code}
> Here's the following check in the update processor where the error originates 
> from:
> {code}
> if (userVersionField == null || !userVersionField.stored() || 
> userVersionField.multiValued()) {
>   throw new SolrException(SERVER_ERROR,
>   "field " + versionField + " must be defined in schema, be stored, 
> and be single valued.");
> }
> {code}
> We should improve the condition to also check whether docValues and 
> useDocValuesAsStored are both true for the field, and if so not throw this error.
> Hoss pointed out in an offline discussion that this issue could be there in 
> other places in the codebase so keep this issue broad and not just tackle 
> DocBasedVersionConstraintsProcessorFactory.






[jira] [Updated] (SOLR-10533) Improve checks for which fields can be returned

2017-10-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-10533:

Attachment: SOLR-10533.patch

Added a test for {{CSVResponseWriter}}. For the rest of the test cases, either a 
new {{solrconfig.xml}} or knowledge of the entire component (e.g. MLPParser) is 
needed. [~varunthacker], you can review it and let me know if we need to add more 
test cases; it is pretty straightforward.

Also, I need some time to understand whether we need to make changes in the 
schema package, in Field object creation and usage.

> Improve checks for which fields can be returned
> ---
>
> Key: SOLR-10533
> URL: https://issues.apache.org/jira/browse/SOLR-10533
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-10533.patch, SOLR-10533.patch, SOLR-10533.patch
>
>
> I tried using {{DocBasedVersionConstraintsProcessorFactory}} on a field which 
> was defined as :
> {code}
> 
> {code}
> The long fieldType has docValues enabled and since useDocValuesAsStored is 
> true by default in the latest schema I can retrieve this field.
> But when I start Solr with this update processor I get the following error
> {code}
>  Caused by: field myVersionField must be defined in schema, be stored, and be 
> single valued.
> {code}
> Here's the following check in the update processor where the error originates 
> from:
> {code}
> if (userVersionField == null || !userVersionField.stored() || 
> userVersionField.multiValued()) {
>   throw new SolrException(SERVER_ERROR,
>   "field " + versionField + " must be defined in schema, be stored, 
> and be single valued.");
> }
> {code}
> We should improve the condition to also check whether docValues and 
> useDocValuesAsStored are both true for the field, and if so not throw this error.
> Hoss pointed out in an offline discussion that this issue could be there in 
> other places in the codebase so keep this issue broad and not just tackle 
> DocBasedVersionConstraintsProcessorFactory.






[jira] [Comment Edited] (SOLR-10533) Improve checks for which fields can be returned

2017-10-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10533?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16214358#comment-16214358
 ] 

Amrit Sarkar edited comment on SOLR-10533 at 10/22/17 3:53 PM:
---

Added a test for {{CSVResponseWriter}}. For the rest of the test cases, either a 
new {{solrconfig.xml}} or knowledge of the entire component (e.g. MLPParser) is 
needed. [~varunthacker], you can review it and let me know if we need to add more 
test cases; it is pretty straightforward.

Also, I need some time to understand whether we need to make changes in the 
schema package, in field and fieldType object creation and usage.


was (Author: sarkaramr...@gmail.com):
Added a test for {{CSVResponseWriter}}. For the rest of the test cases, either a 
new {{solrconfig.xml}} or knowledge of the entire component (e.g. MLPParser) is 
needed. [~varunthacker], you can review it and let me know if we need to add more 
test cases; it is pretty straightforward.

Also, I need some time to understand whether we need to make changes in the 
schema package, in Field object creation and usage.

> Improve checks for which fields can be returned
> ---
>
> Key: SOLR-10533
> URL: https://issues.apache.org/jira/browse/SOLR-10533
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-10533.patch, SOLR-10533.patch, SOLR-10533.patch
>
>
> I tried using {{DocBasedVersionConstraintsProcessorFactory}} on a field which 
> was defined as :
> {code}
> 
> {code}
> The long fieldType has docValues enabled and since useDocValuesAsStored is 
> true by default in the latest schema I can retrieve this field.
> But when I start Solr with this update processor I get the following error
> {code}
>  Caused by: field myVersionField must be defined in schema, be stored, and be 
> single valued.
> {code}
> Here's the following check in the update processor where the error originates 
> from:
> {code}
> if (userVersionField == null || !userVersionField.stored() || 
> userVersionField.multiValued()) {
>   throw new SolrException(SERVER_ERROR,
>   "field " + versionField + " must be defined in schema, be stored, 
> and be single valued.");
> }
> {code}
> We should improve the condition to also check whether docValues and 
> useDocValuesAsStored are both true for the field, and if so not throw this error.
> Hoss pointed out in an offline discussion that this issue could be there in 
> other places in the codebase so keep this issue broad and not just tackle 
> DocBasedVersionConstraintsProcessorFactory.






[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2017-11-15 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16253371#comment-16253371
 ] 

Amrit Sarkar commented on SOLR-11613:
-

[~elyograg], maybe "core" / "collection" rather than "index":

"The solrconfig.xml file for this collection does not have an operational 
dataimport handler defined!"

Let me know what suits best; this will be a very small patch, and I can drive it 
through.

> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."






[jira] [Updated] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2017-11-15 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11613:

Attachment: SOLR-11613.patch

> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."






[jira] [Commented] (SOLR-11613) Improve error in admin UI "Sorry, no dataimport-handler defined"

2017-11-15 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16253406#comment-16253406
 ] 

Amrit Sarkar commented on SOLR-11613:
-

I see, I see. Best for both cases; uploaded the patch with a one-line change. Thank 
you for the reasoning above.

> Improve error in admin UI "Sorry, no dataimport-handler defined"
> 
>
> Key: SOLR-11613
> URL: https://issues.apache.org/jira/browse/SOLR-11613
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.1
>Reporter: Shawn Heisey
>Priority: Minor
>  Labels: newdev
> Attachments: SOLR-11613.patch
>
>
> When the config has no working dataimport handlers, clicking on the 
> "dataimport" tab for a core/collection shows an error message that states 
> "Sorry, no dataimport-handler defined".  This is a little bit vague.
> One idea for an improved message:  "The solrconfig.xml file for this index 
> does not have an operational dataimport handler defined."






[jira] [Commented] (SOLR-11650) Credentials used for BasicAuth displayed in clear text on slave nodes

2017-11-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255321#comment-16255321
 ] 

Amrit Sarkar commented on SOLR-11650:
-

I can see the hashed value of the password; it is trivial to retrieve the 
password from that. This should be addressed promptly.
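The masking suggested in the issue (replacing the password in the displayed master url with {{***}}) could be sketched like this. This is a hedged illustration, not the attached patch; the regex and method name are assumptions:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MaskCredentials {

    // Matches the userinfo part of "scheme://user:password@host..." so the
    // password portion can be replaced with "***".
    private static final Pattern USERINFO =
            Pattern.compile("(://[^/@:]+:)[^@]+(@)");

    static String maskPassword(String url) {
        Matcher m = USERINFO.matcher(url);
        // URLs without embedded credentials are returned unchanged.
        return m.replaceFirst("$1***$2");
    }

    public static void main(String[] args) {
        String master = "https://solr:secret@solr-master.local.com:8983/solr/mycore";
        System.out.println(maskPassword(master));
        // https://solr:***@solr-master.local.com:8983/solr/mycore
    }
}
```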

> Credentials used for BasicAuth displayed in clear text on slave nodes
> -
>
> Key: SOLR-11650
> URL: https://issues.apache.org/jira/browse/SOLR-11650
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.6.2
>Reporter: Constantin Bugneac
>Priority: Critical
> Attachments: Screen Shot 2017-11-16 at 10.48.38.png
>
>
> Pre-requisites:
> Have in place Solr configured in master slave replication with BasicAuth 
> enabled.
> Issue: 
> In UI on slave (under Replication tab of core) the master url is displayed 
> with username and password used for BasicAuth in clear text.
> Example:
> master url:https://solr:sdjudf3t...@solr-master.local.com:8983/solr/mycore
> (see attached the screenshot)
> Suggestion/Idea:
> At least mask the password with  ***






[jira] [Updated] (SOLR-11650) Credentials used for BasicAuth displayed in clear text on slave nodes

2017-11-16 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11650:

Attachment: SOLR-11650.patch

Potential patch.

I don't have the bandwidth right now to test this out; once I do, I will 
validate whether we can use this patch or post an updated one.
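Independently of the patch, the masking idea itself is simple. Here is a toy sketch (the regex, class name, and sample URL are my own placeholders, not taken from the patch), replacing the password portion of the master url before it is rendered:

```java
public class CredentialMasker {

    // Masks the password in URLs of the form scheme://user:password@host/...
    // $1 keeps the username; everything between ':' and '@' becomes ***
    static String maskPassword(String url) {
        return url.replaceAll("://([^:/@]+):[^@]+@", "://$1:***@");
    }

    public static void main(String[] args) {
        // Hypothetical credentials, not the ones from the report
        String masterUrl = "https://solr:secret@solr-master.local.com:8983/solr/mycore";
        System.out.println(maskPassword(masterUrl));
        // prints https://solr:***@solr-master.local.com:8983/solr/mycore

        // URLs without embedded credentials pass through unchanged
        System.out.println(maskPassword("https://solr-master.local.com:8983/solr/mycore"));
    }
}
```

The same one-liner could be applied wherever the replication handler echoes the configured master url back to the UI.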

> Credentials used for BasicAuth displayed in clear text on slave nodes
> -
>
> Key: SOLR-11650
> URL: https://issues.apache.org/jira/browse/SOLR-11650
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Authentication
>Affects Versions: 6.6.2
>Reporter: Constantin Bugneac
>Priority: Critical
> Attachments: SOLR-11650.patch, Screen Shot 2017-11-16 at 10.48.38.png
>
>
> Pre-requisites:
> Have in place Solr configured in master slave replication with BasicAuth 
> enabled.
> Issue: 
> In UI on slave (under Replication tab of core) the master url is displayed 
> with username and password used for BasicAuth in clear text.
> Example:
> master url:https://solr:sdjudf3t...@solr-master.local.com:8983/solr/mycore
> (see attached the screenshot)
> Suggestion/Idea:
> At least mask the password with  ***






[jira] [Created] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API

2017-11-16 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-11652:
---

 Summary: Cdcr TLogs doesn't get purged for Source collection 
Leader when Buffer is disabled from CDCR API
 Key: SOLR-11652
 URL: https://issues.apache.org/jira/browse/SOLR-11652
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Amrit Sarkar


Cdcr transaction logs never get purged on the leader when Buffer is DISABLED 
from the CDCR API.

More details to follow.






[jira] [Commented] (SOLR-11601) solr.LatLonPointSpatialField : sorting by geodist fails

2017-11-16 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16255877#comment-16255877
 ] 

Amrit Sarkar commented on SOLR-11601:
-

Hi Clemens,

It doesn't fail; it is *intended behavior.* I replicated your scenario on my 
system and it threw this stack trace:

{code}
Caused by: org.apache.solr.common.SolrException: A ValueSource isn't directly 
available from this field. Instead try a query using the distance as the score.
at 
org.apache.solr.schema.AbstractSpatialFieldType.getValueSource(AbstractSpatialFieldType.java:334)
at 
org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:384)
at 
org.apache.solr.search.FunctionQParser.parseValueSourceList(FunctionQParser.java:227)
at 
org.apache.solr.search.function.distance.GeoDistValueSourceParser.parse(GeoDistValueSourceParser.java:54)
at 
org.apache.solr.search.FunctionQParser.parseValueSource(FunctionQParser.java:370)
at org.apache.solr.search.FunctionQParser.parse(FunctionQParser.java:82)
at org.apache.solr.search.QParser.getQuery(QParser.java:168)
at 
org.apache.solr.search.SortSpecParsing.parseSortSpecImpl(SortSpecParsing.java:120)
... 37 more
{code}

When I looked at the frame at 
org.apache.solr.schema.AbstractSpatialFieldType.getValueSource(AbstractSpatialFieldType.java:334):

{code}
  @Override
  public ValueSource getValueSource(SchemaField field, QParser parser) {
//This is different from Solr 3 LatLonType's approach which uses the 
MultiValueSource concept to directly expose
// the x & y pair of FieldCache value sources.
throw new SolrException(SolrException.ErrorCode.BAD_REQUEST,
"A ValueSource isn't directly available from this field. Instead try a 
query using the distance as the score.");
  }
{code}

_For this field type, this method is not supported and deliberately throws that 
particular exception._

You should keep using 
{{sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc}}; it is just 
as neat as geodist(...,...,...).
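For reference, a minimal JDK-only sketch of assembling that working request form (the host and collection name are placeholders of mine, not from the issue):

```java
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class GeodistQueryBuilder {

    // URL-encodes one parameter value; UTF-8 is always available on the JVM
    static String enc(String s) {
        try {
            return URLEncoder.encode(s, StandardCharsets.UTF_8.name());
        } catch (UnsupportedEncodingException e) {
            throw new IllegalStateException(e);
        }
    }

    // Builds the supported spatial-sort form:
    // sfield=<field>&pt=<lat>,<lon>&sort=geodist() asc
    static String buildSortParams(String sfield, String pt) {
        return "q=" + enc("*:*")
                + "&sfield=" + enc(sfield)
                + "&pt=" + enc(pt)
                + "&sort=" + enc("geodist() asc");
    }

    public static void main(String[] args) {
        // Hypothetical host/collection; append the params to your /select endpoint
        System.out.println("http://localhost:8983/solr/collection1/select?"
                + buildSortParams("b4_location__geo_si", "47.36667,8.55"));
    }
}
```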

> solr.LatLonPointSpatialField : sorting by geodist fails
> ---
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Priority: Blocker
>
> Im switching my schemas from derprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sortquery (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}






[jira] [Updated] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support

2017-11-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11412:

Attachment: (was: CDCR-bidir.png)

> Documentation changes for SOLR-11003: Bi-directional CDCR support
> -
>
> Key: SOLR-11412
> URL: https://issues.apache.org/jira/browse/SOLR-11412
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, documentation
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch
>
>
> Since SOLR-11003: Bi-directional CDCR scenario support, is reaching its 
> conclusion. The relevant changes in documentation needs to be done.






[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support

2017-11-17 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16257702#comment-16257702
 ] 

Amrit Sarkar commented on SOLR-11412:
-

Fixed the patch and added Erick's SOLR-11635 to bi-dir CDCR configurations. 
Also updated Cdcr-bidir.png.

[~varunthacker] this is ready to go, and awaiting your review and feedback.

> Documentation changes for SOLR-11003: Bi-directional CDCR support
> -
>
> Key: SOLR-11412
> URL: https://issues.apache.org/jira/browse/SOLR-11412
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, documentation
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch
>
>
> Since SOLR-11003: Bi-directional CDCR scenario support, is reaching its 
> conclusion. The relevant changes in documentation needs to be done.






[jira] [Updated] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support

2017-11-17 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11412:

Attachment: CDCR_bidir.png
SOLR-11412.patch

> Documentation changes for SOLR-11003: Bi-directional CDCR support
> -
>
> Key: SOLR-11412
> URL: https://issues.apache.org/jira/browse/SOLR-11412
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, documentation
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: CDCR_bidir.png, SOLR-11412.patch, SOLR-11412.patch, 
> SOLR-11412.patch, SOLR-11412.patch
>
>
> Since SOLR-11003: Bi-directional CDCR scenario support, is reaching its 
> conclusion. The relevant changes in documentation needs to be done.






[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs

2017-11-17 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16257705#comment-16257705
 ] 

Amrit Sarkar commented on SOLR-8389:


[~prusko],

Thank you for coming up with the patch. Allow me some time to go through the 
improvement; I will definitely seek your help and collaboration.

Thanks
Amrit Sarkar

> Convert CDCR peer cluster and other configurations into collection properties 
> modifiable via APIs
> -
>
> Key: SOLR-8389
> URL: https://issues.apache.org/jira/browse/SOLR-8389
> Project: Solr
>  Issue Type: Improvement
>  Components: CDCR, SolrCloud
>Reporter: Shalin Shekhar Mangar
> Attachments: SOLR-8389.patch
>
>
> CDCR configuration is kept inside solrconfig.xml which makes it difficult to 
> add or change peer cluster configuration.
> I propose to move all CDCR config to collection level properties in cluster 
> state so that they can be modified using the existing modify collection API.






[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-18 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11598:

Attachment: SOLR-11598.patch

Looking at the Single, Double, .. Quad {{SortDoc}} implementations in 
{{ExportWriter.java}}, they seem like repeated code, since most of the logic is 
already in {{SortDoc}} except the {{compareTo}} function, which I implemented in 
the newly uploaded patch. All the tests pass.

The patch also increases the max sort fields to 10, as repeated tests on a large 
dataset with more sort fields showed very little difference in performance. 
Looking closely at the {{lessThan}} and {{compareTo}} methods, the performance 
cost appears to grow linearly with the number of sort fields, not exponentially 
or polynomially.
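To illustrate why the cost grows roughly linearly with the number of sort fields, here is a toy lexicographic comparator (my own sketch, not the ExportWriter code) over int[] sort keys: each comparison touches at most k fields and short-circuits on the first difference, so sorting n docs costs about O(n log n * k):

```java
import java.util.Arrays;
import java.util.Comparator;

public class LexSort {

    // Compares two k-field sort keys field by field; worst case O(k) per
    // comparison, so the per-comparison cost is linear in the sort-field count
    static final Comparator<int[]> LEX = (a, b) -> {
        for (int i = 0; i < a.length; i++) {
            int cmp = Integer.compare(a[i], b[i]);
            if (cmp != 0) {
                return cmp; // short-circuit on the first differing field
            }
        }
        return 0;
    };

    public static void main(String[] args) {
        // Three "documents" with three sort fields each
        int[][] docs = { {1, 2, 9}, {1, 1, 5}, {0, 7, 7} };
        Arrays.sort(docs, LEX);
        System.out.println(Arrays.deepToString(docs));
        // prints [[0, 7, 7], [1, 1, 5], [1, 2, 9]]
    }
}
```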

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on an 10 
> dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10 dimensional rollups. I did not read any 
> limitation on the sorting in the documentation and we went ahead with the 
> installation of 6.6.1. Now we are blocked with this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:5

[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258392#comment-16258392
 ] 

Amrit Sarkar commented on SOLR-11598:
-

[~aroopganguly],

bq. I will perform tests with the patch and share results if permitted.
I think everyone would be pleased if you share. I tested with 1M records and 
saw almost no performance degradation, but I think we need to verify this on a 
larger dataset.

bq. Also, if you have determined this to have O(N) performance characteristic, 
are you planning to make it a lot larger and not bounded under 10? 
There may well be some factor I am missing in the performance of sorting 
n-dimensional keys, as [~joel.bernstein] mentioned. I think after analysing your 
test results, we can safely conclude whether we can increase the bound further 
or whether even 10 is too high.

Thanks.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on an 10 
> dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10 dimensional rollups. I did not read any 
> limitation on the sorting in the documentation and we went ahead with the 
> installation of 6.6.1. Now we are blocked with this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.Scop

[jira] [Comment Edited] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258474#comment-16258474
 ] 

Amrit Sarkar edited comment on SOLR-11600 at 11/19/17 12:44 PM:


Examples are listed under 
https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses
 and http://joelsolr.blogspot.in/2015/04/the-streaming-api-solrjio-basics.html.

I have cooked one example against the {{master}} branch, which strictly requires 
httpClient::4.5.3

{code}
package stream.example;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.eval.DivideEvaluator;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.SelectStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.client.solrj.io.stream.TupleStream;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.util.ArrayList;
import java.util.List;

public class QuerySolr {

private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

static StreamFactory streamFactory = new StreamFactory()
.withCollectionZkHost("collection1","localhost:9983")
.withFunctionName("select", SelectStream.class)
.withFunctionName("search", CloudSolrStream.class)
.withFunctionName("div", DivideEvaluator.class);

public static void main(String[] args) throws IOException, 
SolrServerException {

SelectStream stream = (SelectStream)streamFactory
.constructStream("select(\n" +
"  search(collection1, fl=\"id,A_i,B_i\", q=\"*:*\", 
sort=\"id asc\"),\n" +
"  id as UNIQUE_KEY,\n" +
"  div(A_i,B_i) as divRes\n" +
")");

attachStreamFactory(stream);

List<Tuple> tuples = getTuples(stream);
for (Tuple tuple : tuples) {
log.info("tuple: " + tuple.getMap());
System.out.println("tuple: " + tuple.getMap());
}
System.exit(0);
}

private static void attachStreamFactory(TupleStream tupleStream) {
StreamContext context = new StreamContext();
context.setSolrClientCache(new SolrClientCache());
context.setStreamFactory(streamFactory);
tupleStream.setStreamContext(context);
}

private static List<Tuple> getTuples(TupleStream tupleStream) throws 
IOException {
tupleStream.open();
List<Tuple> tuples = new ArrayList<>();
for(;;) {
Tuple t = tupleStream.read();
if(t.EOF) {
break;
} else {
tuples.add(t);
}
}
tupleStream.close();
return tuples;
}
}
{code}

I need {{System.exit(0);}} to terminate the program, so I am pretty sure some 
httpclient is not getting closed properly.

*_Also, the patch above is absolutely not required to make this work_*; we can 
move forward with the above examples, and streams can be constructed without 
adding constructors to each stream source, decorator, or evaluator. The only 
condition is that we have to pass our own {{streamFactory}}.

Hope it helps.

P.S. Please disregard the PATCH, it serves no purpose.


was (Author: sarkaramr...@gmail.com):
Examples are listed under 
https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses
 and http://joelsolr.blogspot.in/2015/04/the-streaming-api-solrjio-basics.html.

I have cooked one example against the {{master}} branch, which strictly requires 
httpClient::4.5.3

{code}
package stream.example;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.eval.DivideEvaluator;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.SelectStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.client.solrj.io.stream.TupleStream;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.util.ArrayList;
import java.util.List;

public class QuerySolr {

private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

static StreamFactory streamFactory = new StreamFactory()
.withCollectionZkHost(

[jira] [Commented] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16258474#comment-16258474
 ] 

Amrit Sarkar commented on SOLR-11600:
-

Examples are listed under 
https://lucene.apache.org/solr/guide/6_6/streaming-expressions.html#StreamingExpressions-StreamingRequestsandResponses
 and http://joelsolr.blogspot.in/2015/04/the-streaming-api-solrjio-basics.html.

I have cooked one example against the {{master}} branch, which strictly requires 
httpClient::4.5.3

{code}
package stream.example;

import org.apache.solr.client.solrj.SolrServerException;
import org.apache.solr.client.solrj.io.SolrClientCache;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.eval.DivideEvaluator;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.SelectStream;
import org.apache.solr.client.solrj.io.stream.StreamContext;
import org.apache.solr.client.solrj.io.stream.TupleStream;
import org.apache.solr.client.solrj.io.stream.expr.StreamFactory;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import java.io.IOException;
import java.lang.invoke.MethodHandles;
import java.util.ArrayList;
import java.util.List;

public class QuerySolr {

private static final Logger log = 
LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());

static StreamFactory streamFactory = new StreamFactory()
.withCollectionZkHost("collection1","localhost:9983")
.withFunctionName("select", SelectStream.class)
.withFunctionName("search", CloudSolrStream.class)
.withFunctionName("div", DivideEvaluator.class);

public static void main(String[] args) throws IOException, 
SolrServerException {

SelectStream stream = (SelectStream)streamFactory
.constructStream("select(\n" +
"  search(collection1, fl=\"id,A_i,B_i\", q=\"*:*\", 
sort=\"id asc\"),\n" +
"  id as UNIQUE_KEY,\n" +
"  div(A_i,B_i) as divRes\n" +
")");

attachStreamFactory(stream);

List<Tuple> tuples = getTuples(stream);
for (Tuple tuple : tuples) {
log.info("tuple: " + tuple.getMap());
System.out.println("tuple: " + tuple.getMap());
}
System.exit(0);
}

private static void attachStreamFactory(TupleStream tupleStream) {
StreamContext context = new StreamContext();
context.setSolrClientCache(new SolrClientCache());
context.setStreamFactory(streamFactory);
tupleStream.setStreamContext(context);
}

private static List<Tuple> getTuples(TupleStream tupleStream) throws 
IOException {
tupleStream.open();
List<Tuple> tuples = new ArrayList<>();
for(;;) {
Tuple t = tupleStream.read();
if(t.EOF) {
break;
} else {
tuples.add(t);
}
}
tupleStream.close();
return tuples;
}
}
{code}

I need {{System.exit(0);}} to terminate the program, so I am pretty sure some 
httpclient is not getting closed properly.

*_Also, the patch above is absolutely not required to make this work_*; we can 
move forward with the above examples, and streams can be constructed without 
adding constructors to each stream source, decorator, or evaluator. The only 
condition is that we have to pass our own {{streamFactory}}.

Hope it helps.

P.S. Please disregard the PATCH, it serves no purpose.

> Add Constructor to SelectStream which takes StreamEvaluators as argument. 
> Current schema forces one to enter a stream expression string only 
> -
>
> Key: SOLR-11600
> URL: https://issues.apache.org/jira/browse/SOLR-11600
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Affects Versions: 6.6.1, 7.1
>Reporter: Aroop
>Priority: Trivial
>  Labels: easyfix
> Attachments: SOLR-11600.patch
>
>
> The use case is to be able able to supply stream evaluators over a rollup 
> stream in the following manner, but with instead with Strongly typed objects 
> and not steaming-expression strings.
> {code:bash}
> curl --data-urlencode 'expr=select(
> id,
> div(sum(cat1_i),sum(cat2_i)) as metric1,
> coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as 
> metric2,
> rollup(
> search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s 
> asc"),
> over="cat_s",sum(cat1_i),sum(cat2_i)
> ))' http://localhost:8983/solr/col1/stream
> {code}
> the current code base does not allow one to pro

[jira] [Updated] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API

2017-11-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11652:

Description: 
Cdcr transaction logs never get purged on the leader when Buffer is DISABLED 
from the CDCR API.

Steps to reproduce:

1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED.
2. Index a bunch of documents into the source; make sure a decent number of 
tlogs (>20) has been generated.
3. Disable BUFFER on the source and keep indexing.
4. Tlogs start to get purged on the follower nodes of the source, but the 
leader keeps accumulating them forever.

  was:
Cdcr transactions logs doesn't get purged on leader EVER when Buffer DISABLED 
from CDCR API.

More details to follow.


> Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is 
> disabled from CDCR API
> 
>
> Key: SOLR-11652
> URL: https://issues.apache.org/jira/browse/SOLR-11652
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>
> Cdcr transactions logs doesn't get purged on leader EVER when Buffer DISABLED 
> from CDCR API.
> Steps to reproduce:
> 1. Setup source and target collection cluster and START CDCR, BUFFER ENABLED.
> 2. Index bunch of documents into source; make sure we have generated tlogs in 
> decent numbers (>20)
> 3. Disable BUFFER on source and keep on indexing
> 4. Tlogs starts to get purges on follower nodes of Source, but Leader keeps 
> on accumulating ever.






[jira] [Updated] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API

2017-11-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11652:

Description: 
CDCR transaction logs never get purged on the leader when the buffer is 
DISABLED from the CDCR API.

Steps to reproduce:

1. Set up source and target collection clusters and START CDCR with BUFFER ENABLED.
2. Index a bunch of documents into the source; make sure we have generated a 
decent number of tlogs (>20).
3. Disable BUFFER via the API on the source and keep on indexing.
4. Tlogs start to get purged on the follower nodes of the Source, but the 
Leader keeps accumulating them forever.

  was:
CDCR transaction logs never get purged on the leader when the buffer is 
DISABLED from the CDCR API.

Steps to reproduce:

1. Set up source and target collection clusters and START CDCR with BUFFER ENABLED.
2. Index a bunch of documents into the source; make sure we have generated a 
decent number of tlogs (>20).
3. Disable BUFFER on the source and keep on indexing.
4. Tlogs start to get purged on the follower nodes of the Source, but the 
Leader keeps accumulating them forever.


> Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is 
> disabled from CDCR API
> 
>
> Key: SOLR-11652
> URL: https://issues.apache.org/jira/browse/SOLR-11652
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>
> Cdcr transactions logs doesn't get purged on leader EVER when Buffer DISABLED 
> from CDCR API.
> Steps to reproduce:
> 1. Setup source and target collection cluster and START CDCR, BUFFER ENABLED.
> 2. Index bunch of documents into source; make sure we have generated tlogs in 
> decent numbers (>20)
> 3. Disable BUFFER via API on source and keep on indexing
> 4. Tlogs starts to get purges on follower nodes of Source, but Leader keeps 
> on accumulating ever.






[jira] [Commented] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259537#comment-16259537
 ] 

Amrit Sarkar commented on SOLR-11600:
-

Meanwhile, I took a second look at your description; you are asking for proper 
Java constructors. That is a bit challenging, considering {{StreamOperation}} 
is an interface and not a class to which we can pass an incoming raw string 
value. I will see what can be done.

> Add Constructor to SelectStream which takes StreamEvaluators as argument. 
> Current schema forces one to enter a stream expression string only 
> -
>
> Key: SOLR-11600
> URL: https://issues.apache.org/jira/browse/SOLR-11600
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Affects Versions: 6.6.1, 7.1
>Reporter: Aroop
>Priority: Trivial
>  Labels: easyfix
> Attachments: SOLR-11600.patch
>
>
> The use case is to be able to supply stream evaluators over a rollup 
> stream in the following manner, but instead with strongly typed objects 
> and not streaming-expression strings.
> {code:bash}
> curl --data-urlencode 'expr=select(
> id,
> div(sum(cat1_i),sum(cat2_i)) as metric1,
> coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as 
> metric2,
> rollup(
> search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s 
> asc"),
> over="cat_s",sum(cat1_i),sum(cat2_i)
> ))' http://localhost:8983/solr/col1/stream
> {code}
> the current code base does not allow one to provide selectedEvaluators in a 
> constructor, so one cannot prepare their select stream via java code:
> {code:java}
> public class SelectStream extends TupleStream implements Expressible {
> private static final long serialVersionUID = 1L;
> private TupleStream stream;
> private StreamContext streamContext;
> private Map selectedFields;
> private Map selectedEvaluators;
> private List operations;
> public SelectStream(TupleStream stream, List selectedFields) 
> throws IOException {
> this.stream = stream;
> this.selectedFields = new HashMap();
> Iterator var3 = selectedFields.iterator();
> while(var3.hasNext()) {
> String selectedField = (String)var3.next();
> this.selectedFields.put(selectedField, selectedField);
> }
> this.operations = new ArrayList();
> this.selectedEvaluators = new HashMap();
> }
> public SelectStream(TupleStream stream, Map 
> selectedFields) throws IOException {
> this.stream = stream;
> this.selectedFields = selectedFields;
> this.operations = new ArrayList();
> this.selectedEvaluators = new HashMap();
> }
> {code}






[jira] [Commented] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API

2017-11-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16259966#comment-16259966
 ] 

Amrit Sarkar commented on SOLR-11652:
-

More details on the behavior:

1. The CDCR target leader's tlogs don't get purged unless action=START is 
issued at the target.
2. The CDCR source leader's tlogs don't get purged when DISABLEBUFFER is 
issued via the API.
3. If the CDCR source is restarted after DISABLEBUFFER was issued via the API, 
it behaves normally.
4. Point #3 is expected: on startup the source reads the buffer config from 
ZooKeeper and marks the tlogs accordingly, just as it does when reading it 
from solrconfig.xml.
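For reference, the CDCR actions discussed above (START, DISABLEBUFFER, etc.) are plain HTTP calls against the collection's /cdcr handler. The helper below is only an illustrative sketch for composing those request URLs; the class and method names are mine, not part of Solr or SolrJ, and you would issue the resulting URL with any HTTP client:

```java
// Illustrative only: composes CDCR API request URLs for a collection's /cdcr
// handler. Class and method names here are hypothetical, not Solr's own.
public class CdcrUrlSketch {

    // Builds e.g. http://localhost:8983/solr/col1/cdcr?action=DISABLEBUFFER
    public static String actionUrl(String solrBase, String collection, String action) {
        return solrBase + "/" + collection + "/cdcr?action=" + action;
    }

    public static void main(String[] args) {
        String base = "http://localhost:8983/solr";
        // The two actions relevant to the behavior described above.
        System.out.println(actionUrl(base, "col1", "DISABLEBUFFER"));
        System.out.println(actionUrl(base, "col1", "START"));
    }
}
```

Issuing the START URL on the target is what lets the target leader begin purging its tlogs, per point #1.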

> Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is 
> disabled from CDCR API
> 
>
> Key: SOLR-11652
> URL: https://issues.apache.org/jira/browse/SOLR-11652
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>
> Cdcr transactions logs doesn't get purged on leader EVER when Buffer DISABLED 
> from CDCR API.
> Steps to reproduce:
> 1. Setup source and target collection cluster and START CDCR, BUFFER ENABLED.
> 2. Index bunch of documents into source; make sure we have generated tlogs in 
> decent numbers (>20)
> 3. Disable BUFFER via API on source and keep on indexing
> 4. Tlogs starts to get purges on follower nodes of Source, but Leader keeps 
> on accumulating ever.






[jira] [Updated] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11635:

Attachment: cdcr-doc.patch

I am attaching a patch for the CDCR doc, correcting the "Initial Startup" 
section to issue CDCR START on the target too.

> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
> Attachments: cdcr-doc.patch
>
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> Plus another couple of improvements, in particular a caution about why 
> buffering should be disabled most of the time.






[jira] [Commented] (SOLR-11635) CDCR Source configuration example in the ref guide leaves out important settings

2017-11-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11635?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260004#comment-16260004
 ] 

Amrit Sarkar commented on SOLR-11635:
-

[~varunthacker]

Not really; action=START triggers the processStateManager, bufferManager, 
replicator and the other CDCR components to get in sync and start replicating 
to the target with the available parameters. On the target, since no 
replication to another DC needs to be done, the casual wording "There is no 
need to run the /cdcr?action=START command on the Target" was probably used 
for that reason.

> CDCR Source configuration example in the ref guide leaves out important 
> settings
> 
>
> Key: SOLR-11635
> URL: https://issues.apache.org/jira/browse/SOLR-11635
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
> Attachments: cdcr-doc.patch
>
>
> If you blindly copy/paste the Source config from the example, your 
> transaction logs on the Source replicas will not be managed correctly.
> Plus another couple of improvements, in particular a caution about why 
> buffering should be disabled most of the time.






[jira] [Commented] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260797#comment-16260797
 ] 

Amrit Sarkar commented on SOLR-11600:
-

Thank you [~joel.bernstein] for the explanation; 

> Each expression has it's own set of rules for the parameters that it accepts 
> so we can get very specific with how type safety is handled
I completely understand this from the following example:
{code}
replace( fieldA, add( fieldB, if( eq(fieldC,0), 0, 1)))
{code}
This kind of nested evaluation and operation cannot be built with the Java 
constructors currently available: the evaluators and operations mostly have 
just one constructor, which takes a {{StreamExpression}} 
(StreamExpressionParameter interface) parameter that the evaluators and 
operators themselves do not implement (they implement the Expressible 
interface).
{code}
  public AddEvaluator(StreamExpression expression, StreamFactory factory) 
throws IOException{
super(expression, factory);

if(containedEvaluators.size() < 1){
  throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - 
expecting at least one value but found 
%d",expression,containedEvaluators.size()));
}
  }
{code}

To accommodate the above request (strongly typed Java objects throughout), we 
need to create rule-based constructors for all the evaluators and operators, 
so that they can be used in {{SelectStream}}.
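To make the idea concrete, here is a minimal, self-contained sketch of what rule-based typed constructors could look like. All names here (TypedEvaluator, ConstSketch, AddSketch) are hypothetical stand-ins rather than the actual SolrJ classes; the point is only the pattern of accepting typed operands, with the arity rule enforced by the constructor signature, instead of parsing a StreamExpression string:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for SolrJ's evaluator hierarchy; only the constructor
// pattern (typed operands instead of an expression string) is the point here.
interface TypedEvaluator {
    double evaluate();
}

class ConstSketch implements TypedEvaluator {
    private final double value;
    ConstSketch(double value) { this.value = value; }
    public double evaluate() { return value; }
}

class AddSketch implements TypedEvaluator {
    private final List<TypedEvaluator> operands = new ArrayList<>();

    // Rule-based constructor: "add" requires at least one operand, which the
    // signature itself enforces at compile time.
    AddSketch(TypedEvaluator first, TypedEvaluator... rest) {
        operands.add(first);
        operands.addAll(Arrays.asList(rest));
    }

    public double evaluate() {
        double sum = 0;
        for (TypedEvaluator e : operands) sum += e.evaluate();
        return sum;
    }
}

public class TypedConstructorSketch {
    public static void main(String[] args) {
        // add(1, add(2, 3)) built from typed objects, no expression parsing.
        TypedEvaluator expr = new AddSketch(new ConstSketch(1),
                new AddSketch(new ConstSketch(2), new ConstSketch(3)));
        System.out.println(expr.evaluate()); // prints 6.0
    }
}
```

With this shape, nesting like replace(fieldA, add(fieldB, ...)) becomes ordinary object composition, and each evaluator's parameter rules move from runtime parsing into its constructor signature.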

> Add Constructor to SelectStream which takes StreamEvaluators as argument. 
> Current schema forces one to enter a stream expression string only 
> -
>
> Key: SOLR-11600
> URL: https://issues.apache.org/jira/browse/SOLR-11600
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Affects Versions: 6.6.1, 7.1
>Reporter: Aroop
>Priority: Trivial
>  Labels: easyfix
> Attachments: SOLR-11600.patch
>
>
> The use case is to be able to supply stream evaluators over a rollup 
> stream in the following manner, but instead with strongly typed objects 
> and not streaming-expression strings.
> {code:bash}
> curl --data-urlencode 'expr=select(
> id,
> div(sum(cat1_i),sum(cat2_i)) as metric1,
> coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as 
> metric2,
> rollup(
> search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s 
> asc"),
> over="cat_s",sum(cat1_i),sum(cat2_i)
> ))' http://localhost:8983/solr/col1/stream
> {code}
> the current code base does not allow one to provide selectedEvaluators in a 
> constructor, so one cannot prepare their select stream via java code:
> {code:java}
> public class SelectStream extends TupleStream implements Expressible {
> private static final long serialVersionUID = 1L;
> private TupleStream stream;
> private StreamContext streamContext;
> private Map selectedFields;
> private Map selectedEvaluators;
> private List operations;
> public SelectStream(TupleStream stream, List selectedFields) 
> throws IOException {
> this.stream = stream;
> this.selectedFields = new HashMap();
> Iterator var3 = selectedFields.iterator();
> while(var3.hasNext()) {
> String selectedField = (String)var3.next();
> this.selectedFields.put(selectedField, selectedField);
> }
> this.operations = new ArrayList();
> this.selectedEvaluators = new HashMap();
> }
> public SelectStream(TupleStream stream, Map 
> selectedFields) throws IOException {
> this.stream = stream;
> this.selectedFields = selectedFields;
> this.operations = new ArrayList();
> this.selectedEvaluators = new HashMap();
> }
> {code}






[jira] [Comment Edited] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-21 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16260797#comment-16260797
 ] 

Amrit Sarkar edited comment on SOLR-11600 at 11/21/17 2:34 PM:
---

Thank you [~joel.bernstein] for the explanation; 

bq. Each expression has it's own set of rules for the parameters that it 
accepts so we can get very specific with how type safety is handled
I completely understand this from the following example:
{code}
replace( fieldA, add( fieldB, if( eq(fieldC,0), 0, 1)))
{code}
This kind of nested evaluation and operation cannot be built with the Java 
constructors currently available: the evaluators and operations mostly have 
just one constructor, which takes a {{StreamExpression}} 
(StreamExpressionParameter interface) parameter that the evaluators and 
operators themselves do not implement (they implement the Expressible 
interface).
{code}
  public AddEvaluator(StreamExpression expression, StreamFactory factory) 
throws IOException{
super(expression, factory);

if(containedEvaluators.size() < 1){
  throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - 
expecting at least one value but found 
%d",expression,containedEvaluators.size()));
}
  }
{code}

To accommodate the above request (strongly typed Java objects throughout), we 
need to create rule-based constructors for all the evaluators and operators, 
so that they can be used in {{SelectStream}}.


was (Author: sarkaramr...@gmail.com):
Thank you [~joel.bernstein] for the explanation; 

> Each expression has it's own set of rules for the parameters that it accepts 
> so we can get very specific with how type safety is handled
I completely understand this by the following example
{code}
replace( fieldA, add( fieldB, if( eq(fieldC,0), 0, 1)))
{code}
This nested evaluation and operation is not possible to create with current 
Java constructors available, as the constructors of evaluators and operations 
have most just one type of constructor with {{StreamExpression}} 
(StreamExpressionParameter interface) parameter which the evaluators or 
operators doesn't implement (they implement Expressible interface).
{code}
  public AddEvaluator(StreamExpression expression, StreamFactory factory) 
throws IOException{
super(expression, factory);

if(containedEvaluators.size() < 1){
  throw new IOException(String.format(Locale.ROOT,"Invalid expression %s - 
expecting at least one value but found 
%d",expression,containedEvaluators.size()));
}
  }
{code}

To accomodate the above request, strongly types java objects for all, we need 
to create rule-based constructors for all the evaluators and operators, so that 
those can be used in {{SelectStream}}.

> Add Constructor to SelectStream which takes StreamEvaluators as argument. 
> Current schema forces one to enter a stream expression string only 
> -
>
> Key: SOLR-11600
> URL: https://issues.apache.org/jira/browse/SOLR-11600
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Affects Versions: 6.6.1, 7.1
>Reporter: Aroop
>Priority: Trivial
>  Labels: easyfix
> Attachments: SOLR-11600.patch
>
>
> The use case is to be able to supply stream evaluators over a rollup 
> stream in the following manner, but instead with strongly typed objects 
> and not streaming-expression strings.
> {code:bash}
> curl --data-urlencode 'expr=select(
> id,
> div(sum(cat1_i),sum(cat2_i)) as metric1,
> coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as 
> metric2,
> rollup(
> search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s 
> asc"),
> over="cat_s",sum(cat1_i),sum(cat2_i)
> ))' http://localhost:8983/solr/col1/stream
> {code}
> the current code base does not allow one to provide selectedEvaluators in a 
> constructor, so one cannot prepare their select stream via java code:
> {code:java}
> public class SelectStream extends TupleStream implements Expressible {
> private static final long serialVersionUID = 1L;
> private TupleStream stream;
> private StreamContext streamContext;
> private Map selectedFields;
> private Map selectedEvaluators;
> private List operations;
> public SelectStream(TupleStream stream, List selectedFields) 
> throws IOException {
> this.stream = stream;
> this.selectedFields = new HashMap();
> Iterator var3 = selectedFields.iterator();
> while(var3.hasNext()) {
> String selectedField = (String)var3.next();
> this.selectedFields.put(selectedField, sel

[jira] [Updated] (SOLR-11600) Add Constructor to SelectStream which takes StreamEvaluators as argument. Current schema forces one to enter a stream expression string only

2017-11-06 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11600?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11600:

Attachment: SOLR-11600.patch

Uploaded a first draft patch, without tests. It can be coded better. The thing 
to notice is that we have to pass our own StreamFactory every time we make a 
request from SolrJ.

Pretty sure Joel will have a better solution than this.

> Add Constructor to SelectStream which takes StreamEvaluators as argument. 
> Current schema forces one to enter a stream expression string only 
> -
>
> Key: SOLR-11600
> URL: https://issues.apache.org/jira/browse/SOLR-11600
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrJ, streaming expressions
>Affects Versions: 6.6.1, 7.1
>Reporter: Aroop
>Priority: Trivial
>  Labels: easyfix
> Fix For: 6.6.1, 7.0
>
> Attachments: SOLR-11600.patch
>
>
> The use case is to be able to supply stream evaluators over a rollup 
> stream in the following manner, but instead with strongly typed objects 
> and not streaming-expression strings.
> {code:bash}
> curl --data-urlencode 'expr=select(
> id,
> div(sum(cat1_i),sum(cat2_i)) as metric1,
> coalesce(div(sum(cat1_i),if(eq(sum(cat2_i),0),null,sum(cat2_i))),0) as 
> metric2,
> rollup(
> search(col1, q=*:*, fl="id,cat1_i,cat2_i,cat_s", qt="/export", sort="cat_s 
> asc"),
> over="cat_s",sum(cat1_i),sum(cat2_i)
> ))' http://localhost:8983/solr/col1/stream
> {code}
> the current code base does not allow one to provide selectedEvaluators in a 
> constructor, so one cannot prepare their select stream via java code:
> {code:java}
> public class SelectStream extends TupleStream implements Expressible {
> private static final long serialVersionUID = 1L;
> private TupleStream stream;
> private StreamContext streamContext;
> private Map selectedFields;
> private Map selectedEvaluators;
> private List operations;
> public SelectStream(TupleStream stream, List selectedFields) 
> throws IOException {
> this.stream = stream;
> this.selectedFields = new HashMap();
> Iterator var3 = selectedFields.iterator();
> while(var3.hasNext()) {
> String selectedField = (String)var3.next();
> this.selectedFields.put(selectedField, selectedField);
> }
> this.operations = new ArrayList();
> this.selectedEvaluators = new HashMap();
> }
> public SelectStream(TupleStream stream, Map 
> selectedFields) throws IOException {
> this.stream = stream;
> this.selectedFields = selectedFields;
> this.operations = new ArrayList();
> this.selectedEvaluators = new HashMap();
> }
> {code}






[jira] [Commented] (SOLR-11409) A ref guide page on setting up solr on aws

2017-11-07 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16242310#comment-16242310
 ] 

Amrit Sarkar commented on SOLR-11409:
-

[~varunthacker],

I see the {{dataDir}} change; we need an absolute path for dataDir. Since 
zookeeper_data has not been created yet, let's do the below:

{code}
# create data dir for ZooKeeper, edit zoo.cfg, uncomment autopurge parameters
$ mkdir ~/zookeeper_data
$ vim conf/zoo.cfg
# -- uncomment --
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
# -- edit --
dataDir=/home/ec2-user/zookeeper_data
{code}

I can create a patch for it, but it would be a minor change for you to commit, 
so let me know.

> A ref guide page on setting up solr on aws
> --
>
> Key: SOLR-11409
> URL: https://issues.apache.org/jira/browse/SOLR-11409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Assignee: Cassandra Targett
>Priority: Minor
> Fix For: 7.2, master (8.0)
>
> Attachments: SOLR-11409.patch, SOLR-11409_followup_minor.patch, 
> quick-start-aws-key.png, quick-start-aws-security-1.png, 
> quick-start-aws-security-2.png
>
>
> It will be nice if we have a dedicated page on installing solr on aws . 
> At the end we could even link to 
> http://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html






[jira] [Commented] (SOLR-9272) Auto resolve zkHost for bin/solr zk for running Solr

2017-11-07 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16242335#comment-16242335
 ] 

Amrit Sarkar commented on SOLR-9272:


Looking for any suggestions. Not comfortable saying this, but we can ignore 
tests for this utility, as I can see a hardcoded "http://localhost:." default 
Solr URL that is not being tested anywhere. Pretty sure that if tested under 
{{SolrTestCaseJ}}, it will randomize the SSL config.

Let me know [~erickerickson] [~janhoy]

> Auto resolve zkHost for bin/solr zk for running Solr
> 
>
> Key: SOLR-9272
> URL: https://issues.apache.org/jira/browse/SOLR-9272
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: scripts and tools
>Affects Versions: 6.2
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: newdev
> Attachments: SOLR-9272.patch, SOLR-9272.patch, SOLR-9272.patch, 
> SOLR-9272.patch, SOLR-9272.patch
>
>
> Spinoff from SOLR-9194:
> We can skip requiring {{-z}} for {{bin/solr zk}} for a Solr that is already 
> running. We can optionally accept the {{-p}} parameter instead, and with that 
> use StatusTool to fetch the {{cloud/ZooKeeper}} property from there. It's 
> easier to remember solr port than zk string.
> Example:
> {noformat}
> bin/solr start -c -p 9090
> bin/solr zk ls / -p 9090
> {noformat}






[jira] [Commented] (SOLR-11625) Solr may remove live index on Solr shutdown

2017-11-09 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11625?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16246678#comment-16246678
 ] 

Amrit Sarkar commented on SOLR-11625:
-

[~mar-kolya],

I tried to replicate the issue on a t2x.2xlarge AWS instance, with heavy 
indexing (10 simultaneous indexing threads pushing 1000-doc batches) and 
restarting a single-node cluster with embedded ZooKeeper. I was not able to 
get the "InterruptException" or the "old index directories ..." error.

Can you share more details on the test scenario? Number of nodes, indexing 
rate, etc. Thank you in advance.

> Solr may remove live index on Solr shutdown
> ---
>
> Key: SOLR-11625
> URL: https://issues.apache.org/jira/browse/SOLR-11625
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6.1
>Reporter: Nikolay Martynov
>
> This has been observed in the wild:
> {noformat}
> 2017-11-07 02:35:46.909 ERROR (qtp1724399560-8090) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.SolrCore 
> :java.nio.channels.ClosedByInterruptException
>   at 
> java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
>   at sun.nio.ch.FileChannelImpl.size(FileChannelImpl.java:315)
>   at 
> org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:242)
>   at 
> org.apache.lucene.store.NRTCachingDirectory.openInput(NRTCachingDirectory.java:192)
>   at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:356)
>   at 
> org.apache.solr.core.SolrCore.cleanupOldIndexDirectories(SolrCore.java:3044)
>   at org.apache.solr.core.SolrCore.close(SolrCore.java:1575)
>   at org.apache.solr.servlet.HttpSolrCall.destroy(HttpSolrCall.java:582)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:374)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at 
> org.eclipse.jetty.rewrite.handler.RewriteHandler.handle(RewriteHandler.java:335)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
>   at org.eclipse.jetty.server.Server.handle(Server.java:534)
>   at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
>   at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
>   at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
>   at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
>   at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.executeProduceConsume(ExecuteProduceConsume.java:303)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceConsume(ExecuteProduceConsume.java:148)
>   at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:136)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
>   at java.lang.Thread.run(Thread.java:748)
> 2017-11-07 02:35:46.912 INFO  
> (OldIndexDirectoryCleanupThreadForCore-xxx_shard4_replica8) [c:xxx s:shard4 
> r:core_node399 x:xxx_shard4_replica8] o.a.s.c.DirectoryFactory Found 1 old 
> index directories to clean-up under 
> /opt/solr/server/solr/xxx_shard4_replica8/data/ afterReload=false

[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-10 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11598:

Attachment: SOLR-11598-6_6.patch
SOLR-11598-master.patch

[~aroopganguly],

I have attached patches against the {{master}} and {{branch_6_6}} branches 
supporting a maximum of 8 fields instead of the current 4, so that we can 
analyse how performance is affected. I have also included very basic, but 
effective, tests for {{ExportWriter}}.
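The change itself is conceptually a one-constant guard. Below is a hedged, self-contained sketch of that kind of bound check; the class and method names are mine (the real check lives in ExportWriter.getSortDoc and throws IOException with "A max of 4 sorts can be specified"), and an unchecked exception is used here only to keep the sketch runnable on its own:

```java
// Illustrative sketch of the bounded sort-field check in ExportWriter.getSortDoc.
// Names are hypothetical, not Solr's; the real code throws IOException.
public class SortLimitSketch {

    // The attached patches raise this bound from 4 to 8.
    static final int MAX_SORT_FIELDS = 8;

    // Splits a Solr-style sort param ("fieldA asc,fieldB desc,...") and
    // rejects it when the number of sort clauses exceeds the bound.
    static String[] parseSortFields(String sortParam) {
        String[] sorts = sortParam.split(",");
        if (sorts.length > MAX_SORT_FIELDS) {
            throw new IllegalArgumentException(
                "A max of " + MAX_SORT_FIELDS + " sorts can be specified");
        }
        return sorts;
    }

    public static void main(String[] args) {
        // Three sort clauses: fine under either the old (4) or raised (8) bound.
        System.out.println(parseSortFields("a asc,b desc,c asc").length); // prints 3
    }
}
```

Making the bound a constant (or even configurable) is what allows experimenting with higher dimensionality while still measuring the performance impact.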

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6.patch, SOLR-11598-master.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on a 
> 10-dimensional document.
> I am unable to get correct results on this query as I am bound by the 
> limitation of the export handler, which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> This is a big limitation for me, as I am working on a feature with a tight 
> deadline where I need to support 10 dimensional rollups. I did not read any 
> limitation on the sorting in the documentation and we went ahead with the 
> installation of 6.6.1. Now we are blocked with this limitation.
> This is a Jira to track this work.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.se

[jira] [Updated] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2017-11-10 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11598:

Attachment: SOLR-11598-6_6-streamtests

Added another "experimental" patch, {{SOLR-11598-6_6-streamtests}}, against 
{{branch_6_6}} with *nocommit*; stream expressions (unique & rollup) can now 
take more than 4 sort fields.

Please note, these patches are purely for experimental performance analysis.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch

[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support

2017-11-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16262486#comment-16262486
 ] 

Amrit Sarkar commented on SOLR-11412:
-

+1. There is a lot of scrolling up and down right now. Happy with the 4 
sub-sections too.

> Documentation changes for SOLR-11003: Bi-directional CDCR support
> -
>
> Key: SOLR-11412
> URL: https://issues.apache.org/jira/browse/SOLR-11412
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, documentation
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: CDCR_bidir.png, SOLR-11412.patch, SOLR-11412.patch, 
> SOLR-11412.patch, SOLR-11412.patch
>
>
> Since SOLR-11003: Bi-directional CDCR scenario support, is reaching its 
> conclusion. The relevant changes in documentation needs to be done.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11601) solr.LatLonPointSpatialField : sorting by geodist fails

2017-11-22 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16262493#comment-16262493
 ] 

Amrit Sarkar commented on SOLR-11601:
-

I am using SolrJ 6.6:

How about this:
{code}
query.set("sfield","b4_location_geo_si");
query.set("pt","47.36667,8.55");
query.setSort( "geodist()", SolrQuery.ORDER.asc);
{code}

I don't see any other way, to be honest.

> solr.LatLonPointSpatialField : sorting by geodist fails
> ---
>
> Key: SOLR-11601
> URL: https://issues.apache.org/jira/browse/SOLR-11601
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.6
>Reporter: Clemens Wyss
>Priority: Blocker
>
> Im switching my schemas from derprecated solr.LatLonType to 
> solr.LatLonPointSpatialField.
> Now my sortquery (which used to work with solr.LatLonType):
> *sort=geodist(b4_location__geo_si,47.36667,8.55) asc*
> raises the error
> {color:red}*"sort param could not be parsed as a query, and is not a field 
> that exists in the index: geodist(b4_location__geo_si,47.36667,8.55)"*{color}
> Invoking sort using syntax 
> {color:#14892c}sfield=b4_location__geo_si&pt=47.36667,8.55&sort=geodist() asc
> works as expected though...{color}






[jira] [Commented] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API

2017-11-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16265208#comment-16265208
 ] 

Amrit Sarkar commented on SOLR-11652:
-

I had a chance to chat with [~erickerickson], [~varunthacker] to discuss the 
significance of "buffering" in CDC replication.

Motivation for buffering in CDCR, as described on SOLR-11069 by Renaud:

_The original goal of the buffer on cdcr is indeed to keep the tlogs 
indefinitely until the buffer is deactivated 
(https://lucene.apache.org/solr/guide/7_1/cross-data-center-replication-cdcr.html#the-buffer-element).
 This was useful for example during maintenance operations, to ensure that the 
source cluster will keep all the tlogs until the target cluster is properly 
initialised. In this scenario, one will activate the buffer on the source. The 
source will start to store all the tlogs (and does not purge them). Once the 
target cluster is initialised, and has registered a tlog pointer on the source, 
one can deactivate the buffer on the source and the tlogs will start to be 
purged once they are read by the target cluster._

What I understood from looking at the code, beyond what Renaud explained:

_Buffer is always enabled on non-leader nodes of the source. In the source DC, 
sync between leaders and followers is maintained by the buffer. If the leader 
goes down and someone else picks up, it uses the bufferLog to determine the 
current version point._

Essentially, buffering was introduced to remind the source that no updates have 
been sent over, because the target is not ready or CDCR has not been started. 
The LastProcessedVersion on the source is -1 when the buffer is enabled, 
indicating that no updates have been forwarded and it has to keep track of all 
tlogs. Once disabled, it starts to show the correct version that has been 
replicated to the target.

In Solr 6.2, bootstrapping was introduced, which takes care of the above 
use-case very well: the source is up and running, has already received a bunch 
of updates / documents, and either we have not started CDCR yet or the target 
has not been available until now. Whenever CDC replication is started 
(action=START invoked), bootstrap is called implicitly, which copies the entire 
index folder (not the tlogs) to the target. This is much faster and more 
effective than the earlier setup, where all updates from the beginning were 
sent to the target linearly, in the batch size defined in the cdcr config. That 
earlier setup was achieved by buffering (the tlogs from the beginning).

Today, if we look at the current CDCR documentation page, buffering is 
"disabled" by default on both source and target. We don't see any purpose 
served by CDCR buffering, and it is quite an overhead, considering it can take 
a lot of heap space (tlog pointers) and retain tlogs on disk forever when 
enabled. Also, even if we disable the buffer via the API on the source, given 
it was enabled at startup, tlogs are never purged on the leader nodes of the 
source's shards; refer to SOLR-11652.

We propose to make the buffer state default to "DISABLED" in the code 
(CdcrBufferManager) and deprecate its APIs (ENABLE / DISABLE buffer). It will 
still run implicitly for non-leader nodes on the source, and no user 
intervention is required whatsoever.
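For reference, the buffer element discussed above is configured inside the CDCR request handler in solrconfig.xml; a minimal sketch trimmed to the buffer part only (per the ref-guide page linked above):

```xml
<requestHandler name="/cdcr" class="solr.CdcrRequestHandler">
  <!-- defaultState is what the proposal would effectively hard-wire to "disabled" -->
  <lst name="buffer">
    <str name="defaultState">disabled</str>
  </lst>
</requestHandler>
```

Deprecating the ENABLE / DISABLE buffer APIs would leave this element as the only place the state is ever visible.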

> Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is 
> disabled from CDCR API
> 
>
> Key: SOLR-11652
> URL: https://issues.apache.org/jira/browse/SOLR-11652
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
>
> Cdcr transaction logs never get purged on the leader when Buffer is DISABLED 
> via the CDCR API.
> Steps to reproduce:
> 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED.
> 2. Index a bunch of documents into the source; make sure we have generated tlogs 
> in decent numbers (>20).
> 3. Disable BUFFER via the API on the source and keep on indexing.
> 4. Tlogs start to get purged on follower nodes of the source, but the leader 
> keeps accumulating them forever.






[jira] [Created] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection

2017-11-24 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-11671:
---

 Summary: CdcrUpdateLog should be enabled smartly for Cdcr 
configured collection
 Key: SOLR-11671
 URL: https://issues.apache.org/jira/browse/SOLR-11671
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: CDCR
Affects Versions: 7.2
Reporter: Amrit Sarkar


{{CdcrUpdateLog}} should be configured smartly by itself collection config has 
CDCR Request Handler specified.






[jira] [Updated] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection

2017-11-24 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11671:

Description: {{CdcrUpdateLog}} should be configured smartly by itself when 
collection config has *CDCR Request Handler* specified.  (was: 
{{CdcrUpdateLog}} should be configured smartly by itself collection config has 
CDCR Request Handler specified.)

> CdcrUpdateLog should be enabled smartly for Cdcr configured collection
> --
>
> Key: SOLR-11671
> URL: https://issues.apache.org/jira/browse/SOLR-11671
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
>
> {{CdcrUpdateLog}} should be configured smartly by itself when collection 
> config has *CDCR Request Handler* specified.






[jira] [Commented] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection

2017-11-24 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16265240#comment-16265240
 ] 

Amrit Sarkar commented on SOLR-11671:
-

Patch attached, where UpdateHandler looks through all its request handlers and, 
if it finds an implementation of {{CdcrRequestHandler}}, assigns a 
CdcrUpdateLog with the arguments passed in solrconfig.xml. I understand 
{{UpdateHandler}} is an abstract class, but the check-for-CDCR implementation 
would live right there.

If given +1 on the approach, I will change the CDCR-related tests everywhere to 
bind to it.
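The idea can be sketched with plain-Java stand-ins for Solr's handler types (the class names below are hypothetical simplifications, only to show the handler scan choosing the update log implementation):

```java
import java.util.List;

public class UpdateLogSelector {
    // Hypothetical stand-ins for Solr's actual handler hierarchy.
    interface RequestHandler {}
    static class CdcrRequestHandler implements RequestHandler {}
    static class SearchHandler implements RequestHandler {}

    // If any registered handler is a CDCR handler, pick the CdcrUpdateLog;
    // otherwise fall back to the plain UpdateLog.
    static String selectUpdateLogClass(List<RequestHandler> handlers) {
        for (RequestHandler h : handlers) {
            if (h instanceof CdcrRequestHandler) {
                return "solr.CdcrUpdateLog";
            }
        }
        return "solr.UpdateLog";
    }

    public static void main(String[] args) {
        System.out.println(selectUpdateLogClass(
                List.of(new SearchHandler(), new CdcrRequestHandler())));
    }
}
```

The real check would run once at core load, so the scan cost is negligible.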

> CdcrUpdateLog should be enabled smartly for Cdcr configured collection
> --
>
> Key: SOLR-11671
> URL: https://issues.apache.org/jira/browse/SOLR-11671
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
>
> {{CdcrUpdateLog}} should be configured smartly by itself when collection 
> config has *CDCR Request Handler* specified.






[jira] [Updated] (SOLR-11671) CdcrUpdateLog should be enabled smartly for Cdcr configured collection

2017-11-24 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11671:

Attachment: SOLR-11671.patch

> CdcrUpdateLog should be enabled smartly for Cdcr configured collection
> --
>
> Key: SOLR-11671
> URL: https://issues.apache.org/jira/browse/SOLR-11671
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.2
>Reporter: Amrit Sarkar
> Attachments: SOLR-11671.patch
>
>
> {{CdcrUpdateLog}} should be configured smartly by itself when collection 
> config has *CDCR Request Handler* specified.






[jira] [Updated] (SOLR-11652) Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is disabled from CDCR API

2017-11-24 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11652?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11652:

Attachment: SOLR-11652.patch

Please note, in the patch I have commented out the relevant code in the module. 
I can remove it completely if that is how deprecation of APIs is done.

> Cdcr TLogs doesn't get purged for Source collection Leader when Buffer is 
> disabled from CDCR API
> 
>
> Key: SOLR-11652
> URL: https://issues.apache.org/jira/browse/SOLR-11652
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Amrit Sarkar
> Attachments: SOLR-11652.patch
>
>
> Cdcr transaction logs never get purged on the leader when Buffer is DISABLED 
> via the CDCR API.
> Steps to reproduce:
> 1. Set up source and target collection clusters and START CDCR, BUFFER ENABLED.
> 2. Index a bunch of documents into the source; make sure we have generated tlogs 
> in decent numbers (>20).
> 3. Disable BUFFER via the API on the source and keep on indexing.
> 4. Tlogs start to get purged on follower nodes of the source, but the leader 
> keeps accumulating them forever.






[jira] [Commented] (SOLR-11705) Java Class Cast Exception while loading custom plugin

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11705?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272310#comment-16272310
 ] 

Amrit Sarkar commented on SOLR-11705:
-

Details?

> Java Class Cast Exception while loading custom plugin
> -
>
> Key: SOLR-11705
> URL: https://issues.apache.org/jira/browse/SOLR-11705
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 7.1
>Reporter: As Ma
>







[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272773#comment-16272773
 ] 

Amrit Sarkar commented on SOLR-11676:
-

Varun, I can see what you are saying:

{{CreateCollectionCmd}}::
{code}
  int numNrtReplicas = message.getInt(NRT_REPLICAS, 
message.getInt(REPLICATION_FACTOR, numTlogReplicas>0?0:1));
{code}

But this code suggests it will indeed pick {{replicationFactor}}. I will attach 
a debugger and test.
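The nested defaults in that line can be checked with a plain-Java stand-in for {{message.getInt}} (a Map in place of {{ZkNodeProps}}; names are illustrative): a message carrying only {{replicationFactor}} should resolve nrtReplicas to that value.

```java
import java.util.HashMap;
import java.util.Map;

public class NrtReplicasDefault {
    // Stand-in for ZkNodeProps.getInt(key, default).
    static int getInt(Map<String, Object> props, String key, int def) {
        Object v = props.get(key);
        return v == null ? def : Integer.parseInt(v.toString());
    }

    // Same nesting as the CreateCollectionCmd line quoted above:
    // nrtReplicas falls back to replicationFactor, which falls back
    // to 1 (or 0 when tlog replicas are requested).
    static int resolveNrtReplicas(Map<String, Object> message, int numTlogReplicas) {
        return getInt(message, "nrtReplicas",
                getInt(message, "replicationFactor", numTlogReplicas > 0 ? 0 : 1));
    }

    public static void main(String[] args) {
        Map<String, Object> message = new HashMap<>();
        message.put("replicationFactor", "2");
        System.out.println(resolveNrtReplicas(message, 0)); // replicationFactor honored
    }
}
```

If this logic is right, the bug would have to be elsewhere, e.g. in what actually gets written to state.json.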



> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}






[jira] [Commented] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272856#comment-16272856
 ] 

Amrit Sarkar commented on SOLR-11676:
-

Figured out. Attached patch, verified it's working. {{ClusterStateTest}} is 
very poorly written in terms of verifying the actual collection properties 
passed.

{code}
modified:   
solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java
modified:   
solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
{code} 

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}






[jira] [Updated] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11676:

Attachment: SOLR-11676.patch

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-11676.patch
>
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}






[jira] [Comment Edited] (SOLR-11676) nrt replicas is always 1 when not specified

2017-11-30 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16272856#comment-16272856
 ] 

Amrit Sarkar edited comment on SOLR-11676 at 11/30/17 4:07 PM:
---

Figured out. Attached patch, verified it's working. {{ClusterStateTest}} is 
very poorly written in terms of verifying the actual collection properties 
passed.

{code}
modified:   
solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java
modified:   
solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
{code} 

If we decide to write tests for the same, it will be a tad difficult.


was (Author: sarkaramr...@gmail.com):
Figured out. Attached patch, verified it's working. {{ClusterStateTest}} is 
very poorly written in terms of verifying the actual collection properties 
passed.

{code}
modified:   
solr/core/src/java/org/apache/solr/cloud/OverseerCollectionMessageHandler.java
modified:   
solr/core/src/java/org/apache/solr/cloud/overseer/ClusterStateMutator.java
{code} 

> nrt replicas is always 1 when not specified
> ---
>
> Key: SOLR-11676
> URL: https://issues.apache.org/jira/browse/SOLR-11676
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
> Attachments: SOLR-11676.patch
>
>
> I created a 2 shard X 2 replica collection. Here's the log entry for it:
> {code}
> 2017-11-27 06:43:47.071 INFO  (qtp159259014-22) [   ] 
> o.a.s.h.a.CollectionsHandler Invoked Collection Action :create with params 
> replicationFactor=2&routerName=compositeId&collection.configName=_default&maxShardsPerNode=2&name=test_recovery&router.name=compositeId&action=CREATE&numShards=2&wt=json&_=1511764995711
>  and sendToOCPQueue=true
> {code}
> And then when I look at the state.json file I see nrtReplicas is set to 1. 
> Any combination of numShards and replicationFactor without explicitly 
> specifying the "nrtReplicas" param puts the "nrtReplicas" as 1 instead of 
> using the replicationFactor value
> {code}
> {"test_recovery":{
> "pullReplicas":"0",
> "replicationFactor":"2",
> ...
> "nrtReplicas":"1",
> "tlogReplicas":"0",
> ..
> {code}






[jira] [Created] (SOLR-11718) Deprecate CDCR Buffer APIs

2017-12-03 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-11718:
---

 Summary: Deprecate CDCR Buffer APIs
 Key: SOLR-11718
 URL: https://issues.apache.org/jira/browse/SOLR-11718
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: CDCR
Affects Versions: 7.1
Reporter: Amrit Sarkar
 Fix For: 7.2


Kindly see the discussion on SOLR-11652.

Today, if we look at the current CDCR documentation page, buffering is 
"disabled" by default on both source and target. We don't see any purpose 
served by CDCR buffering, and it is quite an overhead, considering it can take 
a lot of heap space (tlog pointers) and retain tlogs on disk forever when 
enabled. Also, even if we disable the buffer via the API on the source, given 
it was enabled at startup, tlogs are never purged on the leader nodes of the 
source's shards; refer to SOLR-11652.






[jira] [Updated] (SOLR-11718) Deprecate CDCR Buffer APIs

2017-12-03 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11718?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11718:

Attachment: SOLR-11652.patch

Please note, in the patch I have commented out the relevant code in the module. 
I can remove it completely if that is how deprecation of APIs is done.

> Deprecate CDCR Buffer APIs
> --
>
> Key: SOLR-11718
> URL: https://issues.apache.org/jira/browse/SOLR-11718
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.1
>Reporter: Amrit Sarkar
> Fix For: 7.2
>
> Attachments: SOLR-11652.patch
>
>
> Kindly see the discussion on SOLR-11652.
> Today, if we look at the current CDCR documentation page, buffering is 
> "disabled" by default on both source and target. We don't see any purpose 
> served by CDCR buffering, and it is quite an overhead, considering it can take 
> a lot of heap space (tlog pointers) and retain tlogs on disk forever when 
> enabled. Also, even if we disable the buffer via the API on the source, given 
> it was enabled at startup, tlogs are never purged on the leader nodes of the 
> source's shards; refer to SOLR-11652.






[jira] [Created] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2017-12-05 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-11724:
---

 Summary: Cdcr Bootstrapping does not cause "index copying" to 
follower nodes on Target
 Key: SOLR-11724
 URL: https://issues.apache.org/jira/browse/SOLR-11724
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: CDCR
Affects Versions: 7.1
Reporter: Amrit Sarkar


Please find the discussion on:
http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html

If we index a significant number of documents into the source, stop indexing, 
and then start CDCR, bootstrapping only copies the index to the leader node of 
each shard of the collection; followers never receive the documents/index 
until at least one document is inserted again on the source, which propagates 
to the target and makes the target collection trigger index replication to its 
followers.

This behavior needs to be addressed in a proper manner, either at the target 
collection or while bootstrapping.
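The symptom above can be checked by querying each replica of the target 
collection directly with {{distrib=false}} and comparing document counts. A 
small sketch (core names and counts below are illustrative only):

```python
# Sketch: detecting the leader/follower document mismatch described above by
# querying each core directly, bypassing distributed search (distrib=false).
# Core names and counts are placeholders for illustration.
def core_count_url(base: str, core: str) -> str:
    """URL that returns numFound for a single core, not the whole collection."""
    return f"{base}/solr/{core}/select?q=*:*&rows=0&distrib=false&wt=json"

def mismatched(counts: dict) -> bool:
    """True when the replicas of a shard disagree on document count."""
    return len(set(counts.values())) > 1

if __name__ == "__main__":
    # Counts as they would be fetched from each replica's core_count_url
    # response right after bootstrap: leader has the index, follower is empty.
    counts = {"cdcr-target_shard1_replica_n1": 1000,
              "cdcr-target_shard1_replica_n2": 0}
    print(mismatched(counts))  # True: followers empty after bootstrap
```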






[jira] [Commented] (SOLR-11724) Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target

2017-12-05 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278527#comment-16278527
 ] 

Amrit Sarkar commented on SOLR-11724:
-

[~shalinmangar] wanted to check with you whether this is the intended behavior.

> Cdcr Bootstrapping does not cause "index copying" to follower nodes on Target
> -
>
> Key: SOLR-11724
> URL: https://issues.apache.org/jira/browse/SOLR-11724
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.1
>Reporter: Amrit Sarkar
>
> Please find the discussion on:
> http://lucene.472066.n3.nabble.com/Issue-with-CDCR-bootstrapping-in-Solr-7-1-td4365258.html
> If we index a significant number of documents into the source, stop 
> indexing, and then start CDCR, bootstrapping only copies the index to the 
> leader node of each shard of the collection; followers never receive the 
> documents/index until at least one document is inserted again on the source, 
> which propagates to the target and makes the target collection trigger index 
> replication to its followers.
> This behavior needs to be addressed in a proper manner, either at the target 
> collection or while bootstrapping.






[jira] [Commented] (SOLR-11412) Documentation changes for SOLR-11003: Bi-directional CDCR support

2017-12-05 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16278528#comment-16278528
 ] 

Amrit Sarkar commented on SOLR-11412:
-

Thank you [~ctargett] for curating and committing.

> Documentation changes for SOLR-11003: Bi-directional CDCR support
> -
>
> Key: SOLR-11412
> URL: https://issues.apache.org/jira/browse/SOLR-11412
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR, documentation
>Reporter: Amrit Sarkar
>Assignee: Cassandra Targett
> Fix For: 7.2, master (8.0)
>
> Attachments: CDCR_bidir.png, SOLR-11412-split.patch, 
> SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, SOLR-11412.patch, 
> SOLR-11412.patch
>
>
> Since SOLR-11003: Bi-directional CDCR scenario support, is reaching its 
> conclusion. The relevant changes in documentation needs to be done.






[jira] [Commented] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-05 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16153335#comment-16153335
 ] 

Amrit Sarkar commented on SOLR-11278:
-

More detail:

{code}
  [beaster]   2> 76078 INFO  
(zkCallback-171-thread-1-processing-n:127.0.0.1:39903_solr) 
[n:127.0.0.1:39903_solr] o.a.s.h.CdcrProcessStateManager Received new CDCR 
process state from watcher: STARTED @ cdcr-source:shard1
  [beaster]   2> 76079 INFO  (qtp499216332-625) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.h.CdcrRequestHandler Boostrap is issued now. Request :: 
{action=BOOTSTRAP&qt=/cdcr&masterUrl=http://127.0.0.1:39903/solr/cdcr-source_shard1_replica_n1/&wt=javabin&version=2}
 : collection : cdcr-target
  [beaster]   2> 76079 INFO  (qtp499216332-625) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.h.CdcrRequestHandler bs runnable : 
org.apache.solr.handler.CdcrRequestHandler$$Lambda$256/1608384375@cfd51e6
  [beaster]   2> 76079 INFO  (qtp499216332-625) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.h.CdcrRequestHandler bs service : 
com.codahale.metrics.InstrumentedExecutorService@23aae09
  [beaster]   2> 76079 INFO  (qtp499216332-625) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.c.S.Request [cdcr-target_shard1_replica_n1]  webapp=/solr path=/cdcr 
params={qt=/cdcr&masterUrl=http://127.0.0.1:39903/solr/cdcr-source_shard1_replica_n1/&action=BOOTSTRAP&wt=javabin&version=2}
 status=0 QTime=0
  [beaster]   2> 76079 INFO  (qtp1089963867-718) [n:127.0.0.1:39903_solr 
c:cdcr-source s:shard1 r:core_node2 x:cdcr-source_shard1_replica_n1] 
o.a.s.c.S.Request [cdcr-source_shard1_replica_n1]  webapp=/solr path=/cdcr 
params={qt=/cdcr&_stateVer_=cdcr-source:7&action=queues&wt=javabin&version=2} 
status=0 QTime=0
  [beaster]   1> Cdcr queue response: 
{responseHeader={status=0,QTime=0},queues={127.0.0.1:37043/solr={cdcr-target={queueSize=-1703936001,lastTimestamp=}}},tlogTotalSize=4773,tlogTotalCount=1,updateLogSynchronizer=stopped}
  [beaster]   2> 76081 INFO  (qtp499216332-629) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 start 
commit{_version_=1577620265858236416,optimize=false,openSearcher=true,waitSearcher=true,expungeDeletes=false,softCommit=false,prepareCommit=false}
  [beaster]   2> 76082 INFO  (qtp499216332-629) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 No uncommitted changes. Skipping IW.commit.
  [beaster]   2> 76082 INFO  
(updateExecutor-143-thread-1-processing-n:127.0.0.1:42370_solr 
x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
[n:127.0.0.1:42370_solr c:cdcr-target s:shard1 r:core_node2 
x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler what the fuck is 
happening :: 
Thread[updateExecutor-143-thread-1-processing-n:127.0.0.1:42370_solr 
x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target 
r:core_node2,5,TGRP-CdcrBootstrapTest]
  [beaster]   2> 76083 INFO  
(updateExecutor-143-thread-1-processing-n:127.0.0.1:42370_solr 
x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
[n:127.0.0.1:42370_solr c:cdcr-target s:shard1 r:core_node2 
x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler what' the lock this 
time :: true :: thread :: org.apache.solr.handler.CdcrRequestHandler@259eac66
  [beaster]   2> 76083 INFO  
(updateExecutor-143-thread-1-processing-n:127.0.0.1:42370_solr 
x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
[n:127.0.0.1:42370_solr c:cdcr-target s:shard1 r:core_node2 
x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler we reached this 
point :: BOOTSTRAP will go on, locked :: true
  [beaster]   2> 76083 INFO  (qtp499216332-628) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.c.S.Request [cdcr-target_shard1_replica_n1]  webapp=/solr path=/cdcr 
params={qt=/cdcr&action=BOOTSTRAP_STATUS&wt=javabin&version=2} status=0 QTime=0
  [beaster]   2> 76082 INFO  (qtp499216332-629) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.u.DirectUpdateHandler2 end_commit_flush
  [beaster]   2> 76083 INFO  (qtp499216332-629) [n:127.0.0.1:42370_solr 
c:cdcr-target s:shard1 r:core_node2 x:cdcr-target_shard1_replica_n1] 
o.a.s.c.S.Request [cdcr-target_shard1_replica_n1]  webapp=/solr path=/update 
params={_stateVer_=cdcr-target:4&waitSearcher=true&commit=true&softCommit=false&wt=javabin&version=2}
 status=0 QTime=2
  [beaster]   2> 76084 WARN  
(cdcr-bootstrap-status-177-thread-1-processing-n:127.0.0.1:39903_solr 
x:cdcr-source_shard1_replica_n1 s:shard1 c:cdcr-source 

[jira] [Updated] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-18 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11278:

Attachment: SOLR-11278.patch

I had an offline discussion with Shalin and Varun and we were able to figure 
out what's wrong in the CDCR bootstrap.

* Since issuing a bootstrap is an asynchronous call, there is a probable race 
condition: after issuing a bootstrap, the manager immediately checks the 
bootstrap status and, if none is found, another bootstrap gets issued.
* This second bootstrap fails to acquire the lock and issues a cancel 
bootstrap.
* Since the bootstrap at the target is now "cancelled", the bootstrap status 
check in CdcrReplicatorManager goes into a tight infinite loop, as the 
"cancelled" condition is not handled.

In the patch, both the "*submitted*" and "*cancelled*" bootstrap status 
conditions, and what to do next in each case, are covered. This eliminates the 
excessive bootstrap calls, and the bootstrap should complete successfully.
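The status handling described above can be sketched roughly as follows (the 
status strings mirror the ones discussed; the returned action names are 
illustrative, not the actual CdcrReplicatorManager code):

```python
# Rough sketch of the bootstrap-status handling described above. Status values
# mirror those discussed ("submitted", "cancelled", "running", ...); the
# returned actions are illustrative placeholders only.
def next_step(status: str) -> str:
    if status == "submitted":
        return "wait"      # accepted but not started yet: poll again
    if status == "running":
        return "wait"      # in progress: poll again
    if status == "cancelled":
        return "reissue"   # re-issue the bootstrap instead of looping forever
    if status == "completed":
        return "done"
    if status == "failed":
        return "retry"
    return "unknown"       # unexpected status: surface it instead of spinning
```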

> CdcrBootstrapTest failing intermittently
> 
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0, 6.6.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: test
> Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, 
> SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, 
> SOLR-11278.patch, test_results
>
>
> {{CdcrBootstrapTest}} is failing when beasted for a significant number of 
> iterations.
> The bootstrapping fails in the test after the first batch is indexed for 
> each {{testmethod}}, which results in a document mismatch ::
> {code}
>   [beaster]   2> 39167 ERROR 
> (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr 
> x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
> [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 
> x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap 
> operation failed
>   [beaster]   2> java.util.concurrent.ExecutionException: 
> java.lang.AssertionError
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   [beaster]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   [beaster]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   [beaster]   2>  at java.lang.Thread.run(Thread.java:748)
>   [beaster]   2> Caused by: java.lang.AssertionError
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>   [beaster]   2>  ... 5 more
> {code}
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Comment Edited] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-18 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16170958#comment-16170958
 ] 

Amrit Sarkar edited comment on SOLR-11278 at 9/19/17 12:58 AM:
---

I had an offline discussion with Shalin and Varun and we were able to figure 
out what's wrong in the CDCR bootstrap.

* Since issuing a bootstrap is an asynchronous call, there is a probable race 
condition: after issuing a bootstrap, the manager immediately checks the 
bootstrap status and, if none is found, another bootstrap gets issued.
* This second bootstrap fails to acquire the lock and issues a cancel 
bootstrap.
* Since the bootstrap at the target is now "cancelled", the bootstrap status 
check in CdcrReplicatorManager goes into a tight infinite loop, as the 
"cancelled" condition is not handled.

In the patch, both the "*submitted*" and "*cancelled*" bootstrap status 
conditions, and what to do next in each case, are covered. This eliminates the 
excessive bootstrap calls, and the bootstrap should complete successfully.


was (Author: sarkaramr...@gmail.com):
I had an offline discussion with Shalin and Varun and we are able to figure out 
what's wrong in the Cdcr Bootstrap.

* since issuing bootstrap is an asynchronous call, there is a probable race 
around condition where after issuing a bootstrap, it immediately checks for 
bootstrap status and if not found, another bootstrap gets issued.
* this 2nd bootstrap fails to acquire lock issues cancel boostrap
* since the bootstrap at target is now "cancelled", the bootstrap status in 
CdcrReplicatorManager goes into  infinite loop rigorous as the condition 
"cancelled" is not handled.

In the patch both "*submitted*" and "*cancelled*" bootstrap status conditions 
and 'what to do next' is covered, which will nullify the extensive bootstrap 
calling and even the bootstrap should complete successfully. 

> CdcrBootstrapTest failing intermittently
> 
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0, 6.6.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: test
> Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, 
> SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, 
> SOLR-11278.patch, test_results
>
>
> {{CdcrBootstrapTest}} is failing when beasted for a significant number of 
> iterations.
> The bootstrapping fails in the test after the first batch is indexed for 
> each {{testmethod}}, which results in a document mismatch ::
> {code}
>   [beaster]   2> 39167 ERROR 
> (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr 
> x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
> [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 
> x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap 
> operation failed
>   [beaster]   2> java.util.concurrent.ExecutionException: 
> java.lang.AssertionError
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   [beaster]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   [beaster]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   [beaster]   2>  at java.lang.Thread.run(Thread.java:748)
>   [beaster]   2> Caused by: java.lang.AssertionError
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>   [beaster]   2>  ... 5 more
> {code}
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.

[jira] [Commented] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172036#comment-16172036
 ] 

Amrit Sarkar commented on SOLR-11278:
-

Thanks Varun,

Successfully beasted {{ant beast -Dtestcase=CdcrBootstrapTest 
-Dbeast.iters=500 -Dtests.iters=3}} against the committed patch.

> CdcrBootstrapTest failing intermittently
> 
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0, 6.6.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: test
> Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, 
> SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, 
> SOLR-11278.patch, test_results
>
>
> {{CdcrBootstrapTest}} is failing when beasted for a significant number of 
> iterations.
> The bootstrapping fails in the test after the first batch is indexed for 
> each {{testmethod}}, which results in a document mismatch ::
> {code}
>   [beaster]   2> 39167 ERROR 
> (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr 
> x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
> [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 
> x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap 
> operation failed
>   [beaster]   2> java.util.concurrent.ExecutionException: 
> java.lang.AssertionError
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   [beaster]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   [beaster]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   [beaster]   2>  at java.lang.Thread.run(Thread.java:748)
>   [beaster]   2> Caused by: java.lang.AssertionError
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>   [beaster]   2>  ... 5 more
> {code}
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Updated] (SOLR-11373) Logging Lucene's info stream is turned off in default log4j.properties

2017-09-19 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11373:

Attachment: SOLR-11373.patch

Patch attached:

{code}
modified:   solr/example/resources/log4j.properties
{code}
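The fix amounts to changing the level of the LoggingInfoStream logger in 
log4j.properties so that it matches its own comment, roughly (exact line per 
the shipped log4j.properties; OFF becomes INFO):

```properties
# set to INFO to enable infostream log messages
log4j.logger.org.apache.solr.update.LoggingInfoStream=INFO
```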

> Logging Lucene's info stream is turned off in default log4j.properties
> --
>
> Key: SOLR-11373
> URL: https://issues.apache.org/jira/browse/SOLR-11373
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: logging
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
> Fix For: master (8.0), 7.1
>
> Attachments: SOLR-11373.patch
>
>
> The log4j.properties turns off logging for infoStream instead of setting it 
> to INFO. There's even a comment saying:
> {code}
> # set to INFO to enable infostream log messages
> {code}
> Due to this bug, even if you enable infoStream in solrconfig.xml, infoStream 
> isn't logged unless you also change log4j.properties.
> We should match the config in log4j.properties to the comment and then people 
> can use the solrconfig.xml to enable infoStream.






[jira] [Created] (SOLR-11379) Config API to switch on/off lucene's logging infoStream

2017-09-19 Thread Amrit Sarkar (JIRA)
Amrit Sarkar created SOLR-11379:
---

 Summary: Config API to switch on/off lucene's logging infoStream 
 Key: SOLR-11379
 URL: https://issues.apache.org/jira/browse/SOLR-11379
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Amrit Sarkar
Priority: Minor


To enable infoStream logging in Solr, you currently need to edit 
solrconfig.xml and reload the core.

We intend to introduce a Config API command to enable/disable infoStream 
logging in the near future.






[jira] [Commented] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs

2017-09-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172562#comment-16172562
 ] 

Amrit Sarkar commented on SOLR-8389:


Shalin,

I will give it a go to get this done. Will post patch and more information once 
I have something of substance.

- Amrit

> Convert CDCR peer cluster and other configurations into collection properties 
> modifiable via APIs
> -
>
> Key: SOLR-8389
> URL: https://issues.apache.org/jira/browse/SOLR-8389
> Project: Solr
>  Issue Type: Improvement
>  Components: CDCR, SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.0
>
>
> CDCR configuration is kept inside solrconfig.xml which makes it difficult to 
> add or change peer cluster configuration.
> I propose to move all CDCR config to collection level properties in cluster 
> state so that they can be modified using the existing modify collection API.






[jira] [Comment Edited] (SOLR-8389) Convert CDCR peer cluster and other configurations into collection properties modifiable via APIs

2017-09-19 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-8389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16172562#comment-16172562
 ] 

Amrit Sarkar edited comment on SOLR-8389 at 9/20/17 12:55 AM:
--

Shalin,

I will give it a go to get this done. Will post patch and more information once 
I have something of substance.


was (Author: sarkaramr...@gmail.com):
Shalin,

I will give it a go to get this done. Will post patch and more information once 
I have something of substance.

- Amrit

> Convert CDCR peer cluster and other configurations into collection properties 
> modifiable via APIs
> -
>
> Key: SOLR-8389
> URL: https://issues.apache.org/jira/browse/SOLR-8389
> Project: Solr
>  Issue Type: Improvement
>  Components: CDCR, SolrCloud
>Reporter: Shalin Shekhar Mangar
> Fix For: 6.0
>
>
> CDCR configuration is kept inside solrconfig.xml which makes it difficult to 
> add or change peer cluster configuration.
> I propose to move all CDCR config to collection level properties in cluster 
> state so that they can be modified using the existing modify collection API.






[jira] [Updated] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-20 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11278:

Attachment: SOLR-11278.patch

Uploading another patch which blocks any other bootstrap call between the time 
one is submitted and the time it starts executing. Used a CountDownLatch 
internally in the function.
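The latch idea can be illustrated with a small Python analogue 
(threading.Event standing in for Java's CountDownLatch; the class and status 
names are illustrative, not the actual handler code):

```python
import threading

# Python analogue of the CountDownLatch fix described above: the submitter
# blocks until the bootstrap task has actually started, so a status check
# issued right after submission sees "running" rather than "notfound".
class BootstrapRunner:
    def __init__(self):
        self._started = threading.Event()  # "counted down" once the task starts
        self.status = "notfound"

    def submit(self):
        threading.Thread(target=self._run).start()
        self._started.wait()               # block until the task flips status

    def _run(self):
        self.status = "running"            # real code would start the bootstrap
        self._started.set()

runner = BootstrapRunner()
runner.submit()
print(runner.status)  # prints "running"
```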

> CdcrBootstrapTest failing intermittently
> 
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0, 6.6.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: test
> Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, 
> SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, 
> SOLR-11278.patch, SOLR-11278.patch, test_results
>
>
> {{CdcrBootstrapTest}} is failing when beasted for a significant number of 
> iterations.
> The bootstrapping fails in the test after the first batch is indexed for 
> each {{testmethod}}, which results in a document mismatch ::
> {code}
>   [beaster]   2> 39167 ERROR 
> (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr 
> x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
> [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 
> x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap 
> operation failed
>   [beaster]   2> java.util.concurrent.ExecutionException: 
> java.lang.AssertionError
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   [beaster]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   [beaster]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   [beaster]   2>  at java.lang.Thread.run(Thread.java:748)
>   [beaster]   2> Caused by: java.lang.AssertionError
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>   [beaster]   2>  ... 5 more
> {code}
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Comment Edited] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174015#comment-16174015
 ] 

Amrit Sarkar edited comment on SOLR-11278 at 9/20/17 11:49 PM:
---

Uploading another patch which makes sure that, if we request the bootstrap 
status right after submitting a bootstrap, we get the correct status: RUNNING. 
Used a CountDownLatch internally in the function.


was (Author: sarkaramr...@gmail.com):
Uploading another patch which blocks any other bootstrap call b/w one is 
submitted and executed. Used CountDownLatch internally in the function.

> CdcrBootstrapTest failing intermittently
> 
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0, 6.6.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: test
> Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, 
> SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, 
> SOLR-11278.patch, SOLR-11278.patch, test_results
>
>
> {{CdcrBootstrapTest}} is failing while running beasts for significant 
> iterations.
> The bootstrapping is failing in the test, after the first batch is indexed 
> for each {{testmethod}}, which results in documents mismatch ::
> {code}
>   [beaster]   2> 39167 ERROR 
> (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr 
> x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
> [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 
> x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap 
> operation failed
>   [beaster]   2> java.util.concurrent.ExecutionException: 
> java.lang.AssertionError
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   [beaster]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   [beaster]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   [beaster]   2>  at java.lang.Thread.run(Thread.java:748)
>   [beaster]   2> Caused by: java.lang.AssertionError
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>   [beaster]   2>  ... 5 more
> {code}
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-09-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174051#comment-16174051
 ] 

Amrit Sarkar commented on SOLR-11003:
-

Ok!

{{CdcrBidirectionalTest}} is failing miserably every now and then while we do 
beast tests. I see:
{code}
o.a.s.h.CdcrReplicator Forwarded 496 updates to target cdcr-cluster1
  [beaster]   2> 19147 ERROR 
(cdcr-replicator-31-thread-1-processing-n:127.0.0.1:46505_solr 
x:cdcr-cluster1_shard1_replica_n1 s:shard1 c:cdcr-cluster1 r:core_node2) 
[n:127.0.0.1:46505_solr c:cdcr-cluster1 s:shard1 r:core_node2 
x:cdcr-cluster1_shard1_replica_n1] o.a.s.c.u.ExecutorUtil Uncaught exception 
java.lang.AssertionError thrown by thread: 
cdcr-replicator-31-thread-1-processing-n:127.0.0.1:46505_solr 
x:cdcr-cluster1_shard1_replica_n1 s:shard1 c:cdcr-cluster1 r:core_node2
  [beaster]   2> java.lang.Exception: Submitter stack trace
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:163)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$start$1(CdcrReplicatorScheduler.java:76)
  [beaster]   2>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
  [beaster]   2>at 
java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
  [beaster]   2>at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  [beaster]   2>at java.lang.Thread.run(Thread.java:748)
  [beaster]   2> 19155 INFO  (qtp620825517-65) [n:127.0.0.1:46505_solr 
c:cdcr-cluster1 s:shard1 r:core_node2 x:cdcr-cluster1_shard1_replica_n1] 
o.a.s.c.S.Request [cdcr-cluster1_shard1_replica_n1]  webapp=/solr path=/update 
params={_stateVer_=cdcr-cluster1:5&cdcr.update=&wt=javabin&version=2} status=0 
QTime=23
  [beaster]   2> 19156 INFO  
(cdcr-replicator-35-thread-1-processing-n:127.0.0.1:46044_solr 
x:cdcr-cluster2_shard1_replica_n1 s:shard1 c:cdcr-cluster2 r:core_node2) 
[n:127.0.0.1:46044_solr c:cdcr-cluster2 s:shard1 r:core_node2 
x:cdcr-cluster2_shard1_replica_n1] o.a.s.h.CdcrReplicator Forwarded 495 updates 
to target cdcr-cluster1
  [beaster]   2> Sht 21, 2017 6:02:10 PD 
com.carrotsearch.randomizedtesting.RandomizedRunner$QueueUncaughtExceptionsHandler
 uncaughtException
  [beaster]   2> WARNING: Uncaught exception in thread: 
Thread[cdcr-replicator-31-thread-1,5,TGRP-CdcrBidirectionalTest]
  [beaster]   2> java.lang.AssertionError
  [beaster]   2>at 
__randomizedtesting.SeedInfo.seed([AE4E9FB83368594B]:0)
  [beaster]   2>at 
org.apache.solr.update.TransactionLog$LogReader.next(TransactionLog.java:588)
  [beaster]   2>at 
org.apache.solr.update.CdcrTransactionLog$CdcrLogReader.next(CdcrTransactionLog.java:143)
  [beaster]   2>at 
org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.next(CdcrUpdateLog.java:633)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:77)
  [beaster]   2>at 
org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
  [beaster]   2>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  [beaster]   2>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  [beaster]   2>at java.lang.Thread.run(Thread.java:748)
  [beaster]   2> 
{code}
There seems to be some concurrency issue in the tlogs, possibly with the tlog 
reader positions.

This results in:
{code}
 [beaster]   2> NOTE: reproduce with: ant test  
-Dtestcase=CdcrBidirectionalTest -Dtests.method=testBiDir 
-Dtests.seed=AE4E9FB83368594B -Dtests.slow=true -Dtests.locale=sq-AL 
-Dtests.timezone=Asia/Thimphu -Dtests.asserts=true 
-Dtests.file.encoding=ANSI_X3.4-1968
  [beaster] [00:01:51.287] ERROR   24.8s | CdcrBidirectionalTest.testBiDir <<<
  [beaster]> Throwable #1: java.lang.AssertionError: cluster 2 docs 
mismatch expected:<0> but was:<2>
  [beaster]>at org.junit.Assert.fail(Assert.java:93)
  [beaster]>at org.junit.Assert.failNotEquals(Assert.java:647)
  [beaster]>at org.junit.Assert.assertEquals(Assert.java:128)
  [beaster]>at org.junit.Assert.assertEquals(Assert.java:472)
  [beaster]>at 
org.apache.solr.cloud.CdcrBidirectionalTest.testBiDir(CdcrBidirectionalTest.java:199)
  [beaster]>

[jira] [Commented] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-09-20 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16174066#comment-16174066
 ] 

Amrit Sarkar commented on SOLR-11003:
-

The test failures are not reproducible with the attached seeds; I am adding 
extensive logging and trying to understand what can be done next.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into source collection and the 
> updates gets forwarded to the passive one and vice-versa is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are try to get a  design ready to index in both collections and the 
> updates gets reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case would be to we keep indexing in ClusterACollectionA which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets 
> down, we point the indexer and searcher application to ClusterBCollectionB. 
> Once ClusterACollectionA is up, depending on updates count, they will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
> keep indexing on the ClusterBCollectionB.






[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR active-active clusters

2017-09-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Attachment: SOLR-11003.patch

The problem was with the refactoring of the code, where the common portions of 
TLog and CdcrTLog were taken into a Utils class.

I have reverted back to the old code for now, without the utils, and will figure 
out how to refactor later. Yes, there is repetitive code, but I think that's 
necessary considering we are about to put an extra entry in for cdcr updates.

{code}
modified:   
solr/core/src/java/org/apache/solr/handler/CdcrReplicator.java
modified:   
solr/core/src/java/org/apache/solr/update/CdcrTransactionLog.java
modified:   
solr/core/src/java/org/apache/solr/update/TransactionLog.java
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/schema.xml
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster1/conf/solrconfig.xml
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/schema.xml
new file:   
solr/core/src/test-files/solr/configsets/cdcr-cluster2/conf/solrconfig.xml
new file:   
solr/core/src/test/org/apache/solr/cloud/CdcrBidirectionalTest.java
{code}

Beast runs of 100 rounds passed successfully.

> Enabling bi-directional CDCR active-active clusters
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into source collection and the 
> updates gets forwarded to the passive one and vice-versa is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are try to get a  design ready to index in both collections and the 
> updates gets reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case would be to we keep indexing in ClusterACollectionA which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets 
> down, we point the indexer and searcher application to ClusterBCollectionB. 
> Once ClusterACollectionA is up, depending on updates count, they will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
> keep indexing on the ClusterBCollectionB.






[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR on cluster for better failover

2017-09-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Summary: Enabling bi-directional CDCR on cluster for better failover  (was: 
Enabling bi-directional CDCR active-active clusters)

> Enabling bi-directional CDCR on cluster for better failover
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into source collection and the 
> updates gets forwarded to the passive one and vice-versa is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are try to get a  design ready to index in both collections and the 
> updates gets reflected across the collections in real-time. 
> ClusterACollectionA => ClusterBCollectionB | ClusterBCollectionB => 
> ClusterACollectionA.
> The best use-case would be to we keep indexing in ClusterACollectionA which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets 
> down, we point the indexer and searcher application to ClusterBCollectionB. 
> Once ClusterACollectionA is up, depending on updates count, they will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
> keep indexing on the ClusterBCollectionB.






[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR on cluster for better failover

2017-09-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Description: 
The latest version of Solr CDCR across collections / clusters is in 
active-passive format, where we can index into source collection and the 
updates gets forwarded to the passive one and vice-versa is not supported.

https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
https://issues.apache.org/jira/browse/SOLR-6273

We are trying to get a design ready to index into both collections and have the 
updates reflected across the collections in near real-time (given the backlog of 
replicating updates to the other data center): ClusterACollectionA => 
ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.

The best use-case would be: we keep indexing in ClusterACollectionA, which 
forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes down, 
we point the indexer and searcher applications to ClusterBCollectionB. Once 
ClusterACollectionA is back up, the updates will, depending on their count, be 
bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, and 
we keep indexing on ClusterBCollectionB.

  was:
The latest version of Solr CDCR across collections / clusters is in 
active-passive format, where we can index into source collection and the 
updates gets forwarded to the passive one and vice-versa is not supported.

https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
https://issues.apache.org/jira/browse/SOLR-6273

We are try to get a  design ready to index in both collections and the updates 
gets reflected across the collections in real-time. ClusterACollectionA => 
ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.

The best use-case would be to we keep indexing in ClusterACollectionA which 
forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets down, 
we point the indexer and searcher application to ClusterBCollectionB. Once 
ClusterACollectionA is up, depending on updates count, they will be 
bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
keep indexing on the ClusterBCollectionB.


> Enabling bi-directional CDCR on cluster for better failover
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into source collection and the 
> updates gets forwarded to the passive one and vice-versa is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are try to get a  design ready to index in both collections and the 
> updates gets reflected across the collections in real-time (given the backlog 
> of replicating updates to other data center). ClusterACollectionA => 
> ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.
> The best use-case would be to we keep indexing in ClusterACollectionA which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets 
> down, we point the indexer and searcher application to ClusterBCollectionB. 
> Once ClusterACollectionA is up, depending on updates count, they will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
> keep indexing on the ClusterBCollectionB.






[jira] [Updated] (SOLR-11003) Enabling bi-directional CDCR on cluster for better failover

2017-09-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11003?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11003:

Description: 
The latest version of Solr CDCR across collections / clusters is in 
active-passive format, where we can index into source collection and the 
updates gets forwarded to the passive one and vice-versa is not supported.

https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
https://issues.apache.org/jira/browse/SOLR-6273

We are trying to get a design ready to index into both collections and have the 
updates reflected across the collections in near real-time (given the backlog of 
replicating updates to the other data center): ClusterACollectionA => 
ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.

The STRONGLY RECOMMENDED way is to keep indexing in ClusterACollectionA, which 
forwards the updates to ClusterBCollectionB. If ClusterACollectionA goes down, 
we point the indexer and searcher applications to ClusterBCollectionB. Once 
ClusterACollectionA is back up, the updates will, depending on their count, be 
bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB, and 
we keep indexing on ClusterBCollectionB.

  was:
The latest version of Solr CDCR across collections / clusters is in 
active-passive format, where we can index into source collection and the 
updates gets forwarded to the passive one and vice-versa is not supported.

https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
https://issues.apache.org/jira/browse/SOLR-6273

We are try to get a  design ready to index in both collections and the updates 
gets reflected across the collections in real-time (given the backlog of 
replicating updates to other data center). ClusterACollectionA => 
ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.

The best use-case would be to we keep indexing in ClusterACollectionA which 
forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets down, 
we point the indexer and searcher application to ClusterBCollectionB. Once 
ClusterACollectionA is up, depending on updates count, they will be 
bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
keep indexing on the ClusterBCollectionB.


> Enabling bi-directional CDCR on cluster for better failover
> ---
>
> Key: SOLR-11003
> URL: https://issues.apache.org/jira/browse/SOLR-11003
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
> Attachments: sample-configs.zip, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, SOLR-11003.patch, 
> SOLR-11003-tlogutils.patch
>
>
> The latest version of Solr CDCR across collections / clusters is in 
> active-passive format, where we can index into source collection and the 
> updates gets forwarded to the passive one and vice-versa is not supported.
> https://lucene.apache.org/solr/guide/6_6/cross-data-center-replication-cdcr.html
> https://issues.apache.org/jira/browse/SOLR-6273
> We are try to get a  design ready to index in both collections and the 
> updates gets reflected across the collections in real-time (given the backlog 
> of replicating updates to other data center). ClusterACollectionA => 
> ClusterBCollectionB | ClusterBCollectionB => ClusterACollectionA.
> The STRONG RECOMMENDED way to keep indexing in ClusterACollectionA which 
> forwards the updates to ClusterBCollectionB. If ClusterACollectionA gets 
> down, we point the indexer and searcher application to ClusterBCollectionB. 
> Once ClusterACollectionA is up, depending on updates count, they will be 
> bootstrapped or forwarded to ClusterACollectionA from ClusterBCollectionB and 
> keep indexing on the ClusterBCollectionB.






[jira] [Updated] (SOLR-11278) CdcrBootstrapTest failing intermittently

2017-09-22 Thread Amrit Sarkar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Amrit Sarkar updated SOLR-11278:

Attachment: SOLR-11278.patch

Slight modifications after an offline discussion with Shalin. Patch uploaded.

> CdcrBootstrapTest failing intermittently
> 
>
> Key: SOLR-11278
> URL: https://issues.apache.org/jira/browse/SOLR-11278
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: CDCR
>Affects Versions: 7.0, 6.6.1
>Reporter: Amrit Sarkar
>Assignee: Varun Thacker
>Priority: Critical
>  Labels: test
> Attachments: master-bs.patch, SOLR-11278-awaits-fix.patch, 
> SOLR-11278-cancel-bootstrap-on-stop.patch, SOLR-11278.patch, 
> SOLR-11278.patch, SOLR-11278.patch, SOLR-11278.patch, test_results
>
>
> {{CdcrBootstrapTest}} is failing while running beasts for significant 
> iterations.
> The bootstrapping is failing in the test, after the first batch is indexed 
> for each {{testmethod}}, which results in documents mismatch ::
> {code}
>   [beaster]   2> 39167 ERROR 
> (updateExecutor-39-thread-1-processing-n:127.0.0.1:42155_solr 
> x:cdcr-target_shard1_replica_n1 s:shard1 c:cdcr-target r:core_node2) 
> [n:127.0.0.1:42155_solr c:cdcr-target s:shard1 r:core_node2 
> x:cdcr-target_shard1_replica_n1] o.a.s.h.CdcrRequestHandler Bootstrap 
> operation failed
>   [beaster]   2> java.util.concurrent.ExecutionException: 
> java.lang.AssertionError
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.report(FutureTask.java:122)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.get(FutureTask.java:192)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler.lambda$handleBootstrapAction$0(CdcrRequestHandler.java:654)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedRunnable.run(InstrumentedExecutorService.java:176)
>   [beaster]   2>  at 
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   [beaster]   2>  at 
> java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   [beaster]   2>  at 
> org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   [beaster]   2>  at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   [beaster]   2>  at java.lang.Thread.run(Thread.java:748)
>   [beaster]   2> Caused by: java.lang.AssertionError
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:813)
>   [beaster]   2>  at 
> org.apache.solr.handler.CdcrRequestHandler$BootstrapCallable.call(CdcrRequestHandler.java:724)
>   [beaster]   2>  at 
> com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
>   [beaster]   2>  ... 5 more
> {code}
> {code}
>   [beaster] [01:37:16.282] FAILURE  153s | 
> CdcrBootstrapTest.testBootstrapWithSourceCluster <<<
>   [beaster]> Throwable #1: java.lang.AssertionError: Document mismatch on 
> target after sync expected:<2000> but was:<1000>
> {code}






[jira] [Commented] (SOLR-10564) NPE in QueryComponent when RTG

2017-10-05 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16193794#comment-16193794
 ] 

Amrit Sarkar commented on SOLR-10564:
-

Ah, ok. [~ysee...@gmail.com], I see the changes in place and this is no longer a 
problem. Thanks for listing the related JIRAs.

> NPE in QueryComponent when RTG
> --
>
> Key: SOLR-10564
> URL: https://issues.apache.org/jira/browse/SOLR-10564
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 6.5
>Reporter: Markus Jelsma
> Fix For: 7.0
>
> Attachments: screenshot-1.png, screenshot-2.png, screenshot-3.png, 
> screenshot-4.png, screenshot-5.png, SOLR-10564.patch, SOLR-10564.patch
>
>
> The following URL:
> {code}
> /get?fl=queries,prob_*,view_score,feedback_score&ids=
> {code}
> Kindly returns the document.
> This once, however:
> {code}
> /select?qt=/get&fl=queries,prob_*,view_score,feedback_score&ids=
> {code}
> throws:
> {code}
> 2017-04-25 10:23:26.222 ERROR (qtp1873653341-28693) [c:documents s:shard1 
> r:core_node3 x:documents_shard1_replica1] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.QueryComponent.unmarshalSortValues(QueryComponent.java:1226)
> at 
> org.apache.solr.handler.component.QueryComponent.mergeIds(QueryComponent.java:1077)
> at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:777)
> at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:756)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:428)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:173)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2440)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:723)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:529)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:347)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:298)
> {code}
> This is thrown when i do it manually, but the error does not appear when Solr 
> issues those same queries under the hood.






[jira] [Commented] (SOLR-11409) A ref guide page on setting up solr on aws

2017-10-07 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11409?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16195655#comment-16195655
 ] 

Amrit Sarkar commented on SOLR-11409:
-

Setting up a single-node Solr on AWS is almost the same as setting it up on a 
local machine: do the initial AWS EC2 instance setup, {{wget solr-X.X.X.tar.gz}} 
and install Java, and that's it. The rest is the same.

[~varunthacker],

What exactly should the aim of this ref guide page be? A multi-node SolrCloud 
setup? If yes: 3 nodes total, 1 node for ZooKeeper and 2 for Solr, plus 
configuring security for the respective nodes?

We can have two sub-pages, single-node and multi-node. Looking forward to your 
thoughts.
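As a rough sketch of what the single-node page could cover (assuming a stock Ubuntu EC2 instance; the Solr version and package names below are illustrative, not a recommendation):

```shell
#!/usr/bin/env bash
# Illustrative single-node Solr setup on a fresh EC2 instance (Ubuntu assumed).
set -euo pipefail

SOLR_VERSION=6.6.1   # example version; substitute the release you need

# 1. Install Java (Solr 6.x requires Java 8 or later)
sudo apt-get update
sudo apt-get install -y openjdk-8-jdk

# 2. Download and extract Solr from the Apache archive
wget "https://archive.apache.org/dist/lucene/solr/${SOLR_VERSION}/solr-${SOLR_VERSION}.tgz"
tar xzf "solr-${SOLR_VERSION}.tgz"

# 3. Start Solr on the default port 8983
cd "solr-${SOLR_VERSION}"
bin/solr start

# 4. Verify the node is up
bin/solr status
```

Remember to open port 8983 in the instance's security group if the Admin UI should be reachable from outside; beyond that, the existing "Taking Solr to Production" page already covers service installation.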

> A ref guide page on setting up solr on aws
> --
>
> Key: SOLR-11409
> URL: https://issues.apache.org/jira/browse/SOLR-11409
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Varun Thacker
>Priority: Minor
>
> It will be nice if we have a dedicated page on installing solr on aws . 
> At the end we could even link to 
> http://lucene.apache.org/solr/guide/6_6/taking-solr-to-production.html





