[jira] [Commented] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Atri Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166148#comment-17166148
 ] 

Atri Sharma commented on SOLR-13528:


Hi Varun,

Thanks for your input. Here are my responses:

 # Yes, the request expiration time is meant as a stable way to expire requests 
in the wait queue, but I agree that a hard limit on the size of the wait queue 
is also needed. I will add it to the PR (see the sketch below).
 # The rate limiting is done in SolrDispatchFilter, which AFAIK should be able 
to handle all failure cases since it is the main request entry point?
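
As a rough illustration of the points above (this is not the PR code), the sketch 
below combines a slot semaphore with a hard cap on the number of waiting requests 
and a per-request wait timeout; the names maxSlots, maxWaitQueueSize and 
waitTimeoutMs are assumptions for the example, not actual Solr configuration keys.

{code:java}
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

/**
 * Minimal sketch of a slot-based limiter with a bounded wait queue and a
 * per-request expiration. Names and structure are illustrative only.
 */
public class BoundedRateLimiter {
  private final Semaphore slots;
  private final int maxWaitQueueSize;
  private final long waitTimeoutMs;
  private final AtomicInteger waiting = new AtomicInteger();

  public BoundedRateLimiter(int maxSlots, int maxWaitQueueSize, long waitTimeoutMs) {
    this.slots = new Semaphore(maxSlots, true);
    this.maxWaitQueueSize = maxWaitQueueSize;
    this.waitTimeoutMs = waitTimeoutMs;
  }

  /** Returns true if the request acquired a slot; false if it should be rejected. */
  public boolean tryAcquire() throws InterruptedException {
    if (slots.tryAcquire()) {
      return true;                    // free slot available, proceed immediately
    }
    if (waiting.incrementAndGet() > maxWaitQueueSize) {
      waiting.decrementAndGet();      // wait queue is full: reject instead of queuing
      return false;
    }
    try {
      // Wait for a slot, but never longer than the expiration time.
      return slots.tryAcquire(waitTimeoutMs, TimeUnit.MILLISECONDS);
    } finally {
      waiting.decrementAndGet();
    }
  }

  public void release() {
    slots.release();
  }
}
{code}

A caller (e.g. a servlet filter) would wrap request handling in 
tryAcquire()/release() and send an error response whenever tryAcquire() returns 
false.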

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Atri Sharma
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Varun Thacker (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166137#comment-17166137
 ] 

Varun Thacker commented on SOLR-13528:
--

At Slack, we ran into this scenario a few times last year: a node in the 
cluster would go bad (maybe a faulty disk). That node would end up queuing 
tens of thousands of outstanding requests because Jetty would keep accepting 
them, which then caused issues in our proxy layer (written in Java and using 
SolrJ) by eating up all of its threads.

I mention this because I read this comment in the code: "If available, acquire 
slot and proceed -- else asynchronously queue the request." So maybe an upper 
bound on the wait queue could be useful? Or is 
`queryRequestExpirationTimeInMS` meant for that?

 

We wrote a super simple SearchHandler to implement this. It might not be 
applicable in the general case, since we controlled both the backend and the 
Solr cluster, which let us simplify our design.

The backend would pass a "max_searches_per_node" value that was set dynamically 
based on the collection / cluster we were running.

The search handler would check whether the request is a top-level request and 
whether that node's concurrent searches are above the threshold. If the request 
is over the limit, we throw a SolrException and the backend can take 
appropriate steps.

Two things we learnt while developing this were:
 * If you throw an exception for a rate limit, make sure that error code isn't 
retried by SolrJ.
 * Prefer a SearchHandler over a QueryComponent - if you write a QueryComponent, 
it's impossible to decrement the counter when the request fails in, say, the 
HighlightComponent.
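
A minimal sketch of the handler check described above, under stated assumptions 
(this is not the actual handler; the "max_searches_per_node" parameter name, the 
per-node counter, and the error code choice are all illustrative):

{code:java}
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.solr.common.SolrException;
import org.apache.solr.common.params.ShardParams;
import org.apache.solr.handler.component.SearchHandler;
import org.apache.solr.request.SolrQueryRequest;
import org.apache.solr.response.SolrQueryResponse;

/** Illustrative only: reject top-level searches once a per-node threshold is hit. */
public class ThrottlingSearchHandler extends SearchHandler {

  private static final AtomicInteger CONCURRENT_SEARCHES = new AtomicInteger();

  @Override
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp) throws Exception {
    // Distributed sub-requests carry isShard=true; only count top-level requests.
    boolean topLevel = !req.getParams().getBool(ShardParams.IS_SHARD, false);
    // "max_searches_per_node" is an assumed parameter name, set by the backend.
    int maxPerNode = req.getParams().getInt("max_searches_per_node", Integer.MAX_VALUE);

    if (!topLevel) {
      super.handleRequestBody(req, rsp);
      return;
    }
    if (CONCURRENT_SEARCHES.incrementAndGet() > maxPerNode) {
      CONCURRENT_SEARCHES.decrementAndGet();
      // Per the first lesson above, pick an error code your SolrJ client does not retry.
      throw new SolrException(SolrException.ErrorCode.SERVICE_UNAVAILABLE,
          "Too many concurrent searches on this node");
    }
    try {
      super.handleRequestBody(req, rsp);
    } finally {
      // Decrementing here covers a failure anywhere in the request, which is why
      // a handler is preferable to a QueryComponent (second lesson above).
      CONCURRENT_SEARCHES.decrementAndGet();
    }
  }
}
{code}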

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Atri Sharma
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Lucene/Solr QA (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166128#comment-17166128
 ] 

Lucene/Solr QA commented on SOLR-11611:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} | {color:red} SOLR-11611 does not apply to master. Rebase required? Wrong Branch? See https://wiki.apache.org/solr/HowToContribute#Creating_the_patch_file for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-11611 |
| JIRA Patch URL | https://issues.apache.org/jira/secure/attachment/13008523/SOLR-11611.patch |
| Console output | https://builds.apache.org/job/PreCommit-SOLR-Build/786/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Blocker
> Fix For: 8.6.1
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Houston Putman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166055#comment-17166055
 ] 

Houston Putman commented on SOLR-11611:
---

Yeah, this should make it in if we can verify it soon.

Thanks for finding this and contributing a fix, Jakob!

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Blocker
> Fix For: 8.6.1
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11611:

Priority: Blocker  (was: Major)

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Blocker
> Fix For: 8.6.1
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-11611:

Fix Version/s: (was: 8.1)
   (was: master (9.0))
   8.6.1

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.6.1
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166051#comment-17166051
 ] 

Ishan Chattopadhyaya commented on SOLR-11611:
-

[~uschindler] would it be possible to review this, please?

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166050#comment-17166050
 ] 

Ishan Chattopadhyaya commented on SOLR-11611:
-

If confirmed, this looks like a serious issue and the patch attached *looks* 
good to me (haven't tested it, though). [~houstonputman] can we have this for 
8.6.1?

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-11611:

Attachment: SOLR-11611.patch

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14671) NumberFormatException when accessing ZK Status page in 8.6.0

2020-07-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166033#comment-17166033
 ] 

Jan Høydahl commented on SOLR-14671:


https://github.com/apache/lucene-solr/pull/1701

> NumberFormatException when accessing ZK Status page in 8.6.0
> 
>
> Key: SOLR-14671
> URL: https://issues.apache.org/jira/browse/SOLR-14671
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Henrik
>Assignee: Jan Høydahl
>Priority: Major
>
> The Admin -> Cloud -> ZK Status page in Solr 8.6.0 does not show any data 
> except for the error message "null".  I get the following exception in 
> solr.log:
> {code:java}
> ERROR [20200721T081301,835] qtp478489615-21 
> org.apache.solr.handler.RequestHandlerBase - java.lang.NumberFormatException: 
> null
> at java.base/java.lang.Integer.parseInt(Integer.java:620)
> at java.base/java.lang.Integer.parseInt(Integer.java:776)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig$Server.parseLine(ZkDynamicConfig.java:142)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig.lambda$parseLines$0(ZkDynamicConfig.java:58)
> at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> at 
> java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig.parseLines(ZkDynamicConfig.java:53)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at org.eclipse.jetty.server.Server.handle(Server.java:500)
> at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
> at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
> at 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
> 

[jira] [Assigned] (SOLR-14671) NumberFormatException when accessing ZK Status page in 8.6.0

2020-07-27 Thread Jira


 [ 
https://issues.apache.org/jira/browse/SOLR-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl reassigned SOLR-14671:
--

Assignee: Jan Høydahl

> NumberFormatException when accessing ZK Status page in 8.6.0
> 
>
> Key: SOLR-14671
> URL: https://issues.apache.org/jira/browse/SOLR-14671
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Henrik
>Assignee: Jan Høydahl
>Priority: Major
>
> The Admin -> Cloud -> ZK Status page in Solr 8.6.0 does not show any data 
> except for the error message "null".  I get the following exception in 
> solr.log:
> {code:java}
> ERROR [20200721T081301,835] qtp478489615-21 
> org.apache.solr.handler.RequestHandlerBase - java.lang.NumberFormatException: 
> null
> at java.base/java.lang.Integer.parseInt(Integer.java:620)
> at java.base/java.lang.Integer.parseInt(Integer.java:776)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig$Server.parseLine(ZkDynamicConfig.java:142)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig.lambda$parseLines$0(ZkDynamicConfig.java:58)
> at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> at 
> java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig.parseLines(ZkDynamicConfig.java:53)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at org.eclipse.jetty.server.Server.handle(Server.java:500)
> at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
> at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:273)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:311)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:103)
> at 
> org.eclipse.jetty.io.ChannelEndPoint$2.run(ChannelEndPoint.java:117)
> at 
> org.eclipse.jetty.util.thread.strategy.EatWhatYouKill.runTask(E

[jira] [Commented] (SOLR-14671) NumberFormatException when accessing ZK Status page in 8.6.0

2020-07-27 Thread Jira


[ 
https://issues.apache.org/jira/browse/SOLR-14671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166032#comment-17166032
 ] 

Jan Høydahl commented on SOLR-14671:


This is different from the bug in SOLR-14463, even though it is also about 
parsing integers whose value may be missing from ZK. In this case the bug was 
introduced in SOLR-14371, where we assume that the dynamic config always 
contains clientPort. But it does not, so we should be careful when parsing it 
as an integer and perhaps return null instead. Attaching a PR.
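
For illustration only (this is neither the attached PR nor ZkDynamicConfig's 
real parsing code), a lenient parse along these lines would return null instead 
of throwing when the clientPort suffix is absent; the regex is an assumption 
about the "host:port:port[;clientPort]" shape of a dynamic config server line:

{code:java}
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: extract an optional clientPort from a ZK dynamic config server line. */
public class LenientPortParsing {

  // e.g. "server.1=zoo1:2888:3888;2181" -- the ";<clientPort>" suffix is optional
  private static final Pattern CLIENT_PORT = Pattern.compile(";(\\d+)\\s*$");

  /** Returns the client port, or null when the line does not carry one. */
  public static Integer parseClientPort(String dynamicConfigLine) {
    Matcher m = CLIENT_PORT.matcher(dynamicConfigLine);
    return m.find() ? Integer.valueOf(m.group(1)) : null;
  }

  public static void main(String[] args) {
    System.out.println(parseClientPort("server.1=zoo1:2888:3888;2181")); // 2181
    System.out.println(parseClientPort("server.1=zoo1:2888:3888"));      // null
  }
}
{code}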

> NumberFormatException when accessing ZK Status page in 8.6.0
> 
>
> Key: SOLR-14671
> URL: https://issues.apache.org/jira/browse/SOLR-14671
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Admin UI, SolrCloud
>Affects Versions: 8.6
>Reporter: Henrik
>Priority: Major
>
> The Admin -> Cloud -> ZK Status page in Solr 8.6.0 does not show any data 
> except for the error message "null".  I get the following exception in 
> solr.log:
> {code:java}
> ERROR [20200721T081301,835] qtp478489615-21 
> org.apache.solr.handler.RequestHandlerBase - java.lang.NumberFormatException: 
> null
> at java.base/java.lang.Integer.parseInt(Integer.java:620)
> at java.base/java.lang.Integer.parseInt(Integer.java:776)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig$Server.parseLine(ZkDynamicConfig.java:142)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig.lambda$parseLines$0(ZkDynamicConfig.java:58)
> at java.base/java.util.Iterator.forEachRemaining(Iterator.java:133)
> at 
> java.base/java.util.Spliterators$IteratorSpliterator.forEachRemaining(Spliterators.java:1801)
> at 
> java.base/java.util.stream.ReferencePipeline$Head.forEach(ReferencePipeline.java:658)
> at 
> org.apache.solr.common.cloud.ZkDynamicConfig.parseLines(ZkDynamicConfig.java:53)
> at 
> org.apache.solr.handler.admin.ZookeeperStatusHandler.handleRequestBody(ZookeeperStatusHandler.java:83)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:214)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdmin(HttpSolrCall.java:854)
> at 
> org.apache.solr.servlet.HttpSolrCall.handleAdminRequest(HttpSolrCall.java:818)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:566)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:415)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1596)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:545)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:590)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:235)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:1610)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextHandle(ScopedHandler.java:233)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1300)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:188)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:485)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:1580)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.nextScope(ScopedHandler.java:186)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1215)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:221)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:146)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:127)
> at org.eclipse.jetty.server.Server.handle(Server.java:500)
> at 
> org.eclipse.jetty.server.HttpChannel.lambda$handle$1(HttpChannel.java:383)
> at org.eclipse.jetty.server.HttpChannel.dispatch(HttpChannel.java:547)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:375)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:27

[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-11611:

Attachment: (was: SOLR-11611.patch)

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-11611:

Status: Patch Available  (was: Open)

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166023#comment-17166023
 ] 

Jakob Furrer commented on SOLR-11611:
-

Please review and commit the patch I created based on the 8.6.0 branch.
[^SOLR-11611.patch]

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-11611:

Attachment: SOLR-11611.patch

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch, SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-9321) Port documentation task to gradle

2020-07-27 Thread Tomoko Uchida (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomoko Uchida resolved LUCENE-9321.
---
Resolution: Resolved

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17166014#comment-17166014
 ] 

ASF subversion and git services commented on LUCENE-9321:
-

Commit 5d46361024bc414e61aee1b36dcb3edd570695dc in lucene-solr's branch 
refs/heads/master from Tomoko Uchida
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5d46361 ]

LUCENE-9321: Fix offline link base url for snapshot build (#1695)



> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-11611:

Attachment: SOLR-11611.patch

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
> Attachments: SOLR-11611.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-11611) Starting Solr using solr.cmd fails under Windows, when the path contains a space

2020-07-27 Thread Jakob Furrer (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-11611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jakob Furrer updated SOLR-11611:

Summary: Starting Solr using solr.cmd fails under Windows, when the path 
contains a space  (was: Starting Solr using solr.cmd fails in Windows, when the 
path contains a parenthesis)

> Starting Solr using solr.cmd fails under Windows, when the path contains a 
> space
> 
>
> Key: SOLR-11611
> URL: https://issues.apache.org/jira/browse/SOLR-11611
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCLI
>Affects Versions: 7.1, 7.4
> Environment: Microsoft Windows [Version 10.0.15063]
> java version "1.8.0_144"
> Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
>Reporter: Jakob Furrer
>Priority: Major
> Fix For: 8.1, master (9.0)
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Starting Solr using solr.cli fails in Windows, when the path contains spaces.
> Use the following example to reproduce the error:
> {quote}C:\>c:
> C:\>cd "C:\Program Files (x86)\Company Name\ProductName Solr\bin"
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>dir
>  Volume in Laufwerk C: hat keine Bezeichnung.
>  Volumeseriennummer: 8207-3B8B
>  Verzeichnis von C:\Program Files (x86)\Company Name\ProductName Solr\bin
> 06.11.2017  15:52  .
> 06.11.2017  15:52  ..
> 06.11.2017  15:39  init.d
> 03.11.2017  17:32 8 209 post
> 03.11.2017  17:3275 963 solr
> 06.11.2017  14:2469 407 solr.cmd
>3 Datei(en),153 579 Bytes
>3 Verzeichnis(se), 51 191 619 584 Bytes frei
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>solr.cmd start
> *"\Company" kann syntaktisch an dieser Stelle nicht verarbeitet werden.*
> C:\Program Files (x86)\Company Name\ProductName Solr\bin>{quote}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165960#comment-17165960
 ] 

ASF subversion and git services commented on SOLR-12845:


Commit 70b879b03022c38a9e230d024290a05c66d3e9c3 in lucene-solr's branch 
refs/heads/branch_8_6 from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=70b879b ]

SOLR-12845: Adding upgrade notes for the reversion of default autoscaling.


> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screen Shot 
> 2020-07-27 at 10.57.33 AM.png, Screenshot from 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collections replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-13169) Move Replica Docs need improvement (V1 and V2 introspect)

2020-07-27 Thread Gus Heck (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gus Heck resolved SOLR-13169.
-
Fix Version/s: 8.6
   Resolution: Fixed

> Move Replica Docs need improvement (V1 and V2 introspect)
> -
>
> Key: SOLR-13169
> URL: https://issues.apache.org/jira/browse/SOLR-13169
> Project: Solr
>  Issue Type: Improvement
>  Components: v2 API
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-13169.patch, screenshot-1.png, testing.txt
>
>
> At a minimum required parameters should be noted equally in both places. 
> Conversation with [~ab] indicates that there are also some discrepancies in 
> what is and is not actually required in docs vs code. ("in MoveReplicaCmd if 
> you specify “replica” then “shard” is completely ignored")
> Also in v2 it seems shard might be inferred from the URL and in that case 
> it's not clear if the URL or the json takes precedence.
> From introspect:
> {code:java}
> "move-replica": {
> "type": "object",
> "documentation": 
> "https://lucene.apache.org/solr/guide/collections-api.html#movereplica";,
> "description": "This command moves a replica from one 
> node to a new node. In case of shared filesystems the `dataDir` and `ulogDir` 
> may be reused.",
> "properties": {
> "replica": {
> "type": "string",
> "description": "The name of the replica"
> },
> "shard": {
> "type": "string",
> "description": "The name of the shard"
> },
> "sourceNode": {
> "type": "string",
> "description": "The name of the node that 
> contains the replica."
> },
> "targetNode": {
> "type": "string",
> "description": "The name of the destination node. 
> This parameter is required."
> },
> "waitForFinalState": {
> "type": "boolean",
> "default": "false",
> "description": "Wait for the moved replica to 
> become active."
> },
> "timeout": {
> "type": "integer",
> "default": 600,
> "description": "Timeout to wait for replica to 
> become active. For very large replicas this may need to be increased."
> },
> "inPlaceMove": {
> "type": "boolean",
> "default": "true",
> "description": "For replicas that use shared 
> filesystems allow 'in-place' move that reuses shared data."
> }
> {code}
> From ref guide for V1:
> MOVEREPLICA Parameters
> collection
> The name of the collection. This parameter is required.
> shard
> The name of the shard that the replica belongs to. This parameter is required.
> replica
> The name of the replica. This parameter is required.
> sourceNode
> The name of the node that contains the replica. This parameter is required.
> targetNode
> The name of the destination node. This parameter is required.
> async
> Request ID to track this action which will be processed asynchronously.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13169) Move Replica Docs need improvement (V1 and V2 introspect)

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165957#comment-17165957
 ] 

ASF subversion and git services commented on SOLR-13169:


Commit 9390e03ad5b4271fac3dd392cf8df59fe04c8bf0 in lucene-solr's branch 
refs/heads/branch_8x from Gus Heck
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=9390e03 ]

SOLR-13169 Improve docs for MOVEREPLICA (#1698)

(cherry picked from commit b00d747eb6a94ab5775258b032e621f998ec44ba)
(cherry picked from commit 396490b65ca1af6ff1f1157a9896c9528c234eea)

> Move Replica Docs need improvement (V1 and V2 introspect)
> -
>
> Key: SOLR-13169
> URL: https://issues.apache.org/jira/browse/SOLR-13169
> Project: Solr
>  Issue Type: Improvement
>  Components: v2 API
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13169.patch, screenshot-1.png, testing.txt
>
>
> At a minimum required parameters should be noted equally in both places. 
> Conversation with [~ab] indicates that there are also some discrepancies in 
> what is and is not actually required in docs vs code. ("in MoveReplicaCmd if 
> you specify “replica” then “shard” is completely ignored")
> Also in v2 it seems shard might be inferred from the URL and in that case 
> it's not clear if the URL or the json takes precedence.
> From introspect:
> {code:java}
> "move-replica": {
> "type": "object",
> "documentation": 
> "https://lucene.apache.org/solr/guide/collections-api.html#movereplica";,
> "description": "This command moves a replica from one 
> node to a new node. In case of shared filesystems the `dataDir` and `ulogDir` 
> may be reused.",
> "properties": {
> "replica": {
> "type": "string",
> "description": "The name of the replica"
> },
> "shard": {
> "type": "string",
> "description": "The name of the shard"
> },
> "sourceNode": {
> "type": "string",
> "description": "The name of the node that 
> contains the replica."
> },
> "targetNode": {
> "type": "string",
> "description": "The name of the destination node. 
> This parameter is required."
> },
> "waitForFinalState": {
> "type": "boolean",
> "default": "false",
> "description": "Wait for the moved replica to 
> become active."
> },
> "timeout": {
> "type": "integer",
> "default": 600,
> "description": "Timeout to wait for replica to 
> become active. For very large replicas this may need to be increased."
> },
> "inPlaceMove": {
> "type": "boolean",
> "default": "true",
> "description": "For replicas that use shared 
> filesystems allow 'in-place' move that reuses shared data."
> }
> {code}
> From ref guide for V1:
> MOVEREPLICA Parameters
> collection
> The name of the collection. This parameter is required.
> shard
> The name of the shard that the replica belongs to. This parameter is required.
> replica
> The name of the replica. This parameter is required.
> sourceNode
> The name of the node that contains the replica. This parameter is required.
> targetNode
> The name of the destination node. This parameter is required.
> async
> Request ID to track this action which will be processed asynchronously.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13169) Move Replica Docs need improvement (V1 and V2 introspect)

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165956#comment-17165956
 ] 

ASF subversion and git services commented on SOLR-13169:


Commit 7b2c868ddcec2fe899fe2502dfacd3b702dd5550 in lucene-solr's branch 
refs/heads/branch_8_6 from Gus Heck
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7b2c868 ]

SOLR-13169 Improve docs for MOVEREPLICA (#1699)

(cherry picked from commit b00d747eb6a94ab5775258b032e621f998ec44ba)

(cherry picked from commit 396490b65ca1af6ff1f1157a9896c9528c234eea)

> Move Replica Docs need improvement (V1 and V2 introspect)
> -
>
> Key: SOLR-13169
> URL: https://issues.apache.org/jira/browse/SOLR-13169
> Project: Solr
>  Issue Type: Improvement
>  Components: v2 API
>Reporter: Gus Heck
>Assignee: Gus Heck
>Priority: Major
> Attachments: SOLR-13169.patch, screenshot-1.png, testing.txt
>
>
> At a minimum required parameters should be noted equally in both places. 
> Conversation with [~ab] indicates that there are also some discrepancies in 
> what is and is not actually required in docs vs code. ("in MoveReplicaCmd if 
> you specify “replica” then “shard” is completely ignored")
> Also in v2 it seems shard might be inferred from the URL and in that case 
> it's not clear if the URL or the json takes precedence.
> From introspect:
> {code:java}
> "move-replica": {
> "type": "object",
> "documentation": 
> "https://lucene.apache.org/solr/guide/collections-api.html#movereplica";,
> "description": "This command moves a replica from one 
> node to a new node. In case of shared filesystems the `dataDir` and `ulogDir` 
> may be reused.",
> "properties": {
> "replica": {
> "type": "string",
> "description": "The name of the replica"
> },
> "shard": {
> "type": "string",
> "description": "The name of the shard"
> },
> "sourceNode": {
> "type": "string",
> "description": "The name of the node that 
> contains the replica."
> },
> "targetNode": {
> "type": "string",
> "description": "The name of the destination node. 
> This parameter is required."
> },
> "waitForFinalState": {
> "type": "boolean",
> "default": "false",
> "description": "Wait for the moved replica to 
> become active."
> },
> "timeout": {
> "type": "integer",
> "default": 600,
> "description": "Timeout to wait for replica to 
> become active. For very large replicas this may need to be increased."
> },
> "inPlaceMove": {
> "type": "boolean",
> "default": "true",
> "description": "For replicas that use shared 
> filesystems allow 'in-place' move that reuses shared data."
> }
> {code}
> From ref guide for V1:
> MOVEREPLICA Parameters
> collection
> The name of the collection. This parameter is required.
> shard
> The name of the shard that the replica belongs to. This parameter is required.
> replica
> The name of the replica. This parameter is required.
> sourceNode
> The name of the node that contains the replica. This parameter is required.
> targetNode
> The name of the destination node. This parameter is required.
> async
> Request ID to track this action which will be processed asynchronously.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-07-27 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165946#comment-17165946
 ] 

Mark Robert Miller commented on SOLR-14636:
---

Tests have been upgraded to {color:#00875a}*extremely stable*{color}.

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely 
> stable*{color} with *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
> stable*{color} with {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} 
> with {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-07-27 Thread Mark Robert Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Robert Miller updated SOLR-14636:
--
Description: 
SolrCloud powers critical infrastructure and needs the ability to run quickly 
with stability. This reference implementation will allow for this.

*location*: [https://github.com/apache/lucene-solr/tree/reference_impl]

*status*: alpha

*speed*: ludicrous

*tests***:
 * *core*: {color:#00875a}*extremely stable*{color} with 
*{color:#de350b}ignores{color}*
 * *solrj*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}
 * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
 * *contrib/analysis-extras*: *extremely stable* with 
{color:#de350b}*ignores*{color}
 * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/dataimporthandler*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/dataimporthandler-extras*: {color:#00875a}*extremely stable*{color} 
with *{color:#de350b}ignores{color}*
 * *contrib/extraction*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/jaegertracer-configurator*: {color:#00875a}*extremely 
stable*{color} with {color:#de350b}*ignores*{color}
 * *contrib/langid*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/prometheus-exporter*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/velocity*: {color:#00875a}*extremely stable*{color} with 
{color:#de350b}*ignores*{color}

_* Running tests quickly and efficiently with strict policing will more 
frequently find bugs and requires a period of hardening._
 _** Non Nightly currently, Nightly comes last._

  was:
SolrCloud powers critical infrastructure and needs the ability to run quickly 
with stability. This reference implementation will allow for this.

*location*: [https://github.com/apache/lucene-solr/tree/reference_impl]

*status*: alpha

*speed*: ludicrous

*tests***:
 * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
 * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
 * *test-framework*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/analytics*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/clustering*: {color:#00875a}*solid*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/dataimporthandler*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/dataimporthandler-extras*: {color:#00875a}*solid*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/extraction*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/jaegertracer-configurator*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/langid*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/prometheus-exporter*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/velocity*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}

_* Running tests quickly and efficiently with strict policing will more 
frequently find bugs and requires a period of hardening._
 _** Non Nightly currently, Nightly comes last._


> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *test-framework*: *extremely stable* with {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: *extremely stable* with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*extremely stable*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*extremely stable*{color} with 
> *{color:#de350b}ignores{color}*
>  * 

[jira] [Commented] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165935#comment-17165935
 ] 

Ishan Chattopadhyaya commented on SOLR-13528:
-

This week I'm busy with some personal work. I shall be able to take a look at 
the PR by the end of this week.


> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Atri Sharma
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14673) Add command line tool for executing Streaming Expressions

2020-07-27 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14673:
--
Attachment: SOLR-14673.patch

> Add command line tool for executing Streaming Expressions
> -
>
> Key: SOLR-14673
> URL: https://issues.apache.org/jira/browse/SOLR-14673
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-14673.patch, SOLR-14673.patch, SOLR-14673.patch, 
> SOLR-14673.patch
>
>
> This ticket will provide a simple command line tool that will run a Streaming 
> Expression from the command line and return the results as a delimited result 
> set. This will allow Streaming Expressions to be used from the command line 
> to extract data as well as load data into Solr. 
> Sample syntax:
> {code:java}
> bin/expr expr_file{code}
> This will run the expression in _expr_file_.
> Output will be to standard out as delimited records.  
>  
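To make the proposed workflow concrete, a hypothetical expression file and invocation could look like this (collection and field names are made up; the search expression is ordinary Streaming Expression syntax, while bin/expr is the tool this ticket proposes):

{code:java}
# contents of expr_file: stream id and price_f from the "logs" collection via /export
search(logs, q="*:*", fl="id,price_f", sort="id asc", qt="/export")
{code}

Running bin/expr expr_file > logs.tsv would then write the delimited records to a file via standard out.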



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14673) Add command line tool for executing Streaming Expressions

2020-07-27 Thread Joel Bernstein (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-14673:
--
Attachment: SOLR-14673.patch

> Add command line tool for executing Streaming Expressions
> -
>
> Key: SOLR-14673
> URL: https://issues.apache.org/jira/browse/SOLR-14673
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
>Priority: Major
> Attachments: SOLR-14673.patch, SOLR-14673.patch, SOLR-14673.patch
>
>
> This ticket will provide a simple command line tool that will run a Streaming 
> Expression from the command line and return the results as a delimited result 
> set. This will allow Streaming Expressions to be used from the command line 
> to extract data as well as load data into Solr. 
> Sample syntax:
> {code:java}
> bin/expr expr_file{code}
> This will run the expression in _expr_file_.
> Output will be to standard out as delimited records.  
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Assigned] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya reassigned SOLR-13528:
---

Assignee: Atri Sharma

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Assignee: Atri Sharma
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165910#comment-17165910
 ] 

Ishan Chattopadhyaya commented on SOLR-14636:
-

On a Raspberry Pi 4B (1.5 GHz quad core), I see the following times:
{code:java}
   [junit4] JVM J0: 2.92 ..   452.00 =   449.08s
   [junit4] JVM J1: 2.94 ..   451.80 =   448.86s
   [junit4] JVM J2: 3.02 ..   451.99 =   448.97s
   [junit4] Execution time total: 7 minutes 32 seconds
   [junit4] Tests summary: 914 suites (239 ignored), 3718 tests, 939 ignored 
(841 assumptions)
   [junit4] Could not remove temporary path: 
/home/pi/lucene-solr/solr/build/solr-core/test/J2 
(java.nio.file.DirectoryNotEmptyException: Remaining files: 
[/home/pi/lucene-solr/solr/build/solr-core/test/J2/temp])
 [echo] 5 slowest tests:
[junit4:tophints]  19.13s | org.apache.solr.legacy.TestNumericRangeQuery64
[junit4:tophints]  17.23s | org.apache.solr.core.snapshots.TestSolrCoreSnapshots
[junit4:tophints]  14.66s | org.apache.solr.TestSimpleTrackingShardHandler
[junit4:tophints]  13.34s | org.apache.solr.search.function.TestFunctionQuery
[junit4:tophints]  11.85s | org.apache.solr.uninverting.TestNumericTerms32
-check-totals:
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by 
org.codehaus.groovy.reflection.CachedClass 
(file:/home/pi/.ivy2/cache/org.codehaus.groovy/groovy-all/jars/groovy-all-2.4.17.jar)
 to method java.lang.Object.finalize()
WARNING: Please consider reporting this to the maintainers of 
org.codehaus.groovy.reflection.CachedClass
WARNING: Use --illegal-access=warn to enable warnings of further illegal 
reflective access operations
WARNING: All illegal access operations will be denied in a future release
common.test:
BUILD SUCCESSFUL
Total time: 8 minutes 34 seconds

 {code}

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
>  * *test-framework*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165907#comment-17165907
 ] 

Ishan Chattopadhyaya edited comment on SOLR-13528 at 7/27/20, 6:39 PM:
---

{quote}Should I be continuing work on that?
{quote}
Absolutely! I don't see the existence of the reference branch as a reason for 
discontinuing the good work here towards a feature that Solr needs. We can have 
further discussions on how to take the reference impl forward at a later point 
when it is more stable.

Having said that, it might be worth looking into what approach has been taken 
there for the QOS filter and see if there's something we want to do here on the 
same/similar lines.


was (Author: ichattopadhyaya):
{quote}Should I be continuing work on that?
{quote}
Absolutely! I don't see the existence of the reference branch as a reason for 
discontinuing the good work here towards a feature that Solr needs. We can have 
further discussions on how to take the reference impl forward at a later point 
when it is more stable.

Having said that, it might be worth looking into what approach has been taken 
there for the QOS filter and see if there's something we need to do here on the 
same/similar lines.

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165907#comment-17165907
 ] 

Ishan Chattopadhyaya edited comment on SOLR-13528 at 7/27/20, 6:39 PM:
---

{quote}Should I be continuing work on that?
{quote}
Absolutely! I don't see the existence of the reference branch as a reason for 
discontinuing the good work here towards a feature that Solr needs. We can have 
further discussions on how to take the reference impl forward at a later point 
when it is more stable.

Having said that, it might be worth looking into what approach has been taken 
there for the QOS filter and see if there's something we need to do here on the 
same/similar lines.


was (Author: ichattopadhyaya):
{quote}Should I be continuing work on that?
{quote}
Absolutely! I don't see the existence of the reference branch as a reason for 
discontinuing the good work here towards a feature that Solr needs. We can have 
further discussions on how to take the reference impl forward at a later point 
when it is more stable.

Having said that, it might be worth looking into what approach has been taken 
there for the QOS filter and see if there's something we can do here on the 
same/similar lines.

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165907#comment-17165907
 ] 

Ishan Chattopadhyaya commented on SOLR-13528:
-

{quote}Should I be continuing work on that?
{quote}
Absolutely! I don't see the existence of the reference branch as a reason for 
discontinuing the good work here towards a feature that Solr needs. We can have 
further discussions on how to take the reference impl forward at a later point 
when it is more stable.

Having said that, it might be worth looking into what approach has been taken 
there for the QOS filter and see if there's something we can do here on the 
same/similar lines.

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-07-27 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165902#comment-17165902
 ] 

David Smiley commented on SOLR-14636:
-

{quote}I will be moving development to a dev branch and then promote commits to 
the main branch over time
{quote}
Can you please elaborate on what that means?  I'm hoping "promote commits to 
the main branch" means specifically promote pieces of this work to get to 
master branch (thus 9.0) under their own Jira issues.  In some rare cases there 
are _existing_ Jira issues that are suitable like SOLR-13528 in which I 
at-mentioned you today.  I've also at-mentioned you on SOLR-14651 on the PR 
side, which is related to some of your commits here.

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
>  * *test-framework*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread Atri Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165899#comment-17165899
 ] 

Atri Sharma commented on SOLR-13528:


[~dsmiley] Not sure if you got a chance to look at the PR I have been working 
on for this feature (#1686). Should I be continuing work on that?

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-07-27 Thread Mark Robert Miller (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165895#comment-17165895
 ] 

Mark Robert Miller commented on SOLR-14636:
---

These tests are starting to get extremely stable, so before long I will be 
moving development to a dev branch and then promote commits to the main branch 
over time.

> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
>  * *test-framework*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/dataimporthandler*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/dataimporthandler-extras*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/extraction*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/jaegertracer-configurator*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/langid*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/prometheus-exporter*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/velocity*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
> _* Running tests quickly and efficiently with strict policing will more 
> frequently find bugs and requires a period of hardening._
>  _** Non Nightly currently, Nightly comes last._



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-13528) Rate limiting in Solr

2020-07-27 Thread David Smiley (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-13528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165893#comment-17165893
 ] 

David Smiley commented on SOLR-13528:
-

[~markrmil...@gmail.com] [~markrmiller] (dunno which you want to use these 
days): You've done some exciting work on your reference_impl branch related to 
rate limiting.  If I recall correctly, the approach is thread-local access to a 
thread-pool per request that is used pervasively.  I love the idea!  Can you 
please share information about that here?  I suspect that work could be 
extracted from the big reference_impl branch.
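For readers unfamiliar with the pattern being described, here is a generic sketch of a thread-local handle to a shared bounded pool. This is not the code from the reference_impl branch, just one common way such a mechanism is wired up, with all names invented:

{code:java}
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public final class PerRequestExecutor {

  // One bounded pool shared by all requests; its size caps the total concurrent work.
  private static final ExecutorService SHARED_POOL = Executors.newFixedThreadPool(16);

  // Every request thread reaches the same pool through a ThreadLocal handle, so code
  // anywhere on the request path can submit work without plumbing the pool through APIs.
  private static final ThreadLocal<ExecutorService> CURRENT =
      ThreadLocal.withInitial(() -> SHARED_POOL);

  private PerRequestExecutor() {}

  public static <T> Future<T> submit(Callable<T> task) {
    return CURRENT.get().submit(task);
  }
}
{code}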

> Rate limiting in Solr
> -
>
> Key: SOLR-13528
> URL: https://issues.apache.org/jira/browse/SOLR-13528
> Project: Solr
>  Issue Type: New Feature
>Reporter: Anshum Gupta
>Priority: Major
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> In relation to SOLR-13527, Solr also needs a way to throttle update and 
> search requests based on usage metrics. This is the umbrella JIRA for both 
> update and search rate limiting.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values

2020-07-27 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165855#comment-17165855
 ] 

Andrzej Bialecki commented on SOLR-14683:
-

We should also consider the potential back-compat aspect should we decide to 
change some of the returned values.

> Review the metrics API to ensure consistent placeholders for missing values
> ---
>
> Key: SOLR-14683
> URL: https://issues.apache.org/jira/browse/SOLR-14683
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki
>Assignee: Andrzej Bialecki
>Priority: Major
>
> Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an 
> unknown state at some points in time, eg. during SolrCore startup or shutdown.
> Currently the API returns placeholders with either impossible values for 
> numeric gauges (such as index size -1) or empty maps / strings for other 
> non-numeric gauges.
> [~hossman] noticed that the values for these placeholders may be misleading, 
> depending on how the user treats them - if the client has no special logic to 
> treat them as "missing values" it may erroneously treat them as valid data. 
> E.g. numeric values of -1 or 0 may severely skew averages and produce 
> misleading peaks / valleys in metrics histories.
> On the other hand returning a literal {{null}} value instead of the expected 
> number may also cause unexpected client issues - although in this case it's 
> clearer that there's actually no data available, so long-term this may be a 
> better strategy than returning impossible values, even if it means that the 
> client should learn to handle {{null}} values appropriately.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Created] (SOLR-14683) Review the metrics API to ensure consistent placeholders for missing values

2020-07-27 Thread Andrzej Bialecki (Jira)
Andrzej Bialecki created SOLR-14683:
---

 Summary: Review the metrics API to ensure consistent placeholders 
for missing values
 Key: SOLR-14683
 URL: https://issues.apache.org/jira/browse/SOLR-14683
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: metrics
Reporter: Andrzej Bialecki
Assignee: Andrzej Bialecki


Spin-off from SOLR-14657. Some gauges can legitimately be missing or in an 
unknown state at some points in time, eg. during SolrCore startup or shutdown.

Currently the API returns placeholders with either impossible values for 
numeric gauges (such as index size -1) or empty maps / strings for other 
non-numeric gauges.

[~hossman] noticed that the values for these placeholders may be misleading, 
depending on how the user treats them - if the client has no special logic to 
treat them as "missing values" it may erroneously treat them as valid data. 
E.g. numeric values of -1 or 0 may severely skew averages and produce 
misleading peaks / valleys in metrics histories.

On the other hand returning a literal {{null}} value instead of the expected 
number may also cause unexpected client issues - although in this case it's 
clearer that there's actually no data available, so long-term this may be a 
better strategy than returning impossible values, even if it means that the 
client should learn to handle {{null}} values appropriately.
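A minimal client-side sketch of the distinction being discussed (a hypothetical helper, not part of any Solr API) treats both a literal null and today's "impossible" sentinel as missing data instead of folding them into averages:

{code:java}
import java.util.OptionalLong;

public final class MetricValues {

  private MetricValues() {}

  // Interpret a raw gauge value: a negative number (e.g. the -1 index-size placeholder)
  // or a null / non-numeric placeholder is reported as "no data" rather than a real value.
  public static OptionalLong indexSizeBytes(Object raw) {
    if (raw instanceof Number) {
      long v = ((Number) raw).longValue();
      return v < 0 ? OptionalLong.empty() : OptionalLong.of(v);
    }
    return OptionalLong.empty();
  }
}
{code}

A metrics-history consumer would then skip empty values when computing averages, which avoids the skewed peaks and valleys mentioned above.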



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9321) Port documentation task to gradle

2020-07-27 Thread Uwe Schindler (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9321?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165827#comment-17165827
 ] 

Uwe Schindler commented on LUCENE-9321:
---

Looks fine. Thanks!

> Port documentation task to gradle
> -
>
> Key: LUCENE-9321
> URL: https://issues.apache.org/jira/browse/LUCENE-9321
> Project: Lucene - Core
>  Issue Type: Sub-task
>  Components: general/build
>Reporter: Tomoko Uchida
>Assignee: Uwe Schindler
>Priority: Major
> Fix For: master (9.0)
>
> Attachments: screenshot-1.png
>
>  Time Spent: 9.5h
>  Remaining Estimate: 0h
>
> This is a placeholder issue for porting ant "documentation" task to gradle. 
> The generated documents should be able to be published on lucene.apache.org 
> web site on "as-is" basis.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14656) Deprecate current autoscaling framework, remove from master

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165802#comment-17165802
 ] 

Ishan Chattopadhyaya commented on SOLR-14656:
-

[~ab] would you be able to take a stab at the deprecation in 8.7, please?

> Deprecate current autoscaling framework, remove from master
> ---
>
> Key: SOLR-14656
> URL: https://issues.apache.org/jira/browse/SOLR-14656
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: Screenshot from 2020-07-18 07-49-01.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The autoscaling framework is being re-designed in SOLR-14613 (SIP: 
> https://cwiki.apache.org/confluence/display/SOLR/SIP-8+Autoscaling+policy+engine+V2).
> The current autoscaling framework is very inefficient, improperly designed 
> and too bloated and doesn't receive the level of support we aspire to provide 
> for all components that we ship.
> This issue is to deprecate current autoscaling framework in 8x, so we can 
> focus on the new autoscaling framework afresh.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14656) Deprecate current autoscaling framework, remove from master

2020-07-27 Thread Ishan Chattopadhyaya (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165802#comment-17165802
 ] 

Ishan Chattopadhyaya edited comment on SOLR-14656 at 7/27/20, 3:32 PM:
---

[~ab] would you be able to take a stab at the deprecation in 8.7, please?


was (Author: ichattopadhyaya):
[~ab]would be able to take a stand at the deprecation in 8.7, please?

> Deprecate current autoscaling framework, remove from master
> ---
>
> Key: SOLR-14656
> URL: https://issues.apache.org/jira/browse/SOLR-14656
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Major
> Attachments: Screenshot from 2020-07-18 07-49-01.png
>
>  Time Spent: 3h
>  Remaining Estimate: 0h
>
> The autoscaling framework is being re-designed in SOLR-14613 (SIP: 
> https://cwiki.apache.org/confluence/display/SOLR/SIP-8+Autoscaling+policy+engine+V2).
> The current autoscaling framework is very inefficient, improperly designed 
> and too bloated and doesn't receive the level of support we aspire to provide 
> for all components that we ship.
> This issue is to deprecate current autoscaling framework in 8x, so we can 
> focus on the new autoscaling framework afresh.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-9424) Have a warning comment for AttributeSource.captureState

2020-07-27 Thread Michael McCandless (Jira)


 [ 
https://issues.apache.org/jira/browse/LUCENE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless resolved LUCENE-9424.

Fix Version/s: 8.7
   master (9.0)
   Resolution: Fixed

Thank you [~zhai7631]!

> Have a warning comment for AttributeSource.captureState
> ---
>
> Key: LUCENE-9424
> URL: https://issues.apache.org/jira/browse/LUCENE-9424
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/javadocs
>Reporter: Haoyu Zhai
>Priority: Trivial
> Fix For: master (9.0), 8.7
>
> Attachments: LUCENE-9424.patch
>
>
> {{AttributeSource.captureState}} is a powerful method that can be used to 
> store and (later on) restore the current state, but it comes with a cost of 
> copying all attributes in this source and sometimes can be a big cost if 
> called multiple times.
> We could probably add a warning to indicate this cost, as this method is 
> encapsulated quite well and sometimes people who use it won't be aware of the 
> cost.
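To illustrate the cost the new javadoc warning is about, here is a minimal TokenFilter sketch (a made-up class, not from the attached patch) that emits every token twice; it calls captureState() once per original token, since each call clones every attribute on the stream:

{code:java}
import java.io.IOException;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.util.AttributeSource;

public final class RepeatTokenFilter extends TokenFilter {

  // Captured copy of the previous token's attributes, replayed on the next call.
  private AttributeSource.State pending;

  public RepeatTokenFilter(TokenStream input) {
    super(input);
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (pending != null) {
      restoreState(pending); // cheap: copies the saved values back into the attributes
      pending = null;
      return true;
    }
    if (!input.incrementToken()) {
      return false;
    }
    // captureState() copies *all* attributes of this stream; that is the cost the
    // warning describes, so it is done exactly once per token we intend to replay.
    pending = captureState();
    return true;
  }

  @Override
  public void reset() throws IOException {
    super.reset();
    pending = null;
  }
}
{code}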



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9424) Have a warning comment for AttributeSource.captureState

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165782#comment-17165782
 ] 

ASF subversion and git services commented on LUCENE-9424:
-

Commit 295b5afcb3af4e65d4ce46291501712b97172dbf in lucene-solr's branch 
refs/heads/branch_8x from Michael McCandless
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=295b5af ]

LUCENE-9424: add a performance warning to AttributeSource.captureState javadocs


> Have a warning comment for AttributeSource.captureState
> ---
>
> Key: LUCENE-9424
> URL: https://issues.apache.org/jira/browse/LUCENE-9424
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/javadocs
>Reporter: Haoyu Zhai
>Priority: Trivial
> Attachments: LUCENE-9424.patch
>
>
> {{AttributeSource.captureState}} is a powerful method that can be used to 
> store and (later on) restore the current state, but it comes with a cost of 
> copying all attributes in this source and sometimes can be a big cost if 
> called multiple times.
> We could probably add a warning to indicate this cost, as this method is 
> encapsulated quite well and sometimes people who use it won't be aware of the 
> cost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9424) Have a warning comment for AttributeSource.captureState

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165781#comment-17165781
 ] 

ASF subversion and git services commented on LUCENE-9424:
-

Commit e4c2be98fa2ffdeacaa1b2566aacd662de067601 in lucene-solr's branch 
refs/heads/master from Michael McCandless
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=e4c2be9 ]

LUCENE-9424: add a performance warning to AttributeSource.captureState javadocs


> Have a warning comment for AttributeSource.captureState
> ---
>
> Key: LUCENE-9424
> URL: https://issues.apache.org/jira/browse/LUCENE-9424
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: general/javadocs
>Reporter: Haoyu Zhai
>Priority: Trivial
> Attachments: LUCENE-9424.patch
>
>
> {{AttributeSource.captureState}} is a powerful method that can be used to 
> store and (later on) restore the current state, but it comes with a cost of 
> copying all attributes in this source and sometimes can be a big cost if 
> called multiple times.
> We could probably add a warning to indicate this cost, as this method is 
> encapsulated quite well and sometimes people who use it won't be aware of the 
> cost.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-14682) Remove repairs put in for upgrading Solr from 6.6.1 to Solr 7.1

2020-07-27 Thread Erick Erickson (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson resolved SOLR-14682.
---
Resolution: Invalid

Never mind, Ilan Ginzburg did this in trunk/9.0 as part of SOLR-12823. Come to 
think of it, it does reference clusterstate.json...

 

Sorry for the noise.

> Remove repairs put in for upgrading Solr from 6.6.1 to Solr 7.1
> ---
>
> Key: SOLR-14682
> URL: https://issues.apache.org/jira/browse/SOLR-14682
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
>
> This was the bit where we couldn't upgrade from 6.6.1 to 7.1 due to 
> coreNodeName missing from core.properties that's long outlived its usefulness 
> at this point.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165767#comment-17165767
 ] 

ASF subversion and git services commented on SOLR-12845:


Commit 1a7ca4e2300538b1f577bfaf0e40ec55e48600a9 in lucene-solr's branch 
refs/heads/branch_8_6 from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1a7ca4e ]

Revert "SOLR-12845: Properly clear default policy between tests."

To fix the regressions found in  SOLR-14665.

This reverts commit 789c97be5fb66b61210cff9dafb89daabec9fe39.


> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screen Shot 
> 2020-07-27 at 10.57.33 AM.png, Screenshot from 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collections replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}
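If one wanted to apply these rules by hand today, the cluster policy could be set through the autoscaling API roughly as follows (localhost URL assumed; please verify the endpoint and syntax against the ref guide for your version rather than taking this sketch as authoritative):

{code:java}
curl -X POST -H 'Content-type: application/json' \
  http://localhost:8983/api/cluster/autoscaling -d '{
    "set-cluster-policy": [
      {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict": false},
      {"replica": "#EQUAL", "node": "#ANY", "strict": false},
      {"cores": "#EQUAL", "node": "#ANY", "strict": false}
    ]
  }'
{code}

The ticket itself is about shipping these rules as the default, so no such call would be needed once the defaults are in place.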



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14665) Collection creation is progressively slower

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165768#comment-17165768
 ] 

ASF subversion and git services commented on SOLR-14665:


Commit 1a7ca4e2300538b1f577bfaf0e40ec55e48600a9 in lucene-solr's branch 
refs/heads/branch_8_6 from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=1a7ca4e ]

Revert "SOLR-12845: Properly clear default policy between tests."

To fix the regressions found in  SOLR-14665.

This reverts commit 789c97be5fb66b61210cff9dafb89daabec9fe39.


> Collection creation is progressively slower
> ---
>
> Key: SOLR-14665
> URL: https://issues.apache.org/jira/browse/SOLR-14665
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: 8.7
>
> Attachments: Screenshot from 2020-07-18 07-49-01.png
>
>
> Plain and simple collection creation (or even shard splits etc.) get 
> progressively slower as more and more collections are in the system. The 
> culprit is the autoscaling framework (possibly some unnecessary policy 
> computation), even when *no autoscaling is being used whatsoever*.
>  
> Here is how bad the situation is:
> https://issues.apache.org/jira/browse/SOLR-14656?focusedCommentId=17160311&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17160311
>  
> !Screenshot from 2020-07-18 07-49-01.png!
>  
> Btw, even when using createnodeset parameter, there is still a gradual 
> slowdown (but less steep than the graph here).
>  
> This is a matter of grave concern for anyone running a Solr cluster with more 
> than a few hundred collections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14665) Collection creation is progressively slower

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14665?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165770#comment-17165770
 ] 

ASF subversion and git services commented on SOLR-14665:


Commit 7c8427907300256e7b3bcf6835e43986a712ffac in lucene-solr's branch 
refs/heads/branch_8_6 from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7c84279 ]

Revert "SOLR-12845: Add a default autoscaling cluster policy."

To fix the regressions found in  SOLR-14665.

This reverts commit 8e0eae260a0c38fa03e2eaf682d0db9d8b0b6374.


> Collection creation is progressively slower
> ---
>
> Key: SOLR-14665
> URL: https://issues.apache.org/jira/browse/SOLR-14665
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
>Priority: Blocker
> Fix For: 8.7
>
> Attachments: Screenshot from 2020-07-18 07-49-01.png
>
>
> Plain and simple collection creation (or even shard splits etc.) get 
> progressively slower as more and more collections are in the system. The 
> culprit is the autoscaling framework (possibly some unnecessary policy 
> computation), even when *no autoscaling is being used whatsoever*.
>  
> Here is how bad the situation is:
> https://issues.apache.org/jira/browse/SOLR-14656?focusedCommentId=17160311&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-17160311
>  
> !Screenshot from 2020-07-18 07-49-01.png!
>  
> Btw, even when using createnodeset parameter, there is still a gradual 
> slowdown (but less steep than the graph here).
>  
> This is a matter of grave concern for anyone running a Solr cluster with more 
> than a few hundred collections.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165769#comment-17165769
 ] 

ASF subversion and git services commented on SOLR-12845:


Commit 7c8427907300256e7b3bcf6835e43986a712ffac in lucene-solr's branch 
refs/heads/branch_8_6 from Houston Putman
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=7c84279 ]

Revert "SOLR-12845: Add a default autoscaling cluster policy."

To fix the regressions found in  SOLR-14665.

This reverts commit 8e0eae260a0c38fa03e2eaf682d0db9d8b0b6374.


> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screen Shot 
> 2020-07-27 at 10.57.33 AM.png, Screenshot from 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collections replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread Houston Putman (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165763#comment-17165763
 ] 

Houston Putman commented on SOLR-12845:
---

Varun,

Here is my very crude attempt at plotting my results. Basically everything was 
below a second until it blew up at the end, probably because it ran out of 
resources.

!Screen Shot 2020-07-27 at 10.57.33 AM.png|width=400,height=324!

 

Andrzej, sounds good. I shall do the revert on branch_8_6 then.

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screen Shot 
> 2020-07-27 at 10.57.33 AM.png, Screenshot from 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread Houston Putman (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Houston Putman updated SOLR-12845:
--
Attachment: Screen Shot 2020-07-27 at 10.57.33 AM.png

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screen Shot 
> 2020-07-27 at 10.57.33 AM.png, Screenshot from 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14608) Faster sorting for the /export handler

2020-07-27 Thread Atri Sharma (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165753#comment-17165753
 ] 

Atri Sharma commented on SOLR-14608:


FWIW I have a use case with around 252 billion documents that is planning to 
use this feature – so am excited as well :)

> Faster sorting for the /export handler
> --
>
> Key: SOLR-14608
> URL: https://issues.apache.org/jira/browse/SOLR-14608
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Andrzej Bialecki
>Priority: Major
>
> The largest cost of the export handler is the sorting. This ticket will 
> implement an improved algorithm for sorting that should greatly increase 
> overall throughput for the export handler.
> *The current algorithm is as follows:*
> Collect a bitset of matching docs. Iterate over that bitset and materialize 
> the top level ordinals for the sort fields in the document and add them to a 
> priority queue of size 30,000. Then export the top 30,000 docs, turn off the 
> bits in the bit set and iterate again until all docs are sorted and sent. 
> There are two performance bottlenecks with this approach:
> 1) Materializing the top level ordinals adds a huge amount of overhead to the 
> sorting process.
> 2) The size of priority queue, 30,000, adds significant overhead to sorting 
> operations.
> *The new algorithm:*
> Has a top level *merge sort iterator* that wraps segment level iterators that 
> perform segment level priority queue sorts.
> *Segment level:*
> The segment level docset will be iterated and the segment level ordinals for 
> the sort fields will be materialized and added to a segment level priority 
> queue. As the segment level iterator pops docs from the priority queue the 
> top level ordinals for the sort fields are materialized. Because the top 
> level ordinals are materialized AFTER the sort, they only need to be looked 
> up when the segment level ordinal changes. This takes advantage of the sort 
> to limit the lookups into the top level ordinal structures. This also 
> eliminates redundant lookups of top level ordinals that occur during the 
> multiple passes over the matching docset.
> The segment level priority queues can be kept smaller than 30,000 to improve 
> performance of the sorting operations because the overall batch size will 
> still be 30,000 or greater when all the segment priority queue sizes are 
> added up. This allows for batch sizes much larger than 30,000 without using a 
> single large priority queue. The increased batch size means fewer iterations 
> over the matching docset and the decreased priority queue size means faster 
> sorting operations.
> *Top level:*
> A top level iterator does a merge sort over the segment level iterators by 
> comparing the top level ordinals materialized when the segment level docs are 
> popped from the segment level priority queues. This requires no extra memory 
> and will be very performant.
>  
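
For readers trying to picture the design, here is a minimal, self-contained Java sketch of the segment-level / top-level split described above. It is not the actual ExportWriter code: the SegEntry and SegCursor classes and the toTopLevelOrd() stand-in for Lucene's OrdinalMap are invented purely for illustration.

{code}
import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

public class ExportMergeSortSketch {

  // An entry as it would come out of a per-segment priority queue: already
  // sorted by the segment-local ordinal of the sort field.
  static final class SegEntry {
    final int segment, segmentOrd, docId;
    SegEntry(int segment, int segmentOrd, int docId) {
      this.segment = segment; this.segmentOrd = segmentOrd; this.docId = docId;
    }
  }

  // A cursor over one segment's sorted batch. It remembers the last segment
  // ordinal so the (expensive) top-level lookup happens only when it changes.
  static final class SegCursor {
    final Iterator<SegEntry> it;
    SegEntry current;
    int lastSegmentOrd = -1;
    long topLevelOrd = -1;
    SegCursor(Iterator<SegEntry> it) { this.it = it; advance(); }
    boolean advance() {
      if (!it.hasNext()) { current = null; return false; }
      current = it.next();
      if (current.segmentOrd != lastSegmentOrd) {   // lazy, deduplicated lookup
        lastSegmentOrd = current.segmentOrd;
        topLevelOrd = toTopLevelOrd(current.segment, current.segmentOrd);
      }
      return true;
    }
  }

  // Stand-in for OrdinalMap-style resolution of a segment ordinal to a global
  // one; the numbers are fake but consistent enough for the demo.
  static long toTopLevelOrd(int segment, int segmentOrd) {
    return (long) segmentOrd * 10 + segment;
  }

  public static void main(String[] args) {
    // Two segments, each batch already sorted by segment ordinal.
    List<List<SegEntry>> sortedBatches = List.of(
        List.of(new SegEntry(0, 1, 10), new SegEntry(0, 1, 12), new SegEntry(0, 7, 13)),
        List.of(new SegEntry(1, 2, 3), new SegEntry(1, 5, 7), new SegEntry(1, 5, 9)));

    // Top-level merge over the segment cursors, ordered by top-level ordinal.
    PriorityQueue<SegCursor> merge =
        new PriorityQueue<>(Comparator.comparingLong((SegCursor c) -> c.topLevelOrd));
    for (List<SegEntry> batch : sortedBatches) merge.add(new SegCursor(batch.iterator()));

    List<Integer> exported = new ArrayList<>();
    while (!merge.isEmpty()) {
      SegCursor c = merge.poll();        // globally smallest entry
      exported.add(c.current.docId);
      if (c.advance()) merge.add(c);     // re-insert with its next entry
    }
    System.out.println(exported);        // docs in global sort order
  }
}
{code}

The merge itself needs no memory beyond one cursor per segment, which is the property the description claims for the top level.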



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14608) Faster sorting for the /export handler

2020-07-27 Thread Joel Bernstein (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165750#comment-17165750
 ] 

Joel Bernstein commented on SOLR-14608:
---

[~gus], we're going to move slowly with this and try to ensure this launches 
without breaking things. Let's see if we can simulate your scenario in a test 
case before this gets committed.

> Faster sorting for the /export handler
> --
>
> Key: SOLR-14608
> URL: https://issues.apache.org/jira/browse/SOLR-14608
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Andrzej Bialecki
>Priority: Major
>
> The largest cost of the export handler is the sorting. This ticket will 
> implement an improved algorithm for sorting that should greatly increase 
> overall throughput for the export handler.
> *The current algorithm is as follows:*
> Collect a bitset of matching docs. Iterate over that bitset and materialize 
> the top level ordinals for the sort fields in the document and add them to a 
> priority queue of size 30,000. Then export the top 30,000 docs, turn off the 
> bits in the bit set and iterate again until all docs are sorted and sent. 
> There are two performance bottlenecks with this approach:
> 1) Materializing the top level ordinals adds a huge amount of overhead to the 
> sorting process.
> 2) The size of priority queue, 30,000, adds significant overhead to sorting 
> operations.
> *The new algorithm:*
> Has a top level *merge sort iterator* that wraps segment level iterators that 
> perform segment level priority queue sorts.
> *Segment level:*
> The segment level docset will be iterated and the segment level ordinals for 
> the sort fields will be materialized and added to a segment level priority 
> queue. As the segment level iterator pops docs from the priority queue the 
> top level ordinals for the sort fields are materialized. Because the top 
> level ordinals are materialized AFTER the sort, they only need to be looked 
> up when the segment level ordinal changes. This takes advantage of the sort 
> to limit the lookups into the top level ordinal structures. This also 
> eliminates redundant lookups of top level ordinals that occur during the 
> multiple passes over the matching docset.
> The segment level priority queues can be kept smaller than 30,000 to improve 
> performance of the sorting operations because the overall batch size will 
> still be 30,000 or greater when all the segment priority queue sizes are 
> added up. This allows for batch sizes much larger than 30,000 without using a 
> single large priority queue. The increased batch size means fewer iterations 
> over the matching docset and the decreased priority queue size means faster 
> sorting operations.
> *Top level:*
> A top level iterator does a merge sort over the segment level iterators by 
> comparing the top level ordinals materialized when the segment level docs are 
> popped from the segment level priority queues. This requires no extra memory 
> and will be very performant.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11868) Deprecate CloudSolrClient.setIdField, use information from Zookeeper

2020-07-27 Thread Erick Erickson (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165731#comment-17165731
 ] 

Erick Erickson commented on SOLR-11868:
---

[~dsmiley] I don't think I want to go there (remove uniqueKey and only allow 
"id"). If we'd required that at the start it'd be one thing, but at this point 
there are a lot of installations out there that use something different. It 
just feels like too much difficulty for not enough gain.

So I'm not going to raise a Jira for it. I have no idea why there's another 
push message here with an identical SHA.

> Deprecate CloudSolrClient.setIdField, use information from Zookeeper
> 
>
> Key: SOLR-11868
> URL: https://issues.apache.org/jira/browse/SOLR-11868
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.7
>
>
> IIUC idField has nothing to do with the uniqueKey field. It's really
> the field used to route documents. Agreed, this is often the "id"
> field, but still
> In fact, over in UpdateRequest.getRoutes(), it's passed as the "id"
> field to router.getTargetSlice() and just works, even though
> getTargetSlice is clearly designed to route on a field other than the
> uniqueKey if we didn't just pass null as the "route" param.
> The confusing bit is that if I have a route field defined for my
> collection and want to use CloudSolrClient I have to figure out that I
> need to use the setIdField method to use that field for routing.
>  
> We should deprecate setIdField and refactor how this is used (i.e. 
> getRoutes). Need to beef up tests too I suspect.
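
To make the naming confusion concrete, here is a hedged SolrJ sketch of the scenario described above; the collection, field names, and ZooKeeper address are placeholders, and setIdField is the setter this issue proposes to deprecate:

{code}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class RouteFieldConfusionSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      client.setDefaultCollection("orders");

      // Assume the collection routes on "customer_id" while its uniqueKey is still "id".
      SolrInputDocument doc = new SolrInputDocument();
      doc.setField("id", "42");
      doc.setField("customer_id", "acme");

      // Despite the name, this tells the client which field to read when it
      // computes the target shard for direct updates -- the routing field, not
      // the uniqueKey. Reading this from ZooKeeper instead is what the issue asks for.
      client.setIdField("customer_id");

      client.add(doc);
      client.commit();
    }
  }
}
{code}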



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14435) createNodeSet and createNodeSet.shuffle parameters missing from Collection Restore RefGuide

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14435?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165718#comment-17165718
 ] 

ASF subversion and git services commented on SOLR-14435:


Commit 80b6dcecfebf267ca8390d8421b5d47305fc6db9 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Eric Pugh
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=80b6dce ]

SOLR-14435: Update collection management docs on RESTORE (#1683)

* include missing RESTORE parameters

* small grammer fix

* remove duplication of describing the parameters in favour of the pattern of 
pointing to the CREATE command documentation.

> createNodeSet and createNodeSet.shuffle parameters missing from Collection 
> Restore RefGuide
> ---
>
> Key: SOLR-14435
> URL: https://issues.apache.org/jira/browse/SOLR-14435
> Project: Solr
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Andras Salamon
>Assignee: David Eric Pugh
>Priority: Minor
> Fix For: 8.7
>
> Attachments: SOLR-14435-01.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> Although {{createNodeSet}} and {{createNodeSet.shuffle}} parameters are 
> supported by the Collection RESTORE command (I've tested it), they are 
> missing from the documentation:
> [https://lucene.apache.org/solr/guide/8_5/collection-management.html#collection-management]
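
For anyone landing here from the docs, a hedged sketch of what such a request looks like; the backup name, location, and node names are placeholders, while the parameter names are the ones this issue adds to the RefGuide:

{code}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class RestoreWithCreateNodeSetSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {
      // Equivalent to /admin/collections?action=RESTORE&name=...&collection=...
      //   &location=...&createNodeSet=...&createNodeSet.shuffle=true
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "RESTORE");
      params.set("name", "myBackup");            // backup name (placeholder)
      params.set("collection", "restored");      // target collection (placeholder)
      params.set("location", "/backups");        // backup location (placeholder)
      params.set("createNodeSet", "node1:8983_solr,node2:8983_solr");
      params.set("createNodeSet.shuffle", "true");
      client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params));
    }
  }
}
{code}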



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14652) SolrCore should hold its own CoreDescriptor

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14652?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165721#comment-17165721
 ] 

ASF subversion and git services commented on SOLR-14652:


Commit 5295007022b524160b76a7afc55b76d1eee26541 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from David Smiley
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5295007 ]

SOLR-14652: SolrCore should hold its own CoreDescriptor (#1675)

(minor refactoring)
Also:
* SolrCore's constructors don't need a "name" since it's guaranteed to always 
be the name in the coreDescriptor.  I checked.
* SolrCore's constructor shouldn't call 
coreContainer.solrCores.addCoreDescriptor(cd); because it's the container's 
responsibility to manage such things.  I made SolrCores.putCore ensure the 
descriptor is added, and this is called by CoreContainer.registerCore which is 
called after new SolrCore instances are created.
* solrCore.setName should only be called when we expect the name to change.  
Furthermore that shouldn't ever happen in SolrCloud so I added checks.
* solrCore.setName calls coreMetricManager.afterCoreSetName() which is 
something that is really only related to a rename, not name initialization 
(from the constructor).  I renamed that method and further only call it if the 
name did change from non-null.  

> SolrCore should hold its own CoreDescriptor
> ---
>
> Key: SOLR-14652
> URL: https://issues.apache.org/jira/browse/SOLR-14652
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Minor
> Fix For: 8.7
>
>  Time Spent: 2h
>  Remaining Estimate: 0h
>
> SolrCore.getCoreDescriptor() surprisingly must do 
> {{coreContainer.getCoreDescriptor(name)}} instead of simply return a field on 
> itself.  I think it's more sane that a SolrCore hold onto its own descriptor 
> making it unequivocally clear it will get it.  I've seen a transient-core 
> edge case where it didn't, though I don't want to classify this issue as a 
> bug fix over that.
> Also:
>  * SolrCore's constructors don't need a "name" since it's guaranteed to 
> always be the name in the coreDescriptor.  I checked.
>  * SolrCore's constructor shouldn't call 
> {{coreContainer.solrCores.addCoreDescriptor(cd);}} because it's the 
> container's responsibility to manage such things.  I made SolrCores.putCore 
> ensure the descriptor is added, and this is called by 
> CoreContainer.registerCore which is called after new SolrCore instances are 
> created.
>  * solrCore.setName should only be called when we expect the name to change.  
> Furthermore that shouldn't ever happen in SolrCloud so I added checks.
>  * solrCore.setName calls {{coreMetricManager.afterCoreSetName()}} which is 
> something that is really only related to a _rename_, not name initialization 
> (from the constructor).  I renamed that method and further only call it if 
> the name did change from non-null.  
>  
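
A hedged before/after sketch of the shape of this refactoring; the class and method names below are simplified stand-ins, not the real SolrCore/CoreContainer code:

{code}
// Simplified stand-ins for illustration only.
class CoreDescriptor {
  private final String name;
  CoreDescriptor(String name) { this.name = name; }
  String getName() { return name; }
}

class SolrCoreSketch {
  private final CoreDescriptor coreDescriptor;

  // After the change: the descriptor is handed in once, and no separate "name"
  // argument is needed because the name is always the one in the descriptor.
  SolrCoreSketch(CoreDescriptor cd) {
    this.coreDescriptor = cd;
  }

  // Before the change this had to go back to the container and look the
  // descriptor up by name; now it simply returns the field.
  CoreDescriptor getCoreDescriptor() {
    return coreDescriptor;
  }

  String getName() {
    return coreDescriptor.getName();
  }
}
{code}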



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9429) missing semicolon in overview.html

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9429?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165720#comment-17165720
 ] 

ASF subversion and git services commented on LUCENE-9429:
-

Commit d0642600ff1a09c2f9e1a641f5d49b4974c9523c in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Zeno Gantner
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=d064260 ]

LUCENE-9429 add missing semicolon (#1673)



> missing semicolon in overview.html
> --
>
> Key: LUCENE-9429
> URL: https://issues.apache.org/jira/browse/LUCENE-9429
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/javadocs
>Affects Versions: 8.6
>Reporter: Zeno Gantner
>Assignee: Mike Drob
>Priority: Trivial
> Fix For: master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> [https://lucene.apache.org/core/8_6_0/core/index.html]
>  
> Line:
> Directory directory = FSDirectory.open(indexPath)
> should end with a semicolon.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9312) Allow builds against arbitrary JVMs (even those unsupported by gradle)

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165714#comment-17165714
 ] 

ASF subversion and git services commented on LUCENE-9312:
-

Commit 8ebf2d0b2187d849032747d0102ca5eb57b76f05 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8ebf2d0 ]

LUCENE-9312: Allow builds against arbitrary JVMs (squashed
jira/LUCENE-9312)


> Allow builds against arbitrary JVMs (even those unsupported by gradle)
> --
>
> Key: LUCENE-9312
> URL: https://issues.apache.org/jira/browse/LUCENE-9312
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0)
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9437) Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly accessible

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165716#comment-17165716
 ] 

ASF subversion and git services commented on LUCENE-9437:
-

Commit 03a03b34a468f8095c7f0b87ceeaf4ba0d4aeaec in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Michael McCandless
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=03a03b3 ]

LUCENE-9437: make DocValuesOrdinalsReader.decode public


> Make DocValuesOrdinalsReader.decode(BytesRef, IntsRef) method publicly 
> accessible
> -
>
> Key: LUCENE-9437
> URL: https://issues.apache.org/jira/browse/LUCENE-9437
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 8.6
>Reporter: Ankur
>Priority: Trivial
> Fix For: 8.7
>
> Attachments: LUCENE-9437.patch
>
>
> Visibility of _DocValuesOrdinalsReader.decode(BytesRef, IntsRef)_ method is 
> set to 'protected'. This prevents the method from being used outside this 
> class in a setting where BinaryDocValues reader is instantiated outside the 
> class and binary payload containing ordinals still needs to be decoded.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14608) Faster sorting for the /export handler

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165723#comment-17165723
 ] 

ASF subversion and git services commented on SOLR-14608:


Commit bf8d954ca1289d82eb5334719fb97bbabacacb09 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Andrzej Bialecki
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=bf8d954 ]

Merge branch 'master' into jira/SOLR-14608-export


> Faster sorting for the /export handler
> --
>
> Key: SOLR-14608
> URL: https://issues.apache.org/jira/browse/SOLR-14608
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Andrzej Bialecki
>Priority: Major
>
> The largest cost of the export handler is the sorting. This ticket will 
> implement an improved algorithm for sorting that should greatly increase 
> overall throughput for the export handler.
> *The current algorithm is as follows:*
> Collect a bitset of matching docs. Iterate over that bitset and materialize 
> the top level ordinals for the sort fields in the document and add them to a 
> priority queue of size 30,000. Then export the top 30,000 docs, turn off the 
> bits in the bit set and iterate again until all docs are sorted and sent. 
> There are two performance bottlenecks with this approach:
> 1) Materializing the top level ordinals adds a huge amount of overhead to the 
> sorting process.
> 2) The size of priority queue, 30,000, adds significant overhead to sorting 
> operations.
> *The new algorithm:*
> Has a top level *merge sort iterator* that wraps segment level iterators that 
> perform segment level priority queue sorts.
> *Segment level:*
> The segment level docset will be iterated and the segment level ordinals for 
> the sort fields will be materialized and added to a segment level priority 
> queue. As the segment level iterator pops docs from the priority queue the 
> top level ordinals for the sort fields are materialized. Because the top 
> level ordinals are materialized AFTER the sort, they only need to be looked 
> up when the segment level ordinal changes. This takes advantage of the sort 
> to limit the lookups into the top level ordinal structures. This also 
> eliminates redundant lookups of top level ordinals that occur during the 
> multiple passes over the matching docset.
> The segment level priority queues can be kept smaller than 30,000 to improve 
> performance of the sorting operations because the overall batch size will 
> still be 30,000 or greater when all the segment priority queue sizes are 
> added up. This allows for batch sizes much larger than 30,000 without using a 
> single large priority queue. The increased batch size means fewer iterations 
> over the matching docset and the decreased priority queue size means faster 
> sorting operations.
> *Top level:*
> A top level iterator does a merge sort over the segment level iterators by 
> comparing the top level ordinals materialized when the segment level docs are 
> popped from the segment level priority queues. This requires no extra memory 
> and will be very performant.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11656) TLOG replication doesn't work properly after rebalancing leaders.

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11656?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165719#comment-17165719
 ] 

ASF subversion and git services commented on SOLR-11656:


Commit 4b2e90b3aaf6d49034727fb5dad59a8bd37f131e in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=4b2e90b3 ]

SOLR-11656: TLOG replication doesn't work properly after rebalancing leaders.


> TLOG replication doesn't work properly after rebalancing leaders.
> -
>
> Key: SOLR-11656
> URL: https://issues.apache.org/jira/browse/SOLR-11656
> Project: Solr
>  Issue Type: Bug
>  Components: replication (java)
>Affects Versions: 7.1
>Reporter: Yuki Yano
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.7
>
> Attachments: SOLR-11656.patch, SOLR-11656.patch
>
>
> With TLOG replica type, the replication may stop after invoking rebalance 
> leaders API.
> This can be reproduced by following steps:
> # Create SolrCloud with TLOG replica type.
> # Set the preferredLeader flag on some of the non-leader nodes.
> # Invoke rebalance leaders API.
> # The replication stops in nodes which were "leader" before rebalancing. 
> Because the leader node doesn't have the replication thread, we need to 
> create it when the status is changed from "leader" to "replica". On the other 
> hand, rebalance leaders API doesn't consider this matter, and the replication 
> may stop if the status is changed from "leader" to "replica" by rebalance 
> leaders.
> Note that, we can avoid this problem if we reload or restart Solr after 
> rebalancing leaders.
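
A hedged sketch of the reproduction steps as SolrJ/Collections API calls; the collection, shard, and replica names are placeholders, and the only point is the sequence (TLOG-only collection, preferredLeader on a non-leader, then REBALANCELEADERS):

{code}
import java.util.Collections;
import java.util.Optional;
import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.client.solrj.request.CollectionAdminRequest;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class TlogRebalanceReproSketch {
  public static void main(String[] args) throws Exception {
    try (CloudSolrClient client = new CloudSolrClient.Builder(
        Collections.singletonList("localhost:2181"), Optional.empty()).build()) {

      // 1. A collection with TLOG replicas only (0 NRT, 2 TLOG, 0 PULL per shard).
      CollectionAdminRequest.createCollection("tlogColl", "_default", 1, 0, 2, 0)
          .process(client);

      // 2. Mark a current non-leader replica as preferredLeader (replica name is a placeholder).
      CollectionAdminRequest.addReplicaProperty(
          "tlogColl", "shard1", "core_node4", "preferredLeader", "true")
          .process(client);

      // 3. Rebalance leaders; per this issue, the old leader may then stop replicating.
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "REBALANCELEADERS");
      params.set("collection", "tlogColl");
      client.request(new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/collections", params));
    }
  }
}
{code}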



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-11868) Deprecate CloudSolrClient.setIdField, use information from Zookeeper

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-11868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165722#comment-17165722
 ] 

ASF subversion and git services commented on SOLR-11868:


Commit 6bf5f4a87f40cd9afd3b9104423d0fe51f287259 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=6bf5f4a ]

SOLR-11868: CloudSolrClient.setIdField is confusing, it's really the routing 
field. Should be deprecated.


> Deprecate CloudSolrClient.setIdField, use information from Zookeeper
> 
>
> Key: SOLR-11868
> URL: https://issues.apache.org/jira/browse/SOLR-11868
> Project: Solr
>  Issue Type: Improvement
>Affects Versions: 7.2
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Major
> Fix For: 8.7
>
>
> IIUC idField has nothing to do with the uniqueKey field. It's really
> the field used to route documents. Agreed, this is often the "id"
> field, but still
> In fact, over in UpdateRequest.getRoutes(), it's passed as the "id"
> field to router.getTargetSlice() and just works, even though
> getTargetSlice is clearly designed to route on a field other than the
> uniqueKey if we didn't just pass null as the "route" param.
> The confusing bit is that if I have a route field defined for my
> collection and want to use CloudSolrClient I have to figure out that I
> need to use the setIdField method to use that field for routing.
>  
> We should deprecate setIdField and refactor how this is used (i.e. 
> getRoutes). Need to beef up tests too I suspect.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9312) Allow builds against arbitrary JVMs (even those unsupported by gradle)

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165715#comment-17165715
 ] 

ASF subversion and git services commented on LUCENE-9312:
-

Commit 8ebf2d0b2187d849032747d0102ca5eb57b76f05 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Dawid Weiss
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=8ebf2d0 ]

LUCENE-9312: Allow builds against arbitrary JVMs (squashed
jira/LUCENE-9312)


> Allow builds against arbitrary JVMs (even those unsupported by gradle)
> --
>
> Key: LUCENE-9312
> URL: https://issues.apache.org/jira/browse/LUCENE-9312
> Project: Lucene - Core
>  Issue Type: Sub-task
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Major
> Fix For: master (9.0)
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14676) Update commons-collections to 4.4 and use it in Solr

2020-07-27 Thread ASF subversion and git services (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14676?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165717#comment-17165717
 ] 

ASF subversion and git services commented on SOLR-14676:


Commit 67da34ac3b5d1dfbd3757364c5274990da295fc0 in lucene-solr's branch 
refs/heads/jira/SOLR-14608-export from Erick Erickson
[ https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=67da34a ]

SOLR-14676: Update commons-collections to 4.4 and use it in Solr


> Update commons-collections to 4.4 and use it in Solr
> 
>
> Key: SOLR-14676
> URL: https://issues.apache.org/jira/browse/SOLR-14676
> Project: Solr
>  Issue Type: Wish
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: clients - java
>Affects Versions: 8.6
>Reporter: Bernd Wahlen
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 8.7
>
>
> solr-test-framework has dependency to commons-collections:commons-collections
> which moved to org.apache.commons:commons-collections4 some years ago.
> It would be nice to replace or remove this old dependency.
> NOTE: Hadoop requires 3.2.2 so we need it. That said, we've already upgraded 
> commons-collections to 4.2 but Solr code still uses 3.2.2. This ticket will 
> upgrade to 4.4 and eliminate Solr's use of 3.2.2. We still need 3.2.2 as well 
> due to Hadoop.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Comment Edited] (SOLR-14608) Faster sorting for the /export handler

2020-07-27 Thread Gus Heck (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165712#comment-17165712
 ] 

Gus Heck edited comment on SOLR-14608 at 7/27/20, 1:41 PM:
---

A question from a customer caused me to re-read this and think a bit more 
deeply. I'm wondering about the fact that the priority queue has a limit on 
its size. This would seem to place a (hard to define) limit on the size of the 
segment, and perhaps fail by returning out of order docs silently? (The client 
case in question is a collection that is approaching half a trillion 
documents...)


was (Author: gus_heck):
A question from a customer caused me to re-read this and think a bit more 
deeply. I'm wondering about the fact that the priority queue has a limit on 
its size. This would seem to place a (hard to define) limit on the size of the 
segment, and perhaps fail by returning out of order docs silently? (The client 
case in question is a cluster that is approaching half a trillion documents...)

> Faster sorting for the /export handler
> --
>
> Key: SOLR-14608
> URL: https://issues.apache.org/jira/browse/SOLR-14608
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Andrzej Bialecki
>Priority: Major
>
> The largest cost of the export handler is the sorting. This ticket will 
> implement an improved algorithm for sorting that should greatly increase 
> overall throughput for the export handler.
> *The current algorithm is as follows:*
> Collect a bitset of matching docs. Iterate over that bitset and materialize 
> the top level ordinals for the sort fields in the document and add them to a 
> priority queue of size 30,000. Then export the top 30,000 docs, turn off the 
> bits in the bit set and iterate again until all docs are sorted and sent. 
> There are two performance bottlenecks with this approach:
> 1) Materializing the top level ordinals adds a huge amount of overhead to the 
> sorting process.
> 2) The size of priority queue, 30,000, adds significant overhead to sorting 
> operations.
> *The new algorithm:*
> Has a top level *merge sort iterator* that wraps segment level iterators that 
> perform segment level priority queue sorts.
> *Segment level:*
> The segment level docset will be iterated and the segment level ordinals for 
> the sort fields will be materialized and added to a segment level priority 
> queue. As the segment level iterator pops docs from the priority queue the 
> top level ordinals for the sort fields are materialized. Because the top 
> level ordinals are materialized AFTER the sort, they only need to be looked 
> up when the segment level ordinal changes. This takes advantage of the sort 
> to limit the lookups into the top level ordinal structures. This also 
> eliminates redundant lookups of top level ordinals that occur during the 
> multiple passes over the matching docset.
> The segment level priority queues can be kept smaller than 30,000 to improve 
> performance of the sorting operations because the overall batch size will 
> still be 30,000 or greater when all the segment priority queue sizes are 
> added up. This allows for batch sizes much larger than 30,000 without using a 
> single large priority queue. The increased batch size means fewer iterations 
> over the matching docset and the decreased priority queue size means faster 
> sorting operations.
> *Top level:*
> A top level iterator does a merge sort over the segment level iterators by 
> comparing the top level ordinals materialized when the segment level docs are 
> popped from the segment level priority queues. This requires no extra memory 
> and will be very performant.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-14608) Faster sorting for the /export handler

2020-07-27 Thread Gus Heck (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-14608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165712#comment-17165712
 ] 

Gus Heck commented on SOLR-14608:
-

A question from a customer caused me to re-read this and think a bit more 
deeply. I'm wondering about the fact that the priority queue has a limit on 
its size. This would seem to place a (hard to define) limit on the size of the 
segment, and perhaps fail by returning out of order docs silently? (The client 
case in question is a cluster that is approaching half a trillion documents...)

> Faster sorting for the /export handler
> --
>
> Key: SOLR-14608
> URL: https://issues.apache.org/jira/browse/SOLR-14608
> Project: Solr
>  Issue Type: New Feature
>Reporter: Joel Bernstein
>Assignee: Andrzej Bialecki
>Priority: Major
>
> The largest cost of the export handler is the sorting. This ticket will 
> implement an improved algorithm for sorting that should greatly increase 
> overall throughput for the export handler.
> *The current algorithm is as follows:*
> Collect a bitset of matching docs. Iterate over that bitset and materialize 
> the top level ordinals for the sort fields in the document and add them to a 
> priority queue of size 30,000. Then export the top 30,000 docs, turn off the 
> bits in the bit set and iterate again until all docs are sorted and sent. 
> There are two performance bottlenecks with this approach:
> 1) Materializing the top level ordinals adds a huge amount of overhead to the 
> sorting process.
> 2) The size of priority queue, 30,000, adds significant overhead to sorting 
> operations.
> *The new algorithm:*
> Has a top level *merge sort iterator* that wraps segment level iterators that 
> perform segment level priority queue sorts.
> *Segment level:*
> The segment level docset will be iterated and the segment level ordinals for 
> the sort fields will be materialized and added to a segment level priority 
> queue. As the segment level iterator pops docs from the priority queue the 
> top level ordinals for the sort fields are materialized. Because the top 
> level ordinals are materialized AFTER the sort, they only need to be looked 
> up when the segment level ordinal changes. This takes advantage of the sort 
> to limit the lookups into the top level ordinal structures. This also 
> eliminates redundant lookups of top level ordinals that occur during the 
> multiple passes over the matching docset.
> The segment level priority queues can be kept smaller than 30,000 to improve 
> performance of the sorting operations because the overall batch size will 
> still be 30,000 or greater when all the segment priority queue sizes are 
> added up. This allows for batch sizes much larger than 30,000 without using a 
> single large priority queue. The increased batch size means fewer iterations 
> over the matching docset and the decreased priority queue size means faster 
> sorting operations.
> *Top level:*
> A top level iterator does a merge sort over the segment level iterators by 
> comparing the top level ordinals materialized when the segment level docs are 
> popped from the segment level priority queues. This requires no extra memory 
> and will be very performant.
>  



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-14636) Provide a reference implementation for SolrCloud that is stable and fast.

2020-07-27 Thread Mark Robert Miller (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-14636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Robert Miller updated SOLR-14636:
--
Description: 
SolrCloud powers critical infrastructure and needs the ability to run quickly 
with stability. This reference implementation will allow for this.

*location*: [https://github.com/apache/lucene-solr/tree/reference_impl]

*status*: alpha

*speed*: ludicrous

*tests***:
 * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
 * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
 * *test-framework*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/analytics*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/clustering*: {color:#00875a}*solid*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/dataimporthandler*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/dataimporthandler-extras*: {color:#00875a}*solid*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/extraction*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/jaegertracer-configurator*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/langid*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/prometheus-exporter*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/velocity*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}

_* Running tests quickly and efficiently with strict policing will more 
frequently find bugs and requires a period of hardening._
 _** Non Nightly currently, Nightly comes last._

  was:
SolrCloud powers critical infrastructure and needs the ability to run quickly 
with stability. This reference implementation will allow for this.

*location*: [https://github.com/apache/lucene-solr/tree/reference_impl]

*status*: alpha

*speed*: ludicrous

{color:#de350b}NOTE: Just entered a period of likely instability as I clear out 
the new room of zombies.{color}

*tests***:
 * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
 * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
 * *test-framework*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/analytics*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/clustering*: {color:#00875a}*solid*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/dataimporthandler*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/dataimporthandler-extras*: {color:#00875a}*solid*{color} with 
*{color:#de350b}ignores{color}*
 * *contrib/extraction*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/jaegertracer-configurator*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/langid*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/prometheus-exporter*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}
 * *contrib/velocity*: {color:#00875a}*solid*{color} with 
{color:#de350b}*ignores*{color}

_* Running tests quickly and efficiently with strict policing will more 
frequently find bugs and requires a period of hardening._
 _** Non Nightly currently, Nightly comes last._


> Provide a reference implementation for SolrCloud that is stable and fast.
> -
>
> Key: SOLR-14636
> URL: https://issues.apache.org/jira/browse/SOLR-14636
> Project: Solr
>  Issue Type: Task
>Reporter: Mark Robert Miller
>Assignee: Mark Robert Miller
>Priority: Major
> Attachments: IMG_5575 (1).jpg
>
>
> SolrCloud powers critical infrastructure and needs the ability to run quickly 
> with stability. This reference implementation will allow for this.
> *location*: [https://github.com/apache/lucene-solr/tree/reference_impl]
> *status*: alpha
> *speed*: ludicrous
> *tests***:
>  * *core*: {color:#00875a}*solid*{color} with *{color:#de350b}ignores{color}*
>  * *solrj*: {color:#00875a}*solid*{color} with {color:#de350b}*ignores*{color}
>  * *test-framework*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analysis-extras*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/analytics*: {color:#00875a}*solid*{color} with 
> {color:#de350b}*ignores*{color}
>  * *contrib/clustering*: {color:#00875a}*solid*{color} with 
> *{color:#de350b}ignores{color}*
>  * *contrib/da

[jira] [Created] (SOLR-14682) Remove repairs put in for upgrading Solr from 6.6.1 to Solr 7.1

2020-07-27 Thread Erick Erickson (Jira)
Erick Erickson created SOLR-14682:
-

 Summary: Remove repairs put in for upgrading Solr from 6.6.1 to 
Solr 7.1
 Key: SOLR-14682
 URL: https://issues.apache.org/jira/browse/SOLR-14682
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: SolrCloud
Reporter: Erick Erickson
Assignee: Erick Erickson


This was the bit where we couldn't upgrade from 6.6.1 to 7.1 due to 
coreNodeName missing from core.properties; that workaround has long outlived 
its usefulness at this point.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] mocobeta commented on pull request #1695: LUCENE-9321: Fix Javadoc offline link base url for snapshot build

2020-07-27 Thread GitBox


mocobeta commented on pull request #1695:
URL: https://github.com/apache/lucene-solr/pull/1695#issuecomment-664343912


   @uschindler would you check the changes?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on pull request #1694: SOLR-14680: Provide simple interfaces to our concrete SolrCloud classes

2020-07-27 Thread GitBox


murblanc commented on pull request #1694:
URL: https://github.com/apache/lucene-solr/pull/1694#issuecomment-664333081


   We also need to guarantee some immutability of the instances passed to 
plugins. Computing placement while the underlying data (cluster state) can change 
during the computation would be tricky. So not only are these abstractions read 
only (plugins can’t use them to change Solr state), but the instances passed to 
plugins are also immutable. That would make it a bit harder to have only the 
internal classes implement these interfaces.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Resolved] (SOLR-12847) Cut over implementation of maxShardsPerNode to a collection policy

2020-07-27 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki resolved SOLR-12847.
-
Resolution: Won't Fix

> Cut over implementation of maxShardsPerNode to a collection policy
> --
>
> Key: SOLR-12847
> URL: https://issues.apache.org/jira/browse/SOLR-12847
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We've gone back and forth over handling maxShardsPerNode with autoscaling policies 
> (see SOLR-11005 for history). Now that we've reimplemented support for 
> creating collections with maxShardsPerNode when autoscaling policy is 
> enabled, we should re-look at how it is implemented.
> I propose that we fold maxShardsPerNode (if specified) to a collection level 
> policy that overrides the corresponding default in cluster policy (see 
> SOLR-12845). We'll need to ensure that if maxShardsPerNode is specified then 
> the user sees neither violations nor corresponding suggestions because of the 
> default cluster policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12847) Cut over implementation of maxShardsPerNode to a collection policy

2020-07-27 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165618#comment-17165618
 ] 

Andrzej Bialecki commented on SOLR-12847:
-

With the removal of autoscaling in 9.0 this issue is no longer applicable - 
closing. If we want a similar functionality in the new framework we should 
specify and implement it there.

> Cut over implementation of maxShardsPerNode to a collection policy
> --
>
> Key: SOLR-12847
> URL: https://issues.apache.org/jira/browse/SOLR-12847
> Project: Solr
>  Issue Type: Bug
>  Components: AutoScaling, SolrCloud
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: master (9.0)
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> We've gone back and forth over handling maxShardsPerNode with autoscaling policies 
> (see SOLR-11005 for history). Now that we've reimplemented support for 
> creating collections with maxShardsPerNode when autoscaling policy is 
> enabled, we should re-look at how it is implemented.
> I propose that we fold maxShardsPerNode (if specified) to a collection level 
> policy that overrides the corresponding default in cluster policy (see 
> SOLR-12845). We'll need to ensure that if maxShardsPerNode is specified then 
> the user sees neither violations nor corresponding suggestions because of the 
> default cluster policy.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on pull request #1694: SOLR-14680: Provide simple interfaces to our concrete SolrCloud classes

2020-07-27 Thread GitBox


murblanc commented on pull request #1694:
URL: https://github.com/apache/lucene-solr/pull/1694#issuecomment-664312074


   Thanks Noble.
   So what’s your guidance regarding the Autoscaling PR? Shall we assume the 
Autoscaling plugins need to use these interfaces or shall I continue building 
cluster/node/collection/shard/replica abstractions that make plugin development 
easier?
   
   If the first option, there will be a few requirements on these interfaces 
(and their implementations), for example allowing them to be keys in maps 
(variable values or snitches are per node or collection or replica, etc.; they 
need to be stored somewhere so the plugin can access them). Ideally, as 
stated elsewhere, all these interfaces would implement a common one (could be empty) 
so that a snitch/variable target can be passed easily.
   
   The Autoscaling plugin work can also define similar abstractions on top of 
the interfaces defined here.
   I do see value for all plugins of all types to share a single set of 
abstractions but obviously this puts more constraints on these abstractions...
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread Andrzej Bialecki (Jira)


[ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165603#comment-17165603
 ] 

Andrzej Bialecki commented on SOLR-12845:
-

(back from vacation, sorry for the radio silence). I'm sorry for this mess. 
Yes, this should be reverted from branch_8x and branch_8_6.

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screenshot from 
> 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Updated] (SOLR-12845) Add a default cluster policy

2020-07-27 Thread Andrzej Bialecki (Jira)


 [ 
https://issues.apache.org/jira/browse/SOLR-12845?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrzej Bialecki updated SOLR-12845:

Status: Reopened  (was: Closed)

> Add a default cluster policy
> 
>
> Key: SOLR-12845
> URL: https://issues.apache.org/jira/browse/SOLR-12845
> Project: Solr
>  Issue Type: Improvement
>  Components: AutoScaling
>Reporter: Shalin Shekhar Mangar
>Assignee: Andrzej Bialecki
>Priority: Major
> Fix For: 8.6
>
> Attachments: SOLR-12845.patch, SOLR-12845.patch, Screenshot from 
> 2020-07-18 21-07-34.png
>
>
> [~varunthacker] commented on SOLR-12739:
> bq. We should also ship with some default policies - "Don't allow more than 
> one replica of a shard on the same JVM" , "Distribute cores across the 
> cluster evenly" , "Distribute replicas per collection across the nodes"
> This issue is about adding these defaults. I propose the following as default 
> cluster policy:
> {code}
> # Each shard cannot have more than one replica on the same node if possible
> {"replica": "<2", "shard": "#EACH", "node": "#ANY", "strict":false}
> # Each collection's replicas should be equally distributed amongst nodes
> {"replica": "#EQUAL", "node": "#ANY", "strict":false} 
> # All cores should be equally distributed amongst nodes
> {"cores": "#EQUAL", "node": "#ANY", "strict":false}
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9416) Fix CheckIndex to print norms as unsigned integers

2020-07-27 Thread Mohammad Sadiq (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9416?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165583#comment-17165583
 ] 

Mohammad Sadiq commented on LUCENE-9416:


Justification for the lack of tests with this patch: this is a change in the output 
message being printed, with no change in the behaviour. [See the 
note|https://issues.apache.org/jira/browse/LUCENE-9416?focusedCommentId=17164487&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-17164487]
 by Mike McCandless that testing this might not be worth the complexity.

> Fix CheckIndex to print norms as unsigned integers
> --
>
> Key: LUCENE-9416
> URL: https://issues.apache.org/jira/browse/LUCENE-9416
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/index
>Reporter: Mohammad Sadiq
>Priority: Minor
> Attachments: LUCENE-9416.patch
>
>
> In the [discussion on "CheckIndex complaining about -1 for norms value" in 
> the java-user list|http://markmail.org/message/gcwdhasblsyovwc2], it was 
> identified that we should "fix CheckIndex to print norms as unsigned 
> integers".
> I'd like to take a stab at this.
> I'm trying to understand the problem and from what I gather, while norms are 
> `byte`s, the API exposes them as `long` values. While printing the error 
> message, we want it to print a zero instead of -1?
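
A tiny hedged illustration of what printing a norm as an unsigned integer means for the -1 case discussed here (this is not the CheckIndex patch itself, just the arithmetic):

{code}
public class UnsignedNormPrintSketch {
  public static void main(String[] args) {
    byte storedNorm = -1;                              // norms are stored as bytes
    long fromApi = storedNorm;                         // the API exposes them as longs: -1
    long unsigned = Byte.toUnsignedLong(storedNorm);   // an unsigned print shows 255
    System.out.println("signed=" + fromApi + " unsigned=" + unsigned);
  }
}
{code}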



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul commented on pull request #1694: SOLR-14680: Provide simple interfaces to our concrete SolrCloud classes

2020-07-27 Thread GitBox


noblepaul commented on pull request #1694:
URL: https://github.com/apache/lucene-solr/pull/1694#issuecomment-664244951


   >I’d think an instance of the abstraction should be returned instead,
   
   I thought about it and decided against it. We should always try to keep the 
methods light and avoid creating unnecessary Objects.
   For instance, `Shard#leader()` returns the name of the leader replica. A 
String is a lightweight object and we do not need to create a `ShardReplica` 
object unnecessarily.
   
   > I see two cases where we need interfaces such as defined here:
   
   Both. over time , we will move as much of our code to use these interfaces 
instead of concrete classes.
   
   I do not see a reason why external plugins cannot use these interfaces. That 
is not to say that external plugins won't need some more methods; in that case, 
just compose your own interfaces using these as building blocks.
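
A hedged sketch of the two API shapes being weighed in this thread; these are not the actual SOLR-14680 interfaces, just an illustration of the trade-off between returning names and returning objects:

{code}
// Names-only shape: lightweight, no per-call object creation.
interface Shard {
  String name();
  String collection();   // name of the owning collection
  String leader();       // name of the leader replica
}

// Object-returning shape: richer navigation for callers, at the cost of
// creating (or caching) an instance on every call.
interface ShardAlternative {
  String name();
  SolrCollectionView collection();
  ShardReplicaView leader();
}

interface SolrCollectionView { String name(); }
interface ShardReplicaView { String name(); }
{code}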



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on pull request #1694: SOLR-14680: Provide simple interfaces to our concrete SolrCloud classes

2020-07-27 Thread GitBox


murblanc commented on pull request #1694:
URL: https://github.com/apache/lucene-solr/pull/1694#issuecomment-664240038


   I see two cases where we need interfaces such as defined here:
   - internal code. Coding to interfaces rather than the actual implementation 
usually makes for better-structured code with fewer implementation leaks,
   - external “plugins”.
   
   What’s the intention? If only the latter, then there’s really no need to have 
the internal classes implement these interfaces; a wrapper is ok.
   I believe addressing both points with a single interface is complicated (and 
to a point counterproductive, as it ties internal and external views).
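
For the wrapper option mentioned above, a minimal sketch (with hypothetical class names standing in for the real internal classes) could look like this:

```java
// Hypothetical internal class; stands in for a concrete SolrCloud class.
class InternalReplica {
  String name() { return "replica1"; }
  String nodeName() { return "node1:8983_solr"; }
}

// External-facing interface from the SDK discussion.
interface ShardReplica {
  String getName();
  String getNodeName();
}

// Wrapper: the internal class stays untouched, plugins only ever see the interface.
class ShardReplicaWrapper implements ShardReplica {
  private final InternalReplica delegate;
  ShardReplicaWrapper(InternalReplica delegate) { this.delegate = delegate; }
  @Override public String getName() { return delegate.name(); }
  @Override public String getNodeName() { return delegate.nodeName(); }
}
```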



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] murblanc commented on pull request #1694: SOLR-14680: Provide simple interfaces to our concrete SolrCloud classes

2020-07-27 Thread GitBox


murblanc commented on pull request #1694:
URL: https://github.com/apache/lucene-solr/pull/1694#issuecomment-664230886


   Many of the methods that return other objects actually return the names of 
other abstractions being defined here (Shard, Collection, Node, etc.).
   I’d think an instance of the abstraction should be returned instead, and 
that instance would have a getName() method (i.e. have SolrCollection 
getCollection() rather than String getCollection() on a Shard, for example).
   What’s the rationale for returning names rather than objects?



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460720011



##
File path: solr/solrj/src/java/org/apache/solr/client/solrj/impl/HttpSolrClient.java
##
@@ -358,7 +358,11 @@ protected HttpRequestBase createMethod(@SuppressWarnings({"rawtypes"})SolrReques
     if (parser == null) {
       parser = this.parser;
     }
-
+
+    Header[] contextHeaders = new Header[2];
+    contextHeaders[0] = new BasicHeader(CommonParams.SOLR_REQUEST_CONTEXT_PARAM, getContext().toString());

Review comment:
   I am honestly not sure about that -- Can you suggest a mechanism to 
achieve that? Ideas are more than welcome!
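
For readers following along, a hedged sketch of what attaching the two context headers could look like; the header string values and the helper itself are placeholders, only the CommonParams constants come from the PR:

```java
import org.apache.http.client.methods.HttpRequestBase;
import org.apache.http.message.BasicHeader;

final class ContextHeaderUtil {
  // Placeholder names; the PR uses CommonParams.SOLR_REQUEST_CONTEXT_PARAM
  // and CommonParams.SOLR_REQUEST_TYPE_PARAM for the actual header names.
  static final String REQUEST_CONTEXT_HEADER = "Solr-Request-Context";
  static final String REQUEST_TYPE_HEADER = "Solr-Request-Type";

  // Attach both headers to the HttpClient method built in createMethod().
  static void addContextHeaders(HttpRequestBase method, String context, String requestType) {
    method.setHeader(new BasicHeader(REQUEST_CONTEXT_HEADER, context));
    method.setHeader(new BasicHeader(REQUEST_TYPE_HEADER, requestType));
  }
}
```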





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] dweiss commented on pull request #1695: LUCENE-9321: Fix Javadoc offline link base url for snapshot build

2020-07-27 Thread GitBox


dweiss commented on pull request #1695:
URL: https://github.com/apache/lucene-solr/pull/1695#issuecomment-664187825


   Looks all right to me. 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[jira] [Commented] (LUCENE-9439) Matches API should enumerate hit fields that have no positions (no iterator)

2020-07-27 Thread Dawid Weiss (Jira)


[ 
https://issues.apache.org/jira/browse/LUCENE-9439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17165520#comment-17165520
 ] 

Dawid Weiss commented on LUCENE-9439:
-

Ok, I see your point now. An iterator returning -1 for both positions and 
offsets will require additional logic on the consumer side to detect it... I 
don't know how much of an inconvenience this is but I am sure it can be lived 
with. I think I should extract that highlighter code and place it together here 
so that you can see the bigger picture and how it all fits together. I think 
it'd be more helpful than trying to figure out the required API changes in isolation.

I'll try to prepare it in the course of the week.
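
A minimal sketch of the consumer-side branch being discussed, i.e. code that has to special-case an iterator whose positions come back as -1 (my reading of the proposal, not the attached patch):

```java
import java.io.IOException;

import org.apache.lucene.search.Matches;
import org.apache.lucene.search.MatchesIterator;

final class MatchesConsumerSketch {
  static void collect(Matches matches) throws IOException {
    for (String field : matches) {                 // Matches enumerates the matching fields
      MatchesIterator it = matches.getMatches(field);
      if (it == null) {
        continue;
      }
      while (it.next()) {
        if (it.startPosition() == -1) {
          // Field matched but carries no positions (token-like field):
          // this is the extra consumer-side branch mentioned above.
          System.out.println(field + ": match without positions");
        } else {
          System.out.println(field + ": positions " + it.startPosition() + "-" + it.endPosition());
        }
      }
    }
  }
}
```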

> Matches API should enumerate hit fields that have no positions (no iterator)
> 
>
> Key: LUCENE-9439
> URL: https://issues.apache.org/jira/browse/LUCENE-9439
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Dawid Weiss
>Assignee: Dawid Weiss
>Priority: Minor
> Attachments: LUCENE-9439.patch
>
>  Time Spent: 20m
>  Remaining Estimate: 0h
>
> I have been fiddling with Matches API and it's great. There is one corner 
> case that doesn't work for me though -- queries that affect fields without 
> positions return {{MatchesUtil.MATCH_WITH_NO_TERMS}} but this constant is 
> problematic as it doesn't carry the field name that caused it (returns null).
> The associated fromSubMatches combines all these constants into one (or 
> swallows them) which is another problem.
> I think it would be more consistent if MATCH_WITH_NO_TERMS was replaced with 
> a true match (carrying field name) returning an empty iterator (or a constant 
> "empty" iterator NO_TERMS).
> I have a very compelling use case: I wrote an "auto-highlighter" that runs on 
> top of Matches API and automatically picks up query-relevant fields and 
> snippets. Everything works beautifully except for cases where fields are 
> searchable but don't have any positions (token-like fields).
> I can work on a patch but wanted to reach out first - [~romseygeek]?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] noblepaul edited a comment on pull request #1694: SOLR-14680: Provide simple interfaces to our concrete SolrCloud classes

2020-07-27 Thread GitBox


noblepaul edited a comment on pull request #1694:
URL: https://github.com/apache/lucene-solr/pull/1694#issuecomment-664094145


   > Can we start simple by defining the external interface without the 
required changes to internal classes first? Will make a smaller PR easier to 
discuss.
   
   Feel free to ignore the implementing classes. That was to demonstrate one 
way of implementing these interfaces. I'm happy to focus on the interfaces 
inside the `sdk` package
   
   I have updated the description. Only the SDK is important. Implementation 
can/will be removed 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460693175



##
File path: solr/core/src/java/org/apache/solr/servlet/RequestRateLimiter.java
##
@@ -0,0 +1,199 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.servlet;
+
+import javax.servlet.AsyncContext;
+import javax.servlet.AsyncEvent;
+import javax.servlet.AsyncListener;
+import javax.servlet.FilterConfig;
+import javax.servlet.http.HttpServletRequest;
+import javax.servlet.http.HttpServletResponse;
+import java.io.IOException;
+import java.lang.invoke.MethodHandles;
+import java.util.Queue;
+import java.util.concurrent.ConcurrentLinkedQueue;
+import java.util.concurrent.Semaphore;
+import java.util.concurrent.TimeUnit;
+
+import org.slf4j.Logger;
+import org.slf4j.LoggerFactory;
+
+/**
+ * Handles rate limiting for a specific request type.
+ *
+ * The control flow is as follows:
+ * Handle request -- Check if slot is available -- If available, acquire slot 
and proceed --
+ * else asynchronously queue the request.
+ *
+ * When an active request completes, a check is performed to see if there are 
any pending requests.
+ * If there is an available pending request, process the same.
+ */
+public class RequestRateLimiter {
+  private static final Logger log = LoggerFactory.getLogger(MethodHandles.lookup().lookupClass());
+
+  private Semaphore allowedConcurrentRequests;
+  private RateLimiterConfig rateLimiterConfig;
+  private Queue<AsyncContext> waitQueue;
+  private Queue<AsyncListener> listenerQueue;
+
+  public RequestRateLimiter(RateLimiterConfig rateLimiterConfig) {
+    this.rateLimiterConfig = rateLimiterConfig;
+    this.allowedConcurrentRequests = new Semaphore(rateLimiterConfig.allowedRequests);
+    this.waitQueue = new ConcurrentLinkedQueue<>();
+    this.listenerQueue = new ConcurrentLinkedQueue<>();
+  }
+
+  public boolean handleRequest(HttpServletRequest request) throws InterruptedException {
+
+    if (!rateLimiterConfig.isEnabled) {
+      return true;
+    }
+
+    boolean accepted = allowedConcurrentRequests.tryAcquire(rateLimiterConfig.waitForSlotAcquisition, TimeUnit.MILLISECONDS);
+
+    if (!accepted) {
+      AsyncContext asyncContext = request.startAsync();

Review comment:
   I don't think so -- an async request should just return its current 
context here.
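
A small sketch of the pattern described here (reuse the existing async context instead of starting async mode twice); the helper class is illustrative only:

```java
import javax.servlet.AsyncContext;
import javax.servlet.http.HttpServletRequest;

final class AsyncContextHelper {
  // startAsync() throws IllegalStateException if the request is already in
  // async mode, so an already-async request should just hand back its context.
  static AsyncContext contextFor(HttpServletRequest request) {
    return request.isAsyncStarted() ? request.getAsyncContext() : request.startAsync();
  }
}
```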





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460690321



##
File path: solr/solr-ref-guide/src/rate-limiters.adoc
##
@@ -0,0 +1,101 @@
+= Request Rate Limiters
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+Solr allows rate limiting per request type. Each request type can be allocated 
a maximum allowed number of concurrent requests
+that can be active. The default rate limiting is implemented for updates and 
searches.
+
+If a request exceeds the request quota, further incoming requests are 
automatically queued asynchronously with
+a configurable timeout.
+
+== When To Use Rate Limiters
+Rate limiters should be used when the user wishes to allocate a guaranteed 
capacity of the request threadpool to a specific
+request type. Indexing and search requests are mostly competing with each 
other for CPU resources. This becomes especially
+pronounced under high stress in production workloads.
+
+== Rate Limiter Configurations
+The default rate limiter is the search rate limiter. Accordingly, it can be configured in web.xml under initParams for
+SolrRequestFilter.
+
+[source,xml]
+----
+<filter-name>SolrRequestFilter</filter-name>
+----
+
+=== Enable Query Rate Limiter
+Controls enabling of the query rate limiter. Default value is false.
+[source,xml]
+----
+<param-name>isQueryRateLimiterEnabled</param-name>
+----
+[source,xml]
+----
+<param-value>true</param-value>
+----
+
+=== Maximum Number Of Concurrent Requests
+Allows setting the maximum number of concurrent search requests at a given point in time. Default value is 10.
+[source,xml]
+----
+<param-name>maxQueryRequests</param-name>
+----
+[source,xml]
+----
+<param-value>15</param-value>
+----
+
+=== Request Slot Allocation Wait Time
+Wait time in ms for which a request will wait for a slot to be available when all slots are full,
+before the request is put into the wait queue. This allows requests to have a chance to proceed if
+the unavailability of the request slots for this rate limiter is a transient phenomenon. Default value
+is -1, indicating no wait.

Review comment:
   Yes, updated to say the same.
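
For readers following the configuration hunk quoted above, a minimal sketch of how a servlet filter could read these initParams from web.xml; the helper class and its fallback defaults are illustrative, not the PR's actual code:

```java
import javax.servlet.FilterConfig;

final class QueryRateLimiterSettings {
  final boolean enabled;
  final int maxQueryRequests;
  final long waitForSlotAllocationMs;

  QueryRateLimiterSettings(FilterConfig config) {
    // Missing params fall back to the defaults described in the quoted docs.
    enabled = Boolean.parseBoolean(getOrDefault(config, "isQueryRateLimiterEnabled", "false"));
    maxQueryRequests = Integer.parseInt(getOrDefault(config, "maxQueryRequests", "10"));
    waitForSlotAllocationMs = Long.parseLong(getOrDefault(config, "queryWaitForSlotAllocationInMS", "-1"));
  }

  private static String getOrDefault(FilterConfig config, String name, String fallback) {
    String value = config.getInitParameter(name);
    return value == null ? fallback : value;
  }
}
```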





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460689828



##
File path: solr/solr-ref-guide/src/rate-limiters.adoc
##
@@ -0,0 +1,101 @@
+= Request Rate Limiters
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+Solr allows rate limiting per request type. Each request type can be allocated 
a maximum allowed number of concurrent requests
+that can be active. The default rate limiting is implemented for updates and 
searches.
+
+If a request exceeds the request quota, further incoming requests are 
automatically queued asynchronously with
+a configurable timeout.
+
+== When To Use Rate Limiters
+Rate limiters should be used when the user wishes to allocate a guaranteed 
capacity of the request threadpool to a specific
+request type. Indexing and search requests are mostly competing with each 
other for CPU resources. This becomes especially
+pronounced under high stress in production workloads.
+
+== Rate Limiter Configurations
+The default rate limiter is the search rate limiter. Accordingly, it can be configured in web.xml under initParams for
+SolrRequestFilter.
+
+[source,xml]
+----
+<filter-name>SolrRequestFilter</filter-name>
+----
+
+=== Enable Query Rate Limiter
+Controls enabling of the query rate limiter. Default value is false.
+[source,xml]
+----
+<param-name>isQueryRateLimiterEnabled</param-name>
+----
+[source,xml]
+----
+<param-value>true</param-value>
+----
+
+=== Maximum Number Of Concurrent Requests
+Allows setting the maximum number of concurrent search requests at a given point in time. Default value is 10.
+[source,xml]
+----
+<param-name>maxQueryRequests</param-name>
+----
+[source,xml]
+----
+<param-value>15</param-value>
+----
+
+=== Request Slot Allocation Wait Time
+Wait time in ms for which a request will wait for a slot to be available when all slots are full,
+before the request is put into the wait queue. This allows requests to have a chance to proceed if
+the unavailability of the request slots for this rate limiter is a transient phenomenon. Default value
+is -1, indicating no wait.
+[source,xml]
+----
+<param-name>queryWaitForSlotAllocationInMS</param-name>
+----
+[source,xml]
+----
+<param-value>100</param-value>
+----
+
+=== Request Expiration Time
+Time in ms after which a request will expire in the wait queue. Default value is 200 ms.

Review comment:
   Yes. Updated the docs to state the same.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460688956



##
File path: solr/solr-ref-guide/src/rate-limiters.adoc
##
@@ -0,0 +1,101 @@
+= Request Rate Limiters
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+Solr allows rate limiting per request type. Each request type can be allocated 
a maximum allowed number of concurrent requests
+that can be active. The default rate limiting is implemented for updates and 
searches.
+
+If a request exceeds the request quota, further incoming requests are 
automatically queued asynchronously with
+a configurable timeout.
+
+== When To Use Rate Limiters
+Rate limiters should be used when the user wishes to allocate a guaranteed 
capacity of the request threadpool to a specific
+request type. Indexing and search requests are mostly competing with each 
other for CPU resources. This becomes especially
+pronounced under high stress in production workloads.
+
+== Rate Limiter Configurations
+The default rate limiter is the search rate limiter. Accordingly, it can be configured in web.xml under initParams for
+SolrRequestFilter.
+
+[source,xml]
+----
+<filter-name>SolrRequestFilter</filter-name>
+----
+
+=== Enable Query Rate Limiter
+Controls enabling of the query rate limiter. Default value is false.
+[source,xml]
+----
+<param-name>isQueryRateLimiterEnabled</param-name>
+----
+[source,xml]
+----
+<param-value>true</param-value>
+----
+
+=== Maximum Number Of Concurrent Requests
+Allows setting the maximum number of concurrent search requests at a given point in time. Default value is 10.

Review comment:
   Changed to number of cores * 3.
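
Assuming "number of cores" here means CPU cores (an assumption on my part), the new default could be derived roughly like this:

```java
public class DefaultQueryRequestCap {
  public static void main(String[] args) {
    // Illustrative only: tie the default concurrent-search-request cap to the CPU count.
    int defaultMaxQueryRequests = Runtime.getRuntime().availableProcessors() * 3;
    System.out.println("default maxQueryRequests = " + defaultMaxQueryRequests);
  }
}
```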





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460688156



##
File path: solr/solr-ref-guide/src/rate-limiters.adoc
##
@@ -0,0 +1,101 @@
+= Request Rate Limiters
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+Solr allows rate limiting per request type. Each request type can be allocated 
a maximum allowed number of concurrent requests
+that can be active. The default rate limiting is implemented for updates and 
searches.
+
+If a request exceeds the request quota, further incoming requests are 
automatically queued asynchronously with
+a configurable timeout.
+
+== When To Use Rate Limiters
+Rate limiters should be used when the user wishes to allocate a guaranteed 
capacity of the request threadpool to a specific
+request type. Indexing and search requests are mostly competing with each 
other for CPU resources. This becomes especially
+pronounced under high stress in production workloads.
+
+== Rate Limiter Configurations
+The default rate limiter is the search rate limiter. Accordingly, it can be configured in web.xml under initParams for

Review comment:
   Unfortunately, I do not think that SolrDispatchFilter has access to 
solrconfig.xml.





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



-
To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
For additional commands, e-mail: issues-h...@lucene.apache.org



[GitHub] [lucene-solr] atris commented on a change in pull request #1686: SOLR-13528: Implement Request Rate Limiters

2020-07-27 Thread GitBox


atris commented on a change in pull request #1686:
URL: https://github.com/apache/lucene-solr/pull/1686#discussion_r460687788



##
File path: solr/core/src/java/org/apache/solr/servlet/RateLimitManager.java
##
@@ -0,0 +1,153 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements.  See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.solr.servlet;
+
+import javax.servlet.FilterConfig;
+import javax.servlet.http.HttpServletRequest;
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.solr.client.solrj.SolrRequest;
+
+import static org.apache.solr.common.params.CommonParams.SOLR_REQUEST_CONTEXT_PARAM;
+import static org.apache.solr.common.params.CommonParams.SOLR_REQUEST_TYPE_PARAM;
+
+/**
+ * This class is responsible for managing rate limiting per request type. Rate 
limiters
+ * can be registered with this class against a corresponding type. There can 
be only one
+ * rate limiter associated with a request type.
+ *
+ * The actual rate limiting and the limits should be implemented in the 
corresponding RequestRateLimiter
+ * implementation. RateLimitManager is responsible for the orchestration but 
not the specifics of how the
+ * rate limiting is being done for a specific request type.
+ */
+public class RateLimitManager {
+  public final static int DEFAULT_CONCURRENT_REQUESTS = 10;
+  public final static long DEFAULT_EXPIRATION_TIME_INMS = 300;
+  public final static long DEFAULT_SLOT_ACQUISITION_TIMEOUT_MS = -1;
+
+  private final Map<String, RequestRateLimiter> requestRateLimiterMap;
+
+  public RateLimitManager() {
+    this.requestRateLimiterMap = new HashMap<>();
+  }
+
+  // Handles an incoming request. The main orchestration code path, this method will
+  // identify which (if any) rate limiter can handle this request. Internal requests will not be
+  // rate limited.
+  // Returns true if the request is accepted for processing, false if it should be rejected.
+
+  // NOTE: It is up to the specific rate limiter implementation to handle queuing of rejected requests.
+  public boolean handleRequest(HttpServletRequest request) throws InterruptedException {
+    String requestContext = request.getHeader(SOLR_REQUEST_CONTEXT_PARAM);
+    String typeOfRequest = request.getHeader(SOLR_REQUEST_TYPE_PARAM);
+
+    if (typeOfRequest == null) {
+      // Cannot determine if this request should be throttled
+      return true;
+    }
+
+    // Do not throttle internal requests
+    if (requestContext != null && requestContext.equals(SolrRequest.SolrClientContext.SERVER.toString())) {
+      return true;
+    }
+
+    RequestRateLimiter requestRateLimiter = requestRateLimiterMap.get(typeOfRequest);
+
+    if (requestRateLimiter == null) {
+      // No request rate limiter for this request type
+      return true;
+    }
+
+    return requestRateLimiter.handleRequest(request);
+  }
+
+  // Resume a pending request from one of the registered rate limiters.
+  // The current model is round robin -- iterate over the list and get a pending request and resume it.
+
+  // TODO: This should be a priority queue based model
+  public void resumePendingRequest(HttpServletRequest request) {
+    String typeOfRequest = request.getHeader(SOLR_REQUEST_TYPE_PARAM);
+
+    RequestRateLimiter previousRequestRateLimiter = requestRateLimiterMap.get(typeOfRequest);
+
+    if (previousRequestRateLimiter == null) {
+      // No rate limiter for this request type
+      return;
+    }
+
+    // Give preference to the previous request's rate limiter
+    if (previousRequestRateLimiter.resumePendingOperation()) {
+      return;
+    }
+
+    for (Map.Entry<String, RequestRateLimiter> currentEntry : requestRateLimiterMap.entrySet()) {

Review comment:
   This is work stealing -- a thread first tries to get work from its own 
request rate limiter and, if none is present, lets the thread be used for a 
different request type. This is to allow smoothing out of skews. Added a flag 
to enable or disable this behaviour.
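
A rough sketch of the work-stealing resume described above, with a hypothetical isWorkStealingEnabled flag standing in for the new flag mentioned in the comment:

```java
import java.util.Map;

final class WorkStealingResumeSketch {
  interface Limiter {
    boolean resumePendingOperation();
  }

  // Prefer the completing request's own limiter; if it has nothing pending
  // (and stealing is enabled), let another request type reuse the freed slot.
  static void resumePendingRequest(String requestType, Map<String, Limiter> limiters, boolean isWorkStealingEnabled) {
    Limiter own = limiters.get(requestType);
    if (own != null && own.resumePendingOperation()) {
      return;
    }
    if (!isWorkStealingEnabled) {
      return;
    }
    for (Map.Entry<String, Limiter> entry : limiters.entrySet()) {
      if (entry.getValue() != own && entry.getValue().resumePendingOperation()) {
        return;
      }
    }
  }
}
```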





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



--