[jira] [Updated] (SOLR-12449) Response /autoscaling/diagnostics shows improper json

2018-06-04 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12449:
--
Environment: (was: the value for the key "replica" is a serialized json 
itself

{{"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n \"NRT\":2,\n \"PULL\":0,\n \"TLOG\":0,\n \"count\":2}"}}
{code:java}
{
  "violations":[{
  "collection":"c1",
  "shard":"s1",
  "tagKey":"8983",
  "violation":{

"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
\"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
"delta":-2},
  "clause":{
"replica":0,
"shard":"#EACH",
"port":"8983",
"collection":"c1"}},
{
  "collection":"c1",
  "shard":"s2",
  "tagKey":"8983",
  "violation":{

"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
\"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
"delta":-2},
  "clause":{
"replica":0,
"shard":"#EACH",
"port":"8983",
"collection":"c1"}}]}
{code})

> Response /autoscaling/diagnostics shows improper json
> -
>
> Key: SOLR-12449
> URL: https://issues.apache.org/jira/browse/SOLR-12449
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> The value for the key "replica" is itself a serialized JSON string
> {{"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n \"NRT\":2,\n \"PULL\":0,\n \"TLOG\":0,\n \"count\":2}"}}
> {code:java}
> {
>   "violations":[{
>   "collection":"c1",
>   "shard":"s1",
>   "tagKey":"8983",
>   "violation":{
> 
> "replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
> \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
> "delta":-2},
>   "clause":{
> "replica":0,
> "shard":"#EACH",
> "port":"8983",
> "collection":"c1"}},
> {
>   "collection":"c1",
>   "shard":"s2",
>   "tagKey":"8983",
>   "violation":{
> 
> "replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
> \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
> "delta":-2},
>   "clause":{
> "replica":0,
> "shard":"#EACH",
> "port":"8983",
> "collection":"c1"}}]}
> {code}
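Because of the bug shown above, a client that wants the replica counts has to strip the Java class-name prefix before it can parse the embedded JSON. A minimal Python sketch of such a workaround (the `parse_replica` helper is illustrative, not part of SolrJ):

```python
import json

# Raw "replica" value as it appears in the diagnostics output above:
# a Java class-name prefix followed by an embedded JSON object.
raw = ('org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:'
       '{\n  "NRT":2,\n  "PULL":0,\n  "TLOG":0,\n  "count":2}')

def parse_replica(value):
    """Strip the class-name prefix, then parse the embedded JSON payload."""
    _, _, payload = value.partition(':{')
    return json.loads('{' + payload)

counts = parse_replica(raw)
print(counts["NRT"], counts["count"])  # -> 2 2
```

A properly serialized response would make this workaround unnecessary, since "replica" would already be a JSON object.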



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8341) Record soft deletes in SegmentCommitInfo

2018-06-04 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501310#comment-16501310
 ] 

Lucene/Solr QA commented on LUCENE-8341:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  5s{color} 
| {color:red} LUCENE-8341 does not apply to master. Rebase required? Wrong 
Branch? See 
https://wiki.apache.org/lucene-java/HowToContribute#Contributing_your_work for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | LUCENE-8341 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926318/LUCENE-8341.patch |
| Console output | 
https://builds.apache.org/job/PreCommit-LUCENE-Build/26/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



>  Record soft deletes in SegmentCommitInfo
> -
>
> Key: LUCENE-8341
> URL: https://issues.apache.org/jira/browse/LUCENE-8341
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8341.patch, LUCENE-8341.patch, LUCENE-8341.patch, 
> LUCENE-8341.patch
>
>
>  This change adds the number of documents that are soft-deleted but
> not hard-deleted to the segment commit info. This is the last step
> towards making soft deletes as powerful as hard deletes, since the
> number of documents can now be read from commit points without opening a
> full-blown reader. This also allows merge policies to make decisions
> without requiring an NRT reader to get the relevant statistics. This
> change doesn't enforce any particular field to be used for soft deletes,
> and the statistic
> is maintained per segment.
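As a rough model of the idea (deliberately simplified Python, not Lucene's actual classes or field names), recording a per-segment soft-delete count in the commit metadata lets live-document counts be computed without opening a reader:

```python
from dataclasses import dataclass

# Hypothetical model: each segment commit records its soft-delete
# count alongside the hard-delete count, so totals can be derived
# from commit metadata alone, without a full-blown reader.
@dataclass
class SegmentCommitInfo:
    max_doc: int         # documents in the segment
    del_count: int       # hard-deleted documents
    soft_del_count: int  # soft-deleted but not hard-deleted documents

def live_docs(commit_infos):
    """Live-document count across a commit point, reader-free."""
    return sum(s.max_doc - s.del_count - s.soft_del_count
               for s in commit_infos)

segments = [SegmentCommitInfo(100, 5, 10), SegmentCommitInfo(50, 0, 2)]
print(live_docs(segments))  # -> 133
```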






[jira] [Created] (SOLR-12449) Response /autoscaling/diagnostics shows improper json

2018-06-04 Thread Noble Paul (JIRA)
Noble Paul created SOLR-12449:
-

 Summary: Response /autoscaling/diagnostics shows improper json
 Key: SOLR-12449
 URL: https://issues.apache.org/jira/browse/SOLR-12449
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
 Environment: the value for the key "replica" is a serialized json 
itself

{{"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n \"NRT\":2,\n \"PULL\":0,\n \"TLOG\":0,\n \"count\":2}"}}
{code:java}
{
  "violations":[{
  "collection":"c1",
  "shard":"s1",
  "tagKey":"8983",
  "violation":{

"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
\"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
"delta":-2},
  "clause":{
"replica":0,
"shard":"#EACH",
"port":"8983",
"collection":"c1"}},
{
  "collection":"c1",
  "shard":"s2",
  "tagKey":"8983",
  "violation":{

"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
\"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
"delta":-2},
  "clause":{
"replica":0,
"shard":"#EACH",
"port":"8983",
"collection":"c1"}}]}
{code}
Reporter: Noble Paul
Assignee: Noble Paul









[jira] [Commented] (SOLR-12209) add Paging Streaming Expression

2018-06-04 Thread Lucene/Solr QA (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12209?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501307#comment-16501307
 ] 

Lucene/Solr QA commented on SOLR-12209:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
57s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Release audit (RAT) {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Check forbidden APIs {color} | 
{color:green}  1m 51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} Validate source patterns {color} | 
{color:green}  1m 52s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
39s{color} | {color:green} solrj in the patch passed. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 15m 30s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | SOLR-12209 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926316/SOLR-12209.patch |
| Optional Tests |  compile  javac  unit  ratsources  checkforbiddenapis  
validatesourcepatterns  |
| uname | Linux lucene1-us-west 3.13.0-88-generic #135-Ubuntu SMP Wed Jun 8 
21:10:42 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | ant |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-SOLR-Build/sourcedir/dev-tools/test-patch/lucene-solr-yetus-personality.sh
 |
| git revision | master / f9f5e83 |
| ant | version: Apache Ant(TM) version 1.9.3 compiled on April 8 2014 |
| Default Java | 1.8.0_172 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-SOLR-Build/112/testReport/ |
| modules | C: solr/solrj U: solr/solrj |
| Console output | 
https://builds.apache.org/job/PreCommit-SOLR-Build/112/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> add Paging Streaming Expression
> ---
>
> Key: SOLR-12209
> URL: https://issues.apache.org/jira/browse/SOLR-12209
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: streaming expressions
>Reporter: mosh
>Priority: Major
> Attachments: SOLR-12209.patch, SOLR-12209.patch, SOLR-12209.patch
>
>
> Currently, the closest streaming expression that allows some sort of 
> pagination is top.
> I propose we add a new streaming expression, based on the 
> RankedStream class, that adds an offset to the stream. Currently this can only 
> be done in code, by reading the stream until the desired offset is reached.
> The new expression would be used as such:
> {{paging(rows=3, search(collection1, q="*:*", qt="/export", 
> fl="id,a_s,a_i,a_f", sort="a_f desc, a_i desc"), sort="a_f asc, a_i asc", 
> start=100)}}
> This will offset the returned stream by 100 documents.
>  
> [~joel.bernstein] what do you think?
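The offset behaviour described above (reading and discarding documents until the desired offset is reached, then returning a page) can be sketched in a few lines; this `paging` function is an illustrative stand-in, not the proposed streaming-expression implementation:

```python
from itertools import islice

def paging(stream, start, rows):
    """Skip the first `start` documents of a sorted stream,
    then return at most `rows` documents (offset + limit)."""
    return list(islice(stream, start, start + rows))

# e.g. documents 100..102 of a 200-document stream
docs = paging(iter(range(200)), start=100, rows=3)
print(docs)  # -> [100, 101, 102]
```

Note that, as with the proposal, the skipped documents still have to be read from the underlying stream; the offset only affects what is emitted.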






[jira] [Updated] (SOLR-12449) Response /autoscaling/diagnostics shows improper json

2018-06-04 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12449?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-12449:
--
Description: 
The value for the key "replica" is itself a serialized JSON string

{{"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n \"NRT\":2,\n \"PULL\":0,\n \"TLOG\":0,\n \"count\":2}"}}
{code:java}
{
  "violations":[{
  "collection":"c1",
  "shard":"s1",
  "tagKey":"8983",
  "violation":{

"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
\"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
"delta":-2},
  "clause":{
"replica":0,
"shard":"#EACH",
"port":"8983",
"collection":"c1"}},
{
  "collection":"c1",
  "shard":"s2",
  "tagKey":"8983",
  "violation":{

"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
\"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
"delta":-2},
  "clause":{
"replica":0,
"shard":"#EACH",
"port":"8983",
"collection":"c1"}}]}
{code}

> Response /autoscaling/diagnostics shows improper json
> -
>
> Key: SOLR-12449
> URL: https://issues.apache.org/jira/browse/SOLR-12449
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: AutoScaling
> Environment: the value for the key "replica" is a serialized json 
> itself
> {{"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n \"NRT\":2,\n \"PULL\":0,\n \"TLOG\":0,\n \"count\":2}"}}
> {code:java}
> {
>   "violations":[{
>   "collection":"c1",
>   "shard":"s1",
>   "tagKey":"8983",
>   "violation":{
> 
> "replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
> \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
> "delta":-2},
>   "clause":{
> "replica":0,
> "shard":"#EACH",
> "port":"8983",
> "collection":"c1"}},
> {
>   "collection":"c1",
>   "shard":"s2",
>   "tagKey":"8983",
>   "violation":{
> 
> "replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
> \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
> "delta":-2},
>   "clause":{
> "replica":0,
> "shard":"#EACH",
> "port":"8983",
> "collection":"c1"}}]}
> {code}
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
>
> The value for the key "replica" is itself a serialized JSON string
> {{"replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n \"NRT\":2,\n \"PULL\":0,\n \"TLOG\":0,\n \"count\":2}"}}
> {code:java}
> {
>   "violations":[{
>   "collection":"c1",
>   "shard":"s1",
>   "tagKey":"8983",
>   "violation":{
> 
> "replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
> \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
> "delta":-2},
>   "clause":{
> "replica":0,
> "shard":"#EACH",
> "port":"8983",
> "collection":"c1"}},
> {
>   "collection":"c1",
>   "shard":"s2",
>   "tagKey":"8983",
>   "violation":{
> 
> "replica":"org.apache.solr.client.solrj.cloud.autoscaling.ReplicaCount:{\n  
> \"NRT\":2,\n  \"PULL\":0,\n  \"TLOG\":0,\n  \"count\":2}",
> "delta":-2},
>   "clause":{
> "replica":0,
> "shard":"#EACH",
> "port":"8983",
> "collection":"c1"}}]}
> {code}






[jira] [Assigned] (SOLR-12444) Updating a cluster policy fails

2018-06-04 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12444?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul reassigned SOLR-12444:
-

Assignee: Noble Paul

> Updating a cluster policy fails
> ---
>
> Key: SOLR-12444
> URL: https://issues.apache.org/jira/browse/SOLR-12444
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public (Default Security Level. Issues are Public)
>  Components: AutoScaling
>Reporter: Varun Thacker
>Assignee: Noble Paul
>Priority: Major
>
> If I create a policy like this
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-cluster-policy": [
> {"cores": "<4","node": "#ANY"}
> ]
> }' http://localhost:8983/solr/admin/autoscaling{code}
> Then I can never update this policy subsequently.
> To reproduce, try changing the policy to:
> {code:java}
> curl -X POST -H 'Content-type:application/json' --data-binary '{
> "set-cluster-policy": [
> {"cores": "<3","node": "#ANY"}
> ]
> }' http://localhost:8983/solr/admin/autoscaling{code}
> The policy will never change. The workaround is to clear the policy by 
> sending an empty array and then re-creating it.
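The workaround can be scripted: first POST an empty policy array to clear the stuck policy, then POST the new one. A sketch of the two request bodies, assuming the endpoint and payload shapes from the curl commands above:

```python
import json

# Workaround for the stuck cluster policy: both bodies would be
# POSTed (in order) to http://localhost:8983/solr/admin/autoscaling
# with Content-Type: application/json.
clear_body = json.dumps({"set-cluster-policy": []})
new_body = json.dumps(
    {"set-cluster-policy": [{"cores": "<3", "node": "#ANY"}]})

print(clear_body)
print(new_body)
```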






[JENKINS] Lucene-Solr-repro - Build # 758 - Still Unstable

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/758/

[...truncated 44 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-7.x/633/consoleText

[repro] Revision: 88400a14716f163715eac82a35be90e6e3718208

[repro] Repro line:  ant test  -Dtestcase=TestComputePlanAction 
-Dtests.method=testNodeAdded -Dtests.seed=B235FFDC6C682C40 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=hi-IN -Dtests.timezone=Europe/Gibraltar 
-Dtests.asserts=true -Dtests.file.encoding=ISO-8859-1

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
f9f5e837450e082ae7e1a82a0693760af7485a1b
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 88400a14716f163715eac82a35be90e6e3718208

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   TestComputePlanAction
[repro] ant compile-test

[...truncated 3317 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.TestComputePlanAction" -Dtests.showOutput=onerror  
-Dtests.seed=B235FFDC6C682C40 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=hi-IN -Dtests.timezone=Europe/Gibraltar -Dtests.asserts=true 
-Dtests.file.encoding=ISO-8859-1

[...truncated 2427 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestComputePlanAction
[repro] git checkout f9f5e837450e082ae7e1a82a0693760af7485a1b

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1559 - Still Unstable

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1559/

1 tests failed.
FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.endToEndTest 
{seed=[637D2E1F06D32289:792B4129E661058B]}

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_637D2E1F06D32289-001/init-core-data-001/tlog/tlog.003,
 tlog size: 5537

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/checkout/solr/build/solr-core/test/J0/temp/solr.update.MaxSizeAutoCommitTest_637D2E1F06D32289-001/init-core-data-001/tlog/tlog.003,
 tlog size: 5537
at 
__randomizedtesting.SeedInfo.seed([637D2E1F06D32289:792B4129E661058B]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.endToEndTest(MaxSizeAutoCommitTest.java:231)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

Re: discuss: stop adding 'via' from CHANGES.txt entries (take two)

2018-06-04 Thread Erick Erickson
Robert:

I don't have strong feelings either way. Personally I use "via" to
indicate that I didn't have much to do with the hard parts, I just was
the "committer fingers". If I've been more involved I just add my name
as a co-contributor (last). Basically it's a question of "how much
credit do I think I deserve?". If very little I use "via". If I'm more
involved, I just add a comma and my name.

But that's a nuance that I suppose varies by person, so I'm happy
either way. Your point about the tighter integration with Git is well
taken; we can trace things back to whoever committed things pretty
easily.

I'm -1 on having to remember to go to another place like a Wiki page;
it's too easy to forget. And I don't think we really need it.

Erick


On Mon, Jun 4, 2018 at 5:47 PM, Robert Muir  wrote:
> I raised this issue a few years ago, and no consensus was reached [1]
>
> I'm asking if we can take the time to revisit the issue. Back then it
> was subversion days, and you had "patch-uploaders" and "contributors".
> With git now, I believe the situation is even a bit more extreme,
> because the committer is the contributor and the lucene "committer"
> was really the "pusher".
>
> On the other hand, there were some reasons against removing this
> before. In particular some mentioned that it conveyed meaning about
> who might be the best person to ping about a particular area of the
> code. If this is still the case, I'd ask that we discuss alternative
> ways that it could be accomplished (such as wiki page perhaps
> linked-to HowToContribute that ppl can edit).
>
> I wrote a new summary/argument inline, but see the linked thread for
> the previous discussion:
>
>
> In the past CHANGES.txt entries from a contributor have also had the
> name of the committer with a 'via' entry.
>
> e.g.:
>
> LUCENE-1234: optimized FooBar. (Jane Doe via Joe Schmoe).
>
> I propose we stop adding the committer name (via Joe Schmoe). It seems
> to diminish the value of the contribution. It reminds me of a
> professor adding a second author by default or something like that. If
> someone really wants to know who committed the change, I think it's
> fair that they look at the version control history?
>
> 1. 
> http://mail-archives.apache.org/mod_mbox/lucene-dev/201206.mbox/%3CCAOdYfZW65MXrzyRPsvBD0C6c4X%2BLuQX4oVec%3DyR_PCPgTQrnhQ%40mail.gmail.com%3E
>
>




[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-06-04 Thread Cao Manh Dat (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501256#comment-16501256
 ] 

Cao Manh Dat commented on SOLR-12297:
-

bq. One thing that may interest you that I would love help on is configuring 
our Jetty instance for Http/2 as well as Http/1.1. Currently I'm just setting 
everything up for JettySolrRunner and our core tests.

I think we should have a plan to move from Solr being loaded by Jetty to Solr 
booting up Jetty and doing all the configuration itself. This will remove a 
lot of the differences between running Solr from {{bin/solr}} and testing Solr 
using JettySolrRunner.

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public (Default Security Level. Issues are Public)
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support, as well as HTTP/2 compatibility with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22178 - Still Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22178/
Java: 64bit/jdk-10.0.1 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader

Error Message:
Doc with id=4 not found in 
https://127.0.0.1:33935/solr/outOfSyncReplicasCannotBecomeLeader-false due to: 
Path not found: /id; rsp={doc=null}

Stack Trace:
java.lang.AssertionError: Doc with id=4 not found in 
https://127.0.0.1:33935/solr/outOfSyncReplicasCannotBecomeLeader-false due to: 
Path not found: /id; rsp={doc=null}
at 
__randomizedtesting.SeedInfo.seed([4769DBF7CC24CD38:3982FBE70F43C202]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.cloud.TestCloudConsistency.assertDocExists(TestCloudConsistency.java:252)
at 
org.apache.solr.cloud.TestCloudConsistency.assertDocsExistInAllReplicas(TestCloudConsistency.java:236)
at 
org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:129)
at 
org.apache.solr.cloud.TestCloudConsistency.testOutOfSyncReplicasCannotBecomeLeader(TestCloudConsistency.java:92)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-8332) New ConcatenateGraphTokenStream (move/rename CompletionTokenStream)

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501239#comment-16501239
 ] 

ASF subversion and git services commented on LUCENE-8332:
-

Commit 9b61121ffb65d59f49429aba99b1c1b641ddb3c6 in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9b61121 ]

LUCENE-8332: New ConcatenateGraphFilter (from CompletionTokenStream).
* Added a test for FingerprintFilter and clarified FF's end condition.

(cherry picked from commit f9f5e83)


> New ConcatenateGraphTokenStream (move/rename CompletionTokenStream)
> ---
>
> Key: LUCENE-8332
> URL: https://issues.apache.org/jira/browse/LUCENE-8332
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8332.patch, LUCENE-8332.patch, LUCENE-8332.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Let's move the CompletionTokenStream from the suggest module into the 
> analysis module and rename it ConcatenateGraphTokenStream. See the comments in 
> LUCENE-8323 that led to this idea. Such a TokenStream (or TokenFilter?) has 
> several uses:
>  * for the suggest module
>  * by the SolrTextTagger for NER/ERD use cases – SOLR-12376
>  * for doing complete match search efficiently
> It will need a factory, a TokenFilterFactory, even though we don't have a 
> TokenFilter-based subclass of TokenStream.
> It appears there is no back-compat concern in it suddenly disappearing from 
> the suggest module, as it's marked experimental and only seems to be public 
> now due to some technicality (it has package-level constructors).
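A minimal, hedged sketch of the idea (not Lucene's actual implementation, and the separator value is an assumption): for a linear token stream, a concatenate-graph stage joins all tokens into one output token, which is what makes efficient complete-match search possible. A real token graph with synonyms would yield one concatenated token per path through the graph.

```java
import java.util.List;

public class ConcatenateSketch {
    // Assumed unprintable separator, in the spirit of CompletionTokenStream.
    static final char SEP = '\u001F';

    // Join a linear sequence of tokens into a single concatenated token.
    static String concatenate(List<String> tokens) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < tokens.size(); i++) {
            if (i > 0) sb.append(SEP);
            sb.append(tokens.get(i));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String out = concatenate(List.of("new", "york", "city"));
        // The whole analyzed phrase is now a single indexable term.
        System.out.println(out.replace(SEP, '|')); // prints new|york|city
    }
}
```

Indexing the concatenated token lets a "complete match" query be a single term lookup instead of a positional phrase query.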



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8332) New ConcatenateGraphTokenStream (move/rename CompletionTokenStream)

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501229#comment-16501229
 ] 

ASF subversion and git services commented on LUCENE-8332:
-

Commit f9f5e837450e082ae7e1a82a0693760af7485a1b in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f9f5e83 ]

LUCENE-8332: New ConcatenateGraphFilter (from CompletionTokenStream).
* Added a test for FingerprintFilter and clarified FF's end condition.


> New ConcatenateGraphTokenStream (move/rename CompletionTokenStream)
> ---
>
> Key: LUCENE-8332
> URL: https://issues.apache.org/jira/browse/LUCENE-8332
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8332.patch, LUCENE-8332.patch, LUCENE-8332.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Let's move the CompletionTokenStream from the suggest module into the 
> analysis module and rename it ConcatenateGraphTokenStream. See the comments in 
> LUCENE-8323 that led to this idea. Such a TokenStream (or TokenFilter?) has 
> several uses:
>  * for the suggest module
>  * by the SolrTextTagger for NER/ERD use cases – SOLR-12376
>  * for doing complete match search efficiently
> It will need a factory, a TokenFilterFactory, even though we don't have a 
> TokenFilter-based subclass of TokenStream.
> It appears there is no back-compat concern in it suddenly disappearing from 
> the suggest module, as it's marked experimental and only seems to be public 
> now due to some technicality (it has package-level constructors).






[jira] [Commented] (SOLR-12387) Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, pullReplicas

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501223#comment-16501223
 ] 

ASF subversion and git services commented on SOLR-12387:


Commit 78617f992f03243c6b99a033b5609680418ddb83 in lucene-solr's branch 
refs/heads/branch_7x from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=78617f9 ]

SOLR-12387: added documentation


> Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, 
> pullReplicas
> -
>
> Key: SOLR-12387
> URL: https://issues.apache.org/jira/browse/SOLR-12387
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12387.patch
>
>
> These will be cluster properties; the commands can omit them, and the 
> values will be picked up from the cluster properties.
>  
> The cluster property names are:
> {code}
>  "collectionDefaults": {
>    "numShards":1,
>    "nrtReplicas":1,
>    "tlogReplicas":1
>  }
> {code}
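The fallback described above can be sketched as a simple lookup chain (a hedged illustration with hypothetical names, not Solr's actual code): an explicit request parameter wins, then the collectionDefaults cluster property, then a hard-coded default.

```java
import java.util.HashMap;
import java.util.Map;

public class CollectionDefaultsSketch {
    // Resolve a collection-creation parameter: request value, else the
    // cluster-wide default, else the built-in fallback.
    static int resolve(String key, Map<String, Integer> request,
                       Map<String, Integer> clusterDefaults, int fallback) {
        if (request.containsKey(key)) {
            return request.get(key);
        }
        return clusterDefaults.getOrDefault(key, fallback);
    }

    public static void main(String[] args) {
        Map<String, Integer> clusterDefaults = Map.of("numShards", 2, "nrtReplicas", 3);
        Map<String, Integer> request = new HashMap<>();
        request.put("nrtReplicas", 1); // explicitly set, wins over the default
        System.out.println(resolve("numShards", request, clusterDefaults, 1));   // 2
        System.out.println(resolve("nrtReplicas", request, clusterDefaults, 1)); // 1
    }
}
```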






[jira] [Commented] (SOLR-12387) Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, pullReplicas

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501220#comment-16501220
 ] 

ASF subversion and git services commented on SOLR-12387:


Commit f27d8a2dbfee4ba75b7bada786328a4077865d5b in lucene-solr's branch 
refs/heads/master from [~noble.paul]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=f27d8a2 ]

SOLR-12387: added documentation


> Have cluster-wide defaults for numShards, nrtReplicas, tlogReplicas, 
> pullReplicas
> -
>
> Key: SOLR-12387
> URL: https://issues.apache.org/jira/browse/SOLR-12387
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Attachments: SOLR-12387.patch
>
>
> These will be cluster properties; the commands can omit them, and the 
> values will be picked up from the cluster properties.
>  
> The cluster property names are:
> {code}
>  "collectionDefaults": {
>    "numShards":1,
>    "nrtReplicas":1,
>    "tlogReplicas":1
>  }
> {code}






[jira] [Commented] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-04 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501217#comment-16501217
 ] 

David Smiley commented on LUCENE-8344:
--

bq. Considering that re-build should be trivial in the AnalyzingSuggester, we 
could simply consider the fix a breaking change and discuss if it's acceptable 
to backport to 7x ?

That gets my vote!

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. the FuzzySuggester subclass), the NRT Document Suggester, and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automaton even though it was configured not to.
> I'm filing this issue separately from LUCENE-8332, where I first found it.  
> The fix is very simple, but I'm concerned about back-compat ramifications, so 
> I'm filing it on its own.  I'll attach a patch to show the problem.
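The effect of the flag can be illustrated with a small, hedged model (hypothetical names, not TokenStreamToAutomaton's real code): positions are derived from position increments, and with preservePositionIncrements=false every increment should be clamped to 1 so stopword holes disappear, including a trailing increment recorded after the last token.

```java
import java.util.ArrayList;
import java.util.List;

public class TrailingPosIncSketch {
    // Assign positions from increments; when preserve is false, collapse
    // any increment > 1 (a stopword "hole") down to 1.
    static List<Integer> positions(int[] posIncs, boolean preserve) {
        List<Integer> out = new ArrayList<>();
        int pos = -1;
        for (int inc : posIncs) {
            pos += preserve ? inc : Math.min(inc, 1);
            out.add(pos);
        }
        return out;
    }

    public static void main(String[] args) {
        // "the dog barks" with "the" removed by a stop filter: the first
        // surviving token carries posInc=2.
        System.out.println(positions(new int[]{2, 1}, true));  // [1, 2]
        System.out.println(positions(new int[]{2, 1}, false)); // [0, 1]
        // The bug described above is that a *trailing* increment (tracked
        // separately at end()) skipped this clamping step.
        int trailingInc = 2;
        System.out.println(Math.min(trailingInc, 1)); // clamped value: 1
    }
}
```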






[jira] [Resolved] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-06-04 Thread Robert Muir (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-7960.
-
   Resolution: Fixed
Fix Version/s: master (8.0)
   7.4

Thank you [~iwesp] !

> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Assignee: Robert Muir
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch, LUCENE-7960.patch, 
> LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.
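A hedged sketch of the proposed option (the committed patches ultimately call it preserveOriginal; "keepShortTerms" was the name floated here): emit every n-gram in the [min,max] range and, when the flag is set, also keep a token whose length falls outside that range instead of dropping it.

```java
import java.util.ArrayList;
import java.util.List;

public class NGramSketch {
    // Generate n-grams of sizes min..max; optionally keep the original
    // term when its length is outside the gram-size range.
    static List<String> ngrams(String term, int min, int max, boolean preserveOriginal) {
        List<String> out = new ArrayList<>();
        for (int n = min; n <= max; n++) {
            for (int i = 0; i + n <= term.length(); i++) {
                out.add(term.substring(i, i + n));
            }
        }
        if (preserveOriginal && (term.length() < min || term.length() > max)) {
            out.add(term); // short (or long) term survives instead of vanishing
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(ngrams("ab", 3, 4, false)); // []
        System.out.println(ngrams("ab", 3, 4, true));  // [ab]
    }
}
```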






[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501211#comment-16501211
 ] 

ASF subversion and git services commented on LUCENE-7960:
-

Commit 5c6a49b13f47789c828995f747ec541810bdd0b4 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5c6a49b ]

LUCENE-7960: remove deprecations


> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Assignee: Robert Muir
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch, LUCENE-7960.patch, 
> LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.






[jira] [Created] (LUCENE-8348) Remove [Edge]NgramTokenizer min/max defaults consistent with Filter

2018-06-04 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-8348:
---

 Summary: Remove [Edge]NgramTokenizer min/max defaults consistent 
with Filter
 Key: LUCENE-8348
 URL: https://issues.apache.org/jira/browse/LUCENE-8348
 Project: Lucene - Core
  Issue Type: Task
  Components: modules/analysis
 Environment: LUCENE-7960 fixed a good deal of trappiness here for the 
token filters: there are no longer ridiculous default min/max values such as 1,2.

Also, the javadocs were enhanced to present a clear warning about using large 
ranges: it seems to surprise people that min=small, max=huge eats up a ton of 
resources, but it's really like creating (huge-small) separate n-gram indexes, 
so of course it's expensive.

Finally, it keeps the typical, more efficient fixed n-gram case easy, versus 
forcing someone to use a min=X,max=X range, which is unintuitive.

We should improve the tokenizers in the same way.
Reporter: Robert Muir
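The resource warning above can be made concrete with a back-of-the-envelope count (a hedged illustration, not Lucene code): a single term of length L produces L-n+1 grams of each size n, so a wide [min,max] range really does behave like (max-min+1) stacked n-gram indexes.

```java
public class NGramCostSketch {
    // Count the n-grams a term of the given length produces over [min,max].
    static long gramCount(int len, int min, int max) {
        long total = 0;
        for (int n = min; n <= Math.min(max, len); n++) {
            total += len - n + 1; // grams of size n in a term of length len
        }
        return total;
    }

    public static void main(String[] args) {
        // Fixed size: one "index" worth of grams.
        System.out.println(gramCount(10, 3, 3));  // 8
        // Wide range: every gram size from 1 to 10, an order of magnitude more.
        System.out.println(gramCount(10, 1, 10)); // 55
    }
}
```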









[jira] [Commented] (LUCENE-7690) TestSimpleTextPointsFormat.testWithExceptions() failure

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501162#comment-16501162
 ] 

ASF subversion and git services commented on LUCENE-7690:
-

Commit 71e2a681235447a08aa4e9e9c3a916df386d1de4 in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=71e2a68 ]

LUCENE-7690: Add preserveOriginal option to the NGram and EdgeNGram filters


> TestSimpleTextPointsFormat.testWithExceptions() failure
> ---
>
> Key: LUCENE-7690
> URL: https://issues.apache.org/jira/browse/LUCENE-7690
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>Priority: Major
> Fix For: 6.5, 7.0
>
>
> Reproducing branch_6x seed from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/690/]:
> {noformat}
>[junit4] Suite: 
> org.apache.lucene.codecs.simpletext.TestSimpleTextPointsFormat
>[junit4] IGNOR/A 0.02s J0 | TestSimpleTextPointsFormat.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleTextPointsFormat -Dtests.method=testWithExceptions 
> -Dtests.seed=CCE1E867577CFFF6 -Dtests.slow=true -Dtests.locale=uk-UA 
> -Dtests.timezone=Asia/Qatar -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.93s J0 | TestSimpleTextPointsFormat.testWithExceptions 
> <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot complete forceMerge
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCE1E867577CFFF6:6EB2741BD8F2B00C]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1931)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1881)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:429)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:701)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.testWithExceptions(BasePointsFormatTestCase.java:224)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: org.apache.lucene.index.CorruptIndexException: 
> Problem reading index from 
> MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658) 
> (resource=MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658))
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:140)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4293)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_0.inf)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:575)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:744)
>[junit4]>  at 
> org.apache.lucene.store.Directory.openChecksumInput(Directory.java:137)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072)
>[junit4]>  at 
> org.apache.lucene.codecs.simpletext.SimpleTextFieldInfosFormat.read(SimpleTextFieldInfosFormat.java:73)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:107)
>[junit4]>  ... 7 more
>[junit4] IGNOR/A 0.01s J0 | 

[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501163#comment-16501163
 ] 

ASF subversion and git services commented on LUCENE-7960:
-

Commit 98bf43b3da5131f0d27c747ac8bfbe28945cc922 in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=98bf43b ]

LUCENE-7960: Add preserveOriginal option to the NGram and EdgeNGram filters

(this is a correction of the issue number in both the CHANGES.txt and the 
commit message, sorry for the noise).


> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Assignee: Robert Muir
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch, LUCENE-7960.patch, 
> LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_172) - Build # 2054 - Still Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2054/
Java: 64bit/jdk1.8.0_172 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.cdcr.CdcrBidirectionalTest.testBiDir

Error Message:
Captured an uncaught exception in thread: Thread[id=35047, 
name=cdcr-replicator-11158-thread-1, state=RUNNABLE, 
group=TGRP-CdcrBidirectionalTest]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=35047, name=cdcr-replicator-11158-thread-1, 
state=RUNNABLE, group=TGRP-CdcrBidirectionalTest]
Caused by: java.lang.AssertionError
at __randomizedtesting.SeedInfo.seed([6203A765DC083CB6]:0)
at org.apache.solr.update.CdcrUpdateLog$CdcrLogReader.forwardSeek(CdcrUpdateLog.java:611)
at org.apache.solr.handler.CdcrReplicator.run(CdcrReplicator.java:125)
at org.apache.solr.handler.CdcrReplicatorScheduler.lambda$null$0(CdcrReplicatorScheduler.java:81)
at org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 14680 lines...]
   [junit4] Suite: org.apache.solr.cloud.cdcr.CdcrBidirectionalTest
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_6203A765DC083CB6-001/init-core-data-001
   [junit4]   2> 2057155 INFO  
(SUITE-CdcrBidirectionalTest-seed#[6203A765DC083CB6]-worker) [] 
o.a.s.SolrTestCaseJ4 Using PointFields (NUMERIC_POINTS_SYSPROP=true) 
w/NUMERIC_DOCVALUES_SYSPROP=false
   [junit4]   2> 2057156 INFO  
(SUITE-CdcrBidirectionalTest-seed#[6203A765DC083CB6]-worker) [] 
o.a.s.SolrTestCaseJ4 Randomized ssl (false) and clientAuth (false) via: 
@org.apache.solr.util.RandomizeSSL(reason=, ssl=NaN, value=NaN, clientAuth=NaN)
   [junit4]   2> 2057156 INFO  
(SUITE-CdcrBidirectionalTest-seed#[6203A765DC083CB6]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file:/dev/./urandom
   [junit4]   2> 2057160 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[6203A765DC083CB6]) [] 
o.a.s.SolrTestCaseJ4 ###Starting testBiDir
   [junit4]   2> 2057160 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[6203A765DC083CB6]) [] 
o.a.s.c.MiniSolrCloudCluster Starting cluster of 1 servers in 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J0/temp/solr.cloud.cdcr.CdcrBidirectionalTest_6203A765DC083CB6-001/cdcr-cluster2-001
   [junit4]   2> 2057160 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[6203A765DC083CB6]) [] 
o.a.s.c.ZkTestServer STARTING ZK TEST SERVER
   [junit4]   2> 2057160 INFO  (Thread-7948) [] o.a.s.c.ZkTestServer client 
port:0.0.0.0/0.0.0.0:0
   [junit4]   2> 2057160 INFO  (Thread-7948) [] o.a.s.c.ZkTestServer 
Starting server
   [junit4]   2> 2057162 ERROR (Thread-7948) [] o.a.z.s.ZooKeeperServer 
ZKShutdownHandler is not registered, so ZooKeeper server won't take any action 
on ERROR or SHUTDOWN server state changes
   [junit4]   2> 2057260 INFO  
(TEST-CdcrBidirectionalTest.testBiDir-seed#[6203A765DC083CB6]) [] 
o.a.s.c.ZkTestServer start zk server on port:43279
   [junit4]   2> 2057264 INFO  (zkConnectionManagerCallback-10455-thread-1) [   
 ] o.a.s.c.c.ConnectionManager zkClient has connected
   [junit4]   2> 2057268 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.Server jetty-9.4.10.v20180503; built: 2018-05-03T15:56:21.710Z; git: 
daa59876e6f384329b122929e70a80934569428c; jvm 1.8.0_172-b11
   [junit4]   2> 2057269 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.session DefaultSessionIdManager workerName=node0
   [junit4]   2> 2057269 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.session No SessionScavenger set, using defaults
   [junit4]   2> 2057269 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.session node0 Scavenging every 66ms
   [junit4]   2> 2057270 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.h.ContextHandler Started 
o.e.j.s.ServletContextHandler@5c8bf9af{/solr,null,AVAILABLE}
   [junit4]   2> 2057270 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.AbstractConnector Started 
ServerConnector@305a53b6{HTTP/1.1,[http/1.1]}{127.0.0.1:36501}
   [junit4]   2> 2057270 INFO  (jetty-launcher-10452-thread-1) [] 
o.e.j.s.Server Started @2057296ms
   [junit4]   2> 2057270 INFO  (jetty-launcher-10452-thread-1) [] 
o.a.s.c.s.e.JettySolrRunner Jetty properties: {hostContext=/solr, 
hostPort=36501}
   [junit4]   2> 2057270 ERROR (jetty-launcher-10452-thread-1) [] 
o.a.s.u.StartupLoggingUtils Missing Java Option solr.log.dir. Logging may be 
missing or incomplete.
   [junit4]   

[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501130#comment-16501130
 ] 

ASF subversion and git services commented on LUCENE-7960:
-

Commit 208d4a9c346ab0dca6c4ae659d55b9446b7d8c87 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=208d4a9 ]

LUCENE-7960: Add preserveOriginal option to the NGram and EdgeNGram filters

(this is a correction of the issue number in both the CHANGES.txt and the 
commit message, sorry for the noise).


> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Assignee: Robert Muir
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch, LUCENE-7960.patch, 
> LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.






[jira] [Commented] (LUCENE-7690) TestSimpleTextPointsFormat.testWithExceptions() failure

2018-06-04 Thread Robert Muir (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501124#comment-16501124
 ] 

Robert Muir commented on LUCENE-7690:
-

Sorry for the wrong messages: my dyslexia in the commit message.

> TestSimpleTextPointsFormat.testWithExceptions() failure
> ---
>
> Key: LUCENE-7690
> URL: https://issues.apache.org/jira/browse/LUCENE-7690
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>Priority: Major
> Fix For: 6.5, 7.0
>
>
> Reproducing branch_6x seed from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/690/]:
> {noformat}
>[junit4] Suite: 
> org.apache.lucene.codecs.simpletext.TestSimpleTextPointsFormat
>[junit4] IGNOR/A 0.02s J0 | TestSimpleTextPointsFormat.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleTextPointsFormat -Dtests.method=testWithExceptions 
> -Dtests.seed=CCE1E867577CFFF6 -Dtests.slow=true -Dtests.locale=uk-UA 
> -Dtests.timezone=Asia/Qatar -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.93s J0 | TestSimpleTextPointsFormat.testWithExceptions 
> <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot complete forceMerge
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCE1E867577CFFF6:6EB2741BD8F2B00C]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1931)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1881)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:429)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:701)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.testWithExceptions(BasePointsFormatTestCase.java:224)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: org.apache.lucene.index.CorruptIndexException: 
> Problem reading index from 
> MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658) 
> (resource=MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658))
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:140)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.<init>(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4293)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_0.inf)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:575)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:744)
>[junit4]>  at 
> org.apache.lucene.store.Directory.openChecksumInput(Directory.java:137)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072)
>[junit4]>  at 
> org.apache.lucene.codecs.simpletext.SimpleTextFieldInfosFormat.read(SimpleTextFieldInfosFormat.java:73)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.<init>(SegmentCoreReaders.java:107)
>[junit4]>  ... 7 more
>[junit4] IGNOR/A 0.01s J0 | TestSimpleTextPointsFormat.testMergeStability
>[junit4]> Assumption #1: merge is not stable
>[junit4]   2> NOTE: leaving temporary files on disk at: 
> 

[jira] [Commented] (LUCENE-7690) TestSimpleTextPointsFormat.testWithExceptions() failure

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7690?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501118#comment-16501118
 ] 

ASF subversion and git services commented on LUCENE-7690:
-

Commit 2c1ab31b4e5595595cf0f1549eb61b33c8555000 in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2c1ab31 ]

LUCENE-7690: Add preserveOriginal option to the NGram and EdgeNGram filters


> TestSimpleTextPointsFormat.testWithExceptions() failure
> ---
>
> Key: LUCENE-7690
> URL: https://issues.apache.org/jira/browse/LUCENE-7690
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Steve Rowe
>Assignee: Michael McCandless
>Priority: Major
> Fix For: 6.5, 7.0
>
>
> Reproducing branch_6x seed from 
> [https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/690/]:
> {noformat}
>[junit4] Suite: 
> org.apache.lucene.codecs.simpletext.TestSimpleTextPointsFormat
>[junit4] IGNOR/A 0.02s J0 | TestSimpleTextPointsFormat.testRandomBinaryBig
>[junit4]> Assumption #1: 'nightly' test group is disabled (@Nightly())
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=TestSimpleTextPointsFormat -Dtests.method=testWithExceptions 
> -Dtests.seed=CCE1E867577CFFF6 -Dtests.slow=true -Dtests.locale=uk-UA 
> -Dtests.timezone=Asia/Qatar -Dtests.asserts=true 
> -Dtests.file.encoding=ISO-8859-1
>[junit4] ERROR   0.93s J0 | TestSimpleTextPointsFormat.testWithExceptions 
> <<<
>[junit4]> Throwable #1: java.lang.IllegalStateException: this writer 
> hit an unrecoverable error; cannot complete forceMerge
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([CCE1E867577CFFF6:6EB2741BD8F2B00C]:0)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1931)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.forceMerge(IndexWriter.java:1881)
>[junit4]>  at 
> org.apache.lucene.index.RandomIndexWriter.forceMerge(RandomIndexWriter.java:429)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.verify(BasePointsFormatTestCase.java:701)
>[junit4]>  at 
> org.apache.lucene.index.BasePointsFormatTestCase.testWithExceptions(BasePointsFormatTestCase.java:224)
>[junit4]>  at java.lang.Thread.run(Thread.java:745)
>[junit4]> Caused by: org.apache.lucene.index.CorruptIndexException: 
> Problem reading index from 
> MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658) 
> (resource=MockDirectoryWrapper(NIOFSDirectory@/Users/jenkins/workspace/Lucene-Solr-6.x-MacOSX/lucene/build/codecs/test/J0/temp/lucene.codecs.simpletext.TestSimpleTextPointsFormat_CCE1E867577CFFF6-001/tempDir-001
>  lockFactory=org.apache.lucene.store.NativeFSLockFactory@4d6de658))
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:140)
>[junit4]>  at 
> org.apache.lucene.index.SegmentReader.(SegmentReader.java:74)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReader(ReadersAndUpdates.java:145)
>[junit4]>  at 
> org.apache.lucene.index.ReadersAndUpdates.getReaderForMerge(ReadersAndUpdates.java:617)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.mergeMiddle(IndexWriter.java:4293)
>[junit4]>  at 
> org.apache.lucene.index.IndexWriter.merge(IndexWriter.java:3940)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler.doMerge(ConcurrentMergeScheduler.java:588)
>[junit4]>  at 
> org.apache.lucene.index.ConcurrentMergeScheduler$MergeThread.run(ConcurrentMergeScheduler.java:626)
>[junit4]> Caused by: java.io.FileNotFoundException: a random 
> IOException (_0.inf)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.maybeThrowIOExceptionOnOpen(MockDirectoryWrapper.java:575)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:744)
>[junit4]>  at 
> org.apache.lucene.store.Directory.openChecksumInput(Directory.java:137)
>[junit4]>  at 
> org.apache.lucene.store.MockDirectoryWrapper.openChecksumInput(MockDirectoryWrapper.java:1072)
>[junit4]>  at 
> org.apache.lucene.codecs.simpletext.SimpleTextFieldInfosFormat.read(SimpleTextFieldInfosFormat.java:73)
>[junit4]>  at 
> org.apache.lucene.index.SegmentCoreReaders.(SegmentCoreReaders.java:107)
>[junit4]>  ... 7 more
>[junit4] IGNOR/A 0.01s J0 | 

[JENKINS] Lucene-Solr-repro - Build # 757 - Unstable

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/757/

[...truncated 28 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1558/consoleText

[repro] Revision: 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[repro] Ant options: -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
[repro] Repro line:  ant test  -Dtestcase=DistributedQueueTest 
-Dtests.method=testPeekElements -Dtests.seed=945DB15E289F10C 
-Dtests.multiplier=2 -Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=ar-MA -Dtests.timezone=Atlantic/Stanley -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=MaxSizeAutoCommitTest 
-Dtests.method=endToEndTest -Dtests.seed=945DB15E289F10C -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=es-EC -Dtests.timezone=Kwajalein -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=LeaderFailureAfterFreshStartTest 
-Dtests.method=test -Dtests.seed=945DB15E289F10C -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=tr-TR -Dtests.timezone=Africa/Abidjan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=RollingRestartTest 
-Dtests.method=test -Dtests.seed=945DB15E289F10C -Dtests.multiplier=2 
-Dtests.nightly=true -Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=be -Dtests.timezone=Australia/LHI -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
59087d148ac186930c4a51917e361c599e7314c1
[repro] git fetch
[repro] git checkout 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]solr/core
[repro]   LeaderFailureAfterFreshStartTest
[repro]   RollingRestartTest
[repro]   DistributedQueueTest
[repro]   MaxSizeAutoCommitTest
[repro] ant compile-test

[...truncated 3299 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=20 
-Dtests.class="*.LeaderFailureAfterFreshStartTest|*.RollingRestartTest|*.DistributedQueueTest|*.MaxSizeAutoCommitTest"
 -Dtests.showOutput=onerror -Dtests.multiplier=2 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.seed=945DB15E289F10C -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/jenkins-slave/workspace/Lucene-Solr-NightlyTests-master/test-data/enwiki.random.lines.txt
 -Dtests.locale=tr-TR -Dtests.timezone=Africa/Abidjan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 19725 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: org.apache.solr.cloud.DistributedQueueTest
[repro]   0/5 failed: org.apache.solr.cloud.LeaderFailureAfterFreshStartTest
[repro]   0/5 failed: org.apache.solr.cloud.RollingRestartTest
[repro]   3/5 failed: org.apache.solr.update.MaxSizeAutoCommitTest
[repro] git checkout 59087d148ac186930c4a51917e361c599e7314c1

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 6 lines...]

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

discuss: stop adding 'via' from CHANGES.txt entries (take two)

2018-06-04 Thread Robert Muir
I raised this issue a few years ago, and no consensus was reached [1]

I'm asking if we can take the time to revisit the issue. Back then it
was subversion days, and you had "patch-uploaders" and "contributors".
With git now, I believe the situation is even a bit more extreme,
because the contributor is now the commit author and the lucene "committer"
is really just the "pusher".

On the other hand, there were some reasons against removing this
before. In particular some mentioned that it conveyed meaning about
who might be the best person to ping about a particular area of the
code. If this is still the case, I'd ask that we discuss alternative
ways that it could be accomplished (such as a wiki page, perhaps
linked to HowToContribute, that people can edit).

I wrote a new summary/argument inline, but see the linked thread for
the previous discussion:


In the past CHANGES.txt entries from a contributor have also had the
name of the committer with a 'via' entry.

e.g.:

LUCENE-1234: optimized FooBar. (Jane Doe via Joe Schmoe).

I propose we stop adding the committer name (via Joe Schmoe). It seems
to diminish the value of the contribution. It reminds me of a
professor adding a second author by default or something like that. If
someone really wants to know who committed the change, I think it's
fair that they look at the version control history.

1. 
http://mail-archives.apache.org/mod_mbox/lucene-dev/201206.mbox/%3CCAOdYfZW65MXrzyRPsvBD0C6c4X%2BLuQX4oVec%3DyR_PCPgTQrnhQ%40mail.gmail.com%3E

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10.0.1) - Build # 22177 - Still Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22177/
Java: 64bit/jdk-10.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

4 tests failed.
FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "0009328248181776081654"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"0009328248181776081654"
at 
__randomizedtesting.SeedInfo.seed([813D1498856438AB:99C7BA336C2AC4AA]:0)
at 
java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Long.parseLong(Long.java:692)
at java.base/java.lang.Long.parseLong(Long.java:817)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:153)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:141)
at 
org.apache.solr.update.TransactionLogTest.testBigLastAddSize(TransactionLogTest.java:34)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
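The repeated TransactionLogTest.testBigLastAddSize failures above share one root cause that reproduces in isolation: the numeric string in the error message, once its leading zeros are stripped, exceeds Long.MAX_VALUE (9223372036854775807), so Long.parseLong throws NumberFormatException. The trace shows the parse happening in the TransactionLog constructor; that context comes from the trace itself, and the sketch below is standalone:

```java
public class ParseOverflowDemo {
    public static void main(String[] args) {
        // Long.MAX_VALUE is 9223372036854775807 (19 digits). Stripped of its
        // leading zeros, the string from the log is 9328248181776081654,
        // which is larger, so Long.parseLong rejects it.
        String fromLog = "0009328248181776081654";
        try {
            Long.parseLong(fromLog);
            System.out.println("parsed OK"); // not reached for this input
        } catch (NumberFormatException e) {
            // The exception message echoes the offending string, matching the log.
            System.out.println(e.getMessage());
        }
    }
}
```

Leading zeros alone are harmless to parseLong; only the magnitude of the value matters.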


FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00015311024131909884699"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"00015311024131909884699"
at 
__randomizedtesting.SeedInfo.seed([813D1498856438AB:99C7BA336C2AC4AA]:0)
at 

[JENKINS] Lucene-Solr-SmokeRelease-7.x - Build # 235 - Still Failing

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/235/

No tests ran.

Build Log:
[...truncated 24220 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2201 links (1756 relative) to 2974 anchors in 229 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/package/solr-7.4.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-7.x/lucene/top-level-ivy-settings.xml


[jira] [Assigned] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-06-04 Thread Robert Muir (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir reassigned LUCENE-7960:
---

Assignee: Robert Muir

> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Assignee: Robert Muir
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch, LUCENE-7960.patch, 
> LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.
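The default-vs-preserve behavior described above can be illustrated with a tiny standalone sketch. This shows the semantics only, not Lucene's NGramTokenFilter code; the class and method names below are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class NGramDemo {
    // Generate all n-grams of length minGram..maxGram; optionally keep the
    // original term when its length falls outside that range (the new option
    // proposed in this issue).
    static List<String> ngrams(String term, int min, int max, boolean preserveOriginal) {
        List<String> out = new ArrayList<>();
        for (int n = min; n <= max; n++) {
            for (int i = 0; i + n <= term.length(); i++) {
                out.add(term.substring(i, i + n));
            }
        }
        if (preserveOriginal && (term.length() < min || term.length() > max)) {
            out.add(term);
        }
        return out;
    }

    public static void main(String[] args) {
        // "it" is shorter than minGram=3: dropped by default, kept with the option.
        System.out.println(ngrams("it", 3, 4, false)); // []
        System.out.println(ngrams("it", 3, 4, true));  // [it]
    }
}
```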



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-12412) Leader should give up leadership when meet some kind of exceptions

2018-06-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-12412?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16501019#comment-16501019
 ] 

Tomás Fernández Löbbe commented on SOLR-12412:
--

Just a thought here, but what about anything inside {{IndexWriter.tragedy}}?

> Leader should give up leadership when meet some kind of exceptions
> --
>
> Key: SOLR-12412
> URL: https://issues.apache.org/jira/browse/SOLR-12412
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Cao Manh Dat
>Assignee: Cao Manh Dat
>Priority: Major
>
> When a leader meets some kind of unrecoverable exception (e.g. 
> CorruptIndexException), the shard goes into a read-only state and a human 
> has to intervene. In that case, it would be best if the leader gave up 
> its leadership and let another replica become the leader. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #394: Synchronized disruption

2018-06-04 Thread cahilltr
Github user cahilltr closed the pull request at:

https://github.com/apache/lucene-solr/pull/394


---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #394: Synchronized disruption

2018-06-04 Thread cahilltr
GitHub user cahilltr opened a pull request:

https://github.com/apache/lucene-solr/pull/394

Synchronized disruption



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cahilltr/lucene-solr SynchronizedDisruption

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/394.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #394


commit 14ebc375aee1197c37ffcbe1a83d01643d3a3e1e
Author: cahilltr 
Date:   2018-06-01T10:17:25Z

Initial Commit

Updated work for Sync disruption

Removed Quartz dependency, as log4j's cronexpression works well

Fixed misspelling

commit f775115e9053de5048ae61ffc33fa909ebd4a7b0
Author: cahilltr 
Date:   2018-06-04T22:24:09Z

Fixed Merge issues




---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.4) - Build # 2053 - Still Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2053/
Java: 64bit/jdk-9.0.4 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00011808758302085878341"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"00011808758302085878341"
at 
__randomizedtesting.SeedInfo.seed([3573E709FD5F20AE:2D8949A21411DCAF]:0)
at 
java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Long.parseLong(Long.java:692)
at java.base/java.lang.Long.parseLong(Long.java:817)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:153)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:141)
at 
org.apache.solr.update.TransactionLogTest.testBigLastAddSize(TransactionLogTest.java:34)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "0009389134025289088773"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"0009389134025289088773"
at 
__randomizedtesting.SeedInfo.seed([3573E709FD5F20AE:2D8949A21411DCAF]:0)

[jira] [Commented] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-04 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500922#comment-16500922
 ] 

Jim Ferenczi commented on LUCENE-8344:
--

The exact-match pass filters prefix paths that don't end with END_BYTE, so we'd 
have to change it to ignore trailing POS_SEPs (lines 709 and 727). However, we 
have no way to infer the value of preservePositionIncrements for an indexed 
suggestion, so I am not even sure that we can handle the BWC safely. Considering 
that a re-build should be trivial in the AnalyzingSuggester, we could simply 
consider the fix a breaking change and discuss whether it's acceptable to 
backport to 7.x?

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. the FuzzySuggester subclass) and the NRT Document Suggester, and soon 
> the SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automaton even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500920#comment-16500920
 ] 

Tomás Fernández Löbbe commented on SOLR-11982:
--

bq. unusual or impossible preferences should just fall back to random when they 
cannot be achieved, it shouldn't stop Solr operation
This is the way it works right now. The value of {{shards.preference}} 
is used to sort the (previously shuffled) list of replicas that can respond to 
a query, but it doesn't remove any replica from that list. If, for example, you 
say {{shards.preference=PULL}} and there are no {{PULL}} replicas in the 
shard, any non-PULL replica will be queried. The work of filtering replicas 
will be done in SOLR-10880 (on my TODO list).
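The sort-without-filter behavior described above can be sketched in a few lines. This illustrates the semantics only, not Solr's actual routing code; the class and method names are made up for the example:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class PreferenceSortDemo {
    // The candidate replicas are shuffled first, then *sorted* so replicas of
    // the preferred type come first. Nothing is removed from the list, so a
    // query still has replicas to hit even when no replica matches the
    // preference -- a graceful fallback rather than a failure.
    static List<String> sortByPreference(List<String> shuffled, String preferredType) {
        List<String> out = new ArrayList<>(shuffled);
        out.sort(Comparator.comparingInt(t -> t.equals(preferredType) ? 0 : 1));
        return out;
    }

    public static void main(String[] args) {
        List<String> replicas = new ArrayList<>(List.of("NRT", "TLOG", "PULL", "NRT"));
        Collections.shuffle(replicas);
        // Preferred type present: the PULL replica moves to the front.
        System.out.println(sortByPreference(replicas, "PULL"));
        // Preferred type absent: all replicas are kept and remain queryable.
        System.out.println(sortByPreference(replicas, "SHADOW"));
    }
}
```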

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).






[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22176 - Still Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22176/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00011884466055582223449"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"00011884466055582223449"
at 
__randomizedtesting.SeedInfo.seed([A958CE8657EE3476:B1A2602DBEA0C877]:0)
at 
java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Long.parseLong(Long.java:692)
at java.base/java.lang.Long.parseLong(Long.java:817)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:153)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:141)
at 
org.apache.solr.update.TransactionLogTest.testBigLastAddSize(TransactionLogTest.java:34)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)

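For context on the failure above: the seed-generated input has 20 significant digits, which exceeds Long.MAX_VALUE (9223372036854775807, 19 digits), so Long.parseLong rejects it even though every character is a digit. A minimal reproduction:

```java
public class LongParseOverflow {
    public static void main(String[] args) {
        // 11884466055582223449 > Long.MAX_VALUE (9223372036854775807),
        // so parsing throws NumberFormatException despite the string
        // containing only digits (leading zeros don't help).
        try {
            long v = Long.parseLong("00011884466055582223449");
            System.out.println("parsed: " + v);
        } catch (NumberFormatException e) {
            System.out.println("overflow rejected");
        }
    }
}
```
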

FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00015791538073002163331"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"00015791538073002163331"
at 
__randomizedtesting.SeedInfo.seed([A958CE8657EE3476:B1A2602DBEA0C877]:0)
at 

[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500881#comment-16500881
 ] 

Mark Miller commented on SOLR-12297:


You will see this in the logs when running under HTTP/2: 

2018-06-04 16:15:56.319 INFO  (main) [   ] o.e.j.s.AbstractConnector Started 
ServerConnector@3300f4fd{h2,[h2]}{0.0.0.0:8983}

Until SSL is working correctly, or we configure Jetty to also serve HTTP/1.1 
on the same port, browsers are not going to work with HTTP/2.

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[JENKINS] Lucene-Solr-NightlyTests-7.x - Build # 235 - Still Unstable

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-7.x/235/

1 tests failed.
FAILED:  
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion

Error Message:
Exception during query

Stack Trace:
java.lang.RuntimeException: Exception during query
at 
__randomizedtesting.SeedInfo.seed([E1F51DFC86801391:192B8814FEECC2C2]:0)
at org.apache.solr.SolrTestCaseJ4.assertQ(SolrTestCaseJ4.java:916)
at 
org.apache.solr.handler.admin.SegmentsInfoRequestHandlerTest.testSegmentInfosVersion(SegmentsInfoRequestHandlerTest.java:68)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.RuntimeException: REQUEST FAILED: 
xpath=2=count(//lst[@name='segments']/lst/str[@name='version'][.='7.4.0'])
xml response was: [segment-info XML; markup stripped in this digest -- four 
segments _0.._3, each created 2018-06-04T20:33:48Z by flush, all at version 
7.4.0, where the xpath expected exactly 2 segments at 7.4.0]


   

[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500859#comment-16500859
 ] 

Mark Miller commented on SOLR-12297:


You either have to switch the SolrHttpClient to use HTTP/1.1 or change 
etc/jetty-http.xml to use HTTP/2.

[https://www.eclipse.org/jetty/documentation/9.4.x/http2-configuring.html]

I'd try changing the connection factory to 
org.eclipse.jetty.http2.server.HTTP2ServerConnectionFactory at a minimum.
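
A rough sketch of what that edit to etc/jetty-http.xml might look like, adapted from the Jetty 9.4 documentation linked above. Treat the element and refid names as assumptions to verify against the actual file, not a tested configuration:

```xml
<!-- Hypothetical sketch only; verify names against the Jetty 9.4 docs.
     Swaps the HTTP/1.1 connection factory for an HTTP/2 one. -->
<New id="httpConnector" class="org.eclipse.jetty.server.ServerConnector">
  <Arg name="server"><Ref refid="Server"/></Arg>
  <Arg name="factories">
    <Array type="org.eclipse.jetty.server.ConnectionFactory">
      <Item>
        <!-- HTTP2ServerConnectionFactory is the TLS (h2) variant;
             HTTP2CServerConnectionFactory serves cleartext h2c. -->
        <New class="org.eclipse.jetty.http2.server.HTTP2ServerConnectionFactory">
          <Arg name="config"><Ref refid="httpConfig"/></Arg>
        </New>
      </Item>
    </Array>
  </Arg>
  <Set name="port"><Property name="jetty.http.port" default="8983"/></Set>
</New>
```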

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-04 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500848#comment-16500848
 ] 

David Smiley commented on LUCENE-8344:
--

RE NRT Doc Suggester: "This is expected" – Okay, I see what you mean. I guess 
if any user (past/present/future) wants to use 
preservePositionIncrements=false effectively, they need to use 
CompletionAnalyzer/CompletionTokenStream both at index _and_ query time. The 
existing tests are not doing that – they use the input analyzer at query 
time. The particular two queries used in a test, "fo" and "foob", didn't 
exercise something important this test should be covering – position 
increments (stopwords) in the _query_. Ditto for some similar test methods 
here (positive and negative assertions). I'll try to improve this some.

RE AnalyzingSuggester: Hmmm. What if the first phase of the "exactFirst" logic 
captured the "output2" lookup results in a place that the second pass could 
examine? I think this would be more robust, and it wouldn't need to invoke 
sameSurfaceForm in the second phase at all. If the FST was built with the bug 
(7.3 or prior), then an exact match of a trailing stopword with this setting 
wouldn't be recognized as an exact match, but I think that's a minor loss 
easily fixed by reindexing?

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. FuzzySuggester subclass ) and NRT Document Suggester and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and if there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automata even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.






[jira] [Updated] (SOLR-12448) Update autoAddReplicas docs since it works on non-shared file systems as well

2018-06-04 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12448?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12448:
-
Component/s: documentation

> Update autoAddReplicas docs since it works on non-shared file systems as well
> -
>
> Key: SOLR-12448
> URL: https://issues.apache.org/jira/browse/SOLR-12448
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling, documentation
>Reporter: Varun Thacker
>Priority: Major
>
> The autoAddReplicas docs on 
> [http://lucene.apache.org/solr/guide/collections-api.html] read: 
> "When set to true, enables automatic addition of replicas on shared file 
> systems (such as HDFS) only... "
>  
> Since the autoscaling-based autoAddReplicas feature landed, this is no longer 
> true; the feature is supported more widely. We should fix the docs as well.






[jira] [Created] (SOLR-12448) Update autoAddReplicas docs since it works on non-shared file systems as well

2018-06-04 Thread Varun Thacker (JIRA)
Varun Thacker created SOLR-12448:


 Summary: Update autoAddReplicas docs since it works on non-shared 
file systems as well
 Key: SOLR-12448
 URL: https://issues.apache.org/jira/browse/SOLR-12448
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Varun Thacker


The autoAddReplicas docs on 
[http://lucene.apache.org/solr/guide/collections-api.html] read: 

"When set to true, enables automatic addition of replicas on shared file 
systems (such as HDFS) only... "

Since the autoscaling-based autoAddReplicas feature landed, this is no longer 
true; the feature is supported more widely. We should fix the docs as well.






[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-06-04 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500779#comment-16500779
 ] 

Shawn Heisey commented on SOLR-12297:
-

bq. Well the client will speak HTTP/2, but have you setup Jetty to run an 
HTTP/2 connector instead of HTTP/1.1?

No, I just tried to get starburst started with minimal changes -- only the 
patch for ivy. The Jetty config is unchanged, so it's listening for 1.1 
requests only.

I did try to go into SolrCLI and explicitly tell it to use the 1.1 client, but 
either I didn't make the right change, or it failed to work.

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-04 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500766#comment-16500766
 ] 

Jim Ferenczi commented on LUCENE-8344:
--

{quote}
org.apache.lucene.search.suggest.document.TestPrefixCompletionQuery#testAnalyzerWithSepAndNoPreservePos
 see "test trailing stopword with a new document"
{quote}

If you index with preservePositionIncrements=false, you cannot match a query 
that preserves the position increments and contains a stop word. This is 
expected: "baz the" indexed with preservePositionIncrements=false cannot match 
the query "baz the" if you preserve the position increments. However, it 
should work if you query "baz", with or without preserving the pos increment. 
This is why I said that the completion field (and all the related queries) 
should be fine with this change. It works without reindexing.

{quote}
org.apache.lucene.search.suggest.analyzing.AnalyzingSuggesterTest#testStandard 
see the "round trip" test
With BUG==true: fails (bad for back-compat)
With BUG==false: passes (therefore a reindex fixes)
{quote}

This one is trickier because it tries to find an exact match first, so the 
indexed version and the query version must be the same; otherwise the 
assertion at line 789 of AnalyzingSuggester fails. We can probably fix the 
discrepancy by adding a BWC layer that removes the trailing POS_SEP from the 
indexed version when sameSurfaceForm is called and preservePosInc is false. 
WDYT? 
This would remove the need to rebuild the FST on a version that contains the 
fix.



> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. FuzzySuggester subclass ) and NRT Document Suggester and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and if there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automata even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_172) - Build # 2052 - Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2052/
Java: 32bit/jdk1.8.0_172 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest

Error Message:
Tlog size exceeds the max size bound. Tlog path: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.update.MaxSizeAutoCommitTest_A970CFAFF1717FB9-001/init-core-data-001/tlog/tlog.003,
 tlog size: 1302

Stack Trace:
java.lang.AssertionError: Tlog size exceeds the max size bound. Tlog path: 
/home/jenkins/workspace/Lucene-Solr-7.x-Linux/solr/build/solr-core/test/J2/temp/solr.update.MaxSizeAutoCommitTest_A970CFAFF1717FB9-001/init-core-data-001/tlog/tlog.003,
 tlog size: 1302
at 
__randomizedtesting.SeedInfo.seed([A970CFAFF1717FB9:B93E2A508ADF4648]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.getTlogFileSizes(MaxSizeAutoCommitTest.java:379)
at 
org.apache.solr.update.MaxSizeAutoCommitTest.deleteTest(MaxSizeAutoCommitTest.java:200)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[jira] [Commented] (LUCENE-7960) NGram filters -- preserve the original token when it is outside the min/max size range

2018-06-04 Thread Ingomar Wesp (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-7960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500742#comment-16500742
 ] 

Ingomar Wesp commented on LUCENE-7960:
--

So ... anyone willing to merge this into master?

> NGram filters -- preserve the original token when it is outside the min/max 
> size range
> --
>
> Key: LUCENE-7960
> URL: https://issues.apache.org/jira/browse/LUCENE-7960
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Shawn Heisey
>Priority: Major
> Attachments: LUCENE-7960.patch, LUCENE-7960.patch, LUCENE-7960.patch, 
> LUCENE-7960.patch
>
>  Time Spent: 0.5h
>  Remaining Estimate: 0h
>
> When ngram or edgengram filters are used, any terms that are shorter than the 
> minGramSize are completely removed from the token stream.
> This is probably 100% what was intended, but I've seen it cause a lot of 
> problems for users.  I am not suggesting that the default behavior be 
> changed.  That would be far too disruptive to the existing user base.
> I do think there should be a new boolean option, with a name like 
> keepShortTerms, that defaults to false, to allow the short terms to be 
> preserved.
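
The proposed option can be sketched as follows. Note that {{keepShortTerms}} is only the name suggested in this issue, not an existing Lucene setting, and this toy function covers just edge n-grams and the shorter-than-minimum case:

```java
import java.util.ArrayList;
import java.util.List;

public class EdgeNGramSketch {
    // Toy sketch of the proposed behavior: emit edge n-grams of a term,
    // and optionally keep terms shorter than minGramSize instead of
    // silently dropping them (today's behavior).
    static List<String> edgeNGrams(String term, int min, int max,
                                   boolean keepShortTerms) {
        List<String> out = new ArrayList<>();
        if (term.length() < min) {
            if (keepShortTerms) {
                out.add(term); // preserve the original short token
            }
            return out;        // today: the term vanishes entirely
        }
        for (int n = min; n <= Math.min(max, term.length()); n++) {
            out.add(term.substring(0, n));
        }
        return out;
    }

    public static void main(String[] args) {
        // "ox" is shorter than minGramSize=3: dropped today, kept with flag.
        System.out.println(edgeNGrams("ox", 3, 5, false));    // []
        System.out.println(edgeNGrams("ox", 3, 5, true));     // [ox]
        System.out.println(edgeNGrams("oxygen", 3, 5, false)); // [oxy, oxyg, oxyge]
    }
}
```

Defaulting the flag to false, as the issue proposes, keeps today's behavior unchanged for existing users.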






[jira] [Commented] (SOLR-12392) IndexSizeTriggerTest fails too frequently.

2018-06-04 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500666#comment-16500666
 ] 

Steve Rowe commented on SOLR-12392:
---

Looks like tests in this suite are still failing.  Here are a few 
representative ones I found since the commits on this issue:

From [https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/2050/] (reproduced 
2/5 iterations):
{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=IndexSizeTriggerTest -Dtests.method=testSplitIntegration 
-Dtests.seed=C227ED566D1F9E93 -Dtests.multiplier=3 -Dtests.slow=true 
-Dtests.locale=de-AT -Dtests.timezone=America/Tortola -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   0.79s J1 | IndexSizeTriggerTest.testSplitIntegration <<<
   [junit4]> Throwable #1: java.util.concurrent.TimeoutException: last 
state: DocCollection(testSplitIntegration_collection//clusterstate.json/94)={
   [junit4]>   "replicationFactor":"2",
   [junit4]>   "pullReplicas":"0",
   [junit4]>   "router":{"name":"compositeId"},
   [junit4]>   "maxShardsPerNode":"2",
   [junit4]>   "autoAddReplicas":"false",
   [junit4]>   "nrtReplicas":"2",
   [junit4]>   "tlogReplicas":"0",
   [junit4]>   "autoCreated":"true",
   [junit4]>   "shards":{
   [junit4]> "shard2":{
   [junit4]>   "replicas":{
   [junit4]> "core_node3":{
   [junit4]>   
"core":"testSplitIntegration_collection_shard2_replica_n3",
   [junit4]>   "leader":"true",
   [junit4]>   "SEARCHER.searcher.maxDoc":11,
   [junit4]>   "SEARCHER.searcher.deletedDocs":0,
   [junit4]>   "INDEX.sizeInBytes":1,
   [junit4]>   "node_name":"127.0.0.1:1_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "SEARCHER.searcher.numDocs":11},
   [junit4]> "core_node4":{
   [junit4]>   
"core":"testSplitIntegration_collection_shard2_replica_n4",
   [junit4]>   "SEARCHER.searcher.maxDoc":11,
   [junit4]>   "SEARCHER.searcher.deletedDocs":0,
   [junit4]>   "INDEX.sizeInBytes":1,
   [junit4]>   "node_name":"127.0.0.1:10001_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "SEARCHER.searcher.numDocs":11}},
   [junit4]>   "range":"0-7fff",
   [junit4]>   "state":"active"},
   [junit4]> "shard1":{
   [junit4]>   "stateTimestamp":"1528114164033756150",
   [junit4]>   "replicas":{
   [junit4]> "core_node1":{
   [junit4]>   
"core":"testSplitIntegration_collection_shard1_replica_n1",
   [junit4]>   "leader":"true",
   [junit4]>   "SEARCHER.searcher.maxDoc":14,
   [junit4]>   "SEARCHER.searcher.deletedDocs":0,
   [junit4]>   "INDEX.sizeInBytes":1,
   [junit4]>   "node_name":"127.0.0.1:1_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "SEARCHER.searcher.numDocs":14},
   [junit4]> "core_node2":{
   [junit4]>   
"core":"testSplitIntegration_collection_shard1_replica_n2",
   [junit4]>   "SEARCHER.searcher.maxDoc":14,
   [junit4]>   "SEARCHER.searcher.deletedDocs":0,
   [junit4]>   "INDEX.sizeInBytes":1,
   [junit4]>   "node_name":"127.0.0.1:10001_solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "SEARCHER.searcher.numDocs":14}},
   [junit4]>   "range":"8000-",
   [junit4]>   "state":"inactive"},
   [junit4]> "shard1_1":{
   [junit4]>   "parent":"shard1",
   [junit4]>   "stateTimestamp":"1528114164049960450",
   [junit4]>   "range":"c000-",
   [junit4]>   "state":"active",
   [junit4]>   "replicas":{
   [junit4]> "core_node10":{
   [junit4]>   "leader":"true",
   [junit4]>   
"core":"testSplitIntegration_collection_shard1_1_replica1",
   [junit4]>   "SEARCHER.searcher.maxDoc":7,
   [junit4]>   "SEARCHER.searcher.deletedDocs":0,
   [junit4]>   "INDEX.sizeInBytes":1,
   [junit4]>   "node_name":"127.0.0.1:1_solr",
   [junit4]>   "base_url":"http://127.0.0.1:1/solr",
   [junit4]>   "state":"active",
   [junit4]>   "type":"NRT",
   [junit4]>   "SEARCHER.searcher.numDocs":7},
   [junit4]> "core_node9":{
   [junit4]>   
"core":"testSplitIntegration_collection_shard1_1_replica0",
   [junit4]>   "SEARCHER.searcher.maxDoc":7,
   [junit4]>   

Re: [jira] [Commented] (PYLUCENE-41) JArray type issue

2018-06-04 Thread Andi Vajda


On Mon, 4 Jun 2018, Petrus Hyvönen wrote:


Hi Andi,

Do you have a suggestion of where to start chasing this issue? Is it in the
definition of the objects in Java that makes them uncastable, or...


Yes, start by stepping through the JArray::set(...) method, line 
187 in jcc/jcc3/sources/JArray.h and see if it passes all the type checks 
there.


Andi..



Seems like others do not use this feature of creating two-dimensional
arrays, or are there other methods of creating them?

All the best,
/Petrus


On Mon, Mar 19, 2018 at 10:30 PM, Petrus Hyvönen (JIRA) 
wrote:



[ https://issues.apache.org/jira/browse/PYLUCENE-41?page=
com.atlassian.jira.plugin.system.issuetabpanels:comment-
tabpanel=16405482#comment-16405482 ]

Petrus Hyvönen commented on PYLUCENE-41:


Hi,



Yes, it's the assignment to the object array that is an issue. This
assignment worked in JCC 3.0 release version somehow.

Regards

/Petrus




JArray type issue
-

Key: PYLUCENE-41
URL: https://issues.apache.org/jira/browse/PYLUCENE-41
Project: PyLucene
 Issue Type: Bug
Environment: windows 7, python 3
   Reporter: Petrus Hyvönen
   Priority: Major

Hi,

In the JCC 3.0 release version and earlier (2.7) it is possible to make a

double array by:


{{mask = JArray('object')(5)}}
{{for i in range(5):}}
 {{mask[i] = JArray('double')([1.0, 2.0])}}
In <= 3.0 this gives 'mask' the following type:
JArray[, , ...
For the svn version it gives a 'TypeError: JArray[1.0, 2.0]' in the

assignment to mask[i]


Not sure this is a bug or a change of how to do things.
Best Regards
/Petrus











--
_
Petrus Hyvönen, Uppsala, Sweden
Mobile Phone/SMS:+46 73 803 19 00


[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-9.0.4) - Build # 22175 - Still Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22175/
Java: 64bit/jdk-9.0.4 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "0009481102633298280763"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"0009481102633298280763"
at 
__randomizedtesting.SeedInfo.seed([17DF3749B474F71E:F2599E25D3A0B1F]:0)
at 
java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Long.parseLong(Long.java:692)
at java.base/java.lang.Long.parseLong(Long.java:817)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:153)
at org.apache.solr.update.TransactionLog.(TransactionLog.java:141)
at 
org.apache.solr.update.TransactionLogTest.testBigLastAddSize(TransactionLogTest.java:34)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
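(For context on the NumberFormatException above, an observation rather than part of the original report: the offending string is a valid decimal number that simply exceeds Long.MAX_VALUE (9223372036854775807), so the failure is a long overflow rather than malformed input. A minimal standalone reproduction:)

```java
public class ParseLongOverflow {
    public static void main(String[] args) {
        // Long.MAX_VALUE is 9223372036854775807 (~9.22e18). The failing
        // input, once its leading zeros are dropped, is 9481102633298280763
        // (~9.48e18), which does not fit in a long, so parseLong throws
        // NumberFormatException even though the string is all digits.
        try {
            long v = Long.parseLong("0009481102633298280763");
            System.out.println("parsed: " + v); // not reached
        } catch (NumberFormatException e) {
            System.out.println("overflow: " + e.getMessage());
        }
    }
}
```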


FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00013722383568294058780"

Stack Trace:
java.lang.NumberFormatException: For input string: 
"00013722383568294058780"
at 
__randomizedtesting.SeedInfo.seed([17DF3749B474F71E:F2599E25D3A0B1F]:0)
at 

Re: Not adding badapples this week.

2018-06-04 Thread Steve Rowe
I looked at the way that tests are run, and the only difference I see in the 
smoke tester jobs is that tests are run twice, once each for Java 8 and Java 9.  
Compared to non-smoke-tester jobs, this roughly doubles the likelihood of 
overall failure.

I looked at the suites that failed in the last ten runs on those two smoke 
tester jobs.  Except for SearchHandlerTest, which I have since (hopefully) 
fixed, there are seven suites with failed tests - here they are along with 
their packages:

  TestExecutePlanAction     o.a.s.cloud.autoscaling.sim
  TestComputePlanAction     o.a.s.cloud.autoscaling.sim
  TestTriggerIntegration    o.a.s.cloud.autoscaling.sim
  IndexSizeTriggerTest      o.a.s.cloud.autoscaling
  CreateRoutedAliasTest     o.a.s.cloud
  ReplaceNodeTest           o.a.s.cloud
  MetricsHistoryHandlerTest o.a.s.handler.admin

Some of those are pretty regular offenders AFAICT from 
http://fucit.org/solr-jenkins-reports/failure-report.html .

Andrzej Białecki did some work on IndexSizeTriggerTest (SOLR-12392) and 
un-bad-apple’d its tests, but at least one of them is still failing since then 
- I’ll go add a comment on the issue.

--
Steve
www.lucidworks.com

> On Jun 4, 2018, at 11:16 AM, Erick Erickson  wrote:
> 
> Adrien:
> 
> "Do you know whether there is something that makes these jobs any different?"
> 
> Unfortunately no. I'm not really very well versed in the various test
> environments, maybe Uwe or Steve Rowe or Hoss might have some insight?
> 
> On Mon, Jun 4, 2018 at 3:34 AM, Adrien Grand  wrote:
>> Thanks for helping on this front Erick. I noticed a significant decrease in
>> noise since you started badapple-ing bad tests, but I'm observing that our
>> smoke-release builds still keep failing because of Solr tests (10 out of the
>> last 10 builds) in spite of the fact that they disable bad apples:
>> - https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/
>> - https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/
>> 
>> Do you know whether there is something that makes these jobs any different?
>> 
>> Le mar. 29 mai 2018 à 18:12, Erick Erickson  a
>> écrit :
>>> 
>>> With the long weekend and the fact that the number of non-BadApple
>>> tests is fairly small this week, I'll skip adding more BadApple tests
>>> until next week.
>>> 
>>> We're scarily close to the non-BadApple'd tests coming under control.
>>> If we're lucky, we can draw a line in the sand soon then start working
>>> on the backlog. I'll be encouraged if we can start shrinking the
>>> BadApple'd tests.
>>> 
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> 
>> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
> 



[jira] [Commented] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500567#comment-16500567
 ] 

Mark Miller commented on SOLR-12439:


I compared a few clients, so nothing in this issue is directly related to 
Apache HttpClient.

The Apache client has a competing offering in its version 5, which is in beta 
and so was not considered. I only considered clients that have been out of 
beta for a long time.

How singular the API design and focus is was also something I considered; 
there is no reason to argue about it here.

The decision on the client did not come down to those individual points, nor 
was it a one-to-one comparison. Regardless, with Apache HttpClient 5 in beta, 
I looked at it more out of curiosity than anything.

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.
> ---
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-04 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500566#comment-16500566
 ] 

Shawn Heisey commented on SOLR-11982:
-

[~emaijala], that's why I had the question, because the type does remain TLOG 
even when it's leader.  But the way the replica works does change, so that 
effectively it works exactly the same as an NRT replica would in a leader role.

It would make sense to me if this feature were to treat a TLOG leader as if it 
were NRT.  But some minority of users might not actually want that behavior.  
The solution suggested by [~tomasflobbe] would allow whatever combination the 
user wants.

I think that there are a couple of preference combinations that warrant some 
special action.  If somebody sets a preference for leaders and also sets a 
preference for PULL replicas, that should create an ERROR log entry, because it 
is not possible for a PULL replica to be leader.  A preference for leaders with 
TLOG replicas should produce a WARN log entry.  Because unusual or impossible 
preferences should just fall back to random when they cannot be achieved, it 
shouldn't stop Solr operation.
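The rule being suggested can be sketched in a few lines (hypothetical helper with invented names, not Solr's actual API; it only encodes the log-level mapping described above):

```java
public class ReplicaPreferenceCheck {
    enum ReplicaType { NRT, TLOG, PULL }

    // Hypothetical helper: map a "prefer leaders" + "prefer replica type"
    // combination to a log level. A PULL replica can never be leader, so
    // that combination is an ERROR; a TLOG leader effectively behaves like
    // NRT, so that combination is merely a WARN. Either way the preference
    // just falls back to random; it does not stop Solr operation.
    static String logLevelFor(boolean preferLeader, ReplicaType preferred) {
        if (!preferLeader) return "NONE";
        switch (preferred) {
            case PULL: return "ERROR";
            case TLOG: return "WARN";
            default:   return "NONE";
        }
    }

    public static void main(String[] args) {
        System.out.println(logLevelFor(true, ReplicaType.PULL)); // ERROR
        System.out.println(logLevelFor(true, ReplicaType.TLOG)); // WARN
    }
}
```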


> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).






Re: BlendedInfixSuggester, a couple of questions

2018-06-04 Thread Alessandro Benedetti
Hi all,
*2)* has been added to Jira :
https://issues.apache.org/jira/browse/LUCENE-8347
A patch with the improvement and related tests is available for review

Regards

--
Alessandro Benedetti
Search Consultant, R&D Software Engineer, Director
www.sease.io

On Fri, Jun 1, 2018 at 12:57 PM, Alessandro Benedetti 
wrote:

> Hi all,
>
> *1)* has been added in Jira : https://issues.apache.org/
> jira/browse/LUCENE-8343
>  .
> A patch with the fix and related tests is available for review.
>
> Regards
>
> --
> Alessandro Benedetti
> Search Consultant, R&D Software Engineer, Director
> www.sease.io
>
> On Tue, May 22, 2018 at 3:05 PM, Alessandro Benedetti <
> a.benede...@sease.io> wrote:
>
>> Thanks David, I attach in copy Andrea, probably he wants to follow up as
>> he originally found the Lucene behavior.
>>
>> Cheers
>>
>> --
>> Alessandro Benedetti
>> Search Consultant, R&D Software Engineer, Director
>> www.sease.io
>>
>> On Tue, May 22, 2018 at 2:53 PM, David Smiley 
>> wrote:
>>
>>> Feel free to file an issue with a proposal; probably to Lucene in this
>>> case.
>>>
>>> On Tue, May 22, 2018 at 7:42 AM Alessandro Benedetti <
>>> benedetti.ale...@gmail.com> wrote:
>>>
 UP
 i am facing the same behaviour and I agree with Andrea observations,
 any view on this from the dev community ?

 Regards

 On Wed, Nov 29, 2017 at 4:36 PM, Andrea Gazzarini 
 wrote:

> Hi guys,
> any suggestion about this?
>
> Best,
> Andres
>
> On 27 Nov 2017 5:54 pm, "Andrea Gazzarini"  wrote:
>
>> Hi,
>> I'm using Solr 7.1.0 (but I guess all what I'm going to describe is
>> the same in the previous versions) and I have to implement a simple 
>> product
>> name suggester.
>>
>> I started focusing on the BlendedInfixLookup which could fit my
>> needs, but I have some doubts, even after looking at the code, about how 
>> it
>> works.  I have several questions:
>>
>> *1) org.apache.lucene.search.suggest.Lookup*
>> The formula in the BlendedInfixSuggester documentation says "final
>> weight = 1 - (0.10*position)" so it would suggest to me a float or a 
>> double
>> datatype. Instead, the "value" instance member of the Lookup class, which
>> should hold the computed weight, it's a long.
>> I realised that because, in a scenario where the weight field in my
>> schema always returns 1, the final computed weight is always 0 or 1,
>> therefore losing the precision when the actual result of the formula 
>> above
>> is between 0 and 1 (excluded).
>>
>> 2) *Position role within the **BlendedInfixSuggester*
>> If I write more than one term in the query, let's say
>>
>> "Mini Bar Fridge"
>>
>> I would expect in the results something like (note that
>> allTermsRequired=true and the schema weight field always returns
>> 1000)
>>
>> - *Mini Bar Fridge* something
>> - *Mini Bar Fridge* something else
>> - *Mini Bar* something *Fridge*
>> - *Mini Bar* something else *Fridge*
>> - *Mini* something *Bar Fridge*
>> ...
>>
>> Instead I see this:
>>
>> - *Mini Bar* something *Fridge*
>> - *Mini Bar* something else *Fridge*
>> - *Mini Bar Fridge* something
>> - *Mini Bar Fridge* something else
>> - *Mini* something *Bar Fridge*
>> ...
>>
>> After having a look at the suggester code (BlendedInfixSuggester.
>> createCoefficient), I see that the component takes in account only
>> one position, which is the lowest position (among the three matching 
>> terms)
>> within the term vector ("mini" in the example above) so all the 
>> suggestions
>> above have the same weight
>>
>> score = weight * (1 - 0.10 * position) = 1000 * (1 - 0.10 * 0) = 1000
>>
>> Is that the expected behaviour?
>>
>> Many thanks in advance
>> Andrea
>>
>


 --
 --

 Benedetti Alessandro
 Visiting card - http://about.me/alessandro_benedetti
 Blog - http://alexbenedetti.blogspot.co.uk

 "Tyger, tyger burning bright
 In the forests of the night,
 What immortal hand or eye
 Could frame thy fearful symmetry?"

 William Blake - Songs of Experience -1794 England

>>> --
>>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>>> http://www.solrenterprisesearchserver.com
>>>
>>
>>
>


[jira] [Commented] (LUCENE-8347) BlendedInfixSuggester to handle multi term matches better

2018-06-04 Thread Alessandro Benedetti (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500554#comment-16500554
 ] 

Alessandro Benedetti commented on LUCENE-8347:
--

It is recommended to merge this one first

> BlendedInfixSuggester to handle multi term matches better
> -
>
> Key: LUCENE-8347
> URL: https://issues.apache.org/jira/browse/LUCENE-8347
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: core/search
>Affects Versions: 7.3.1
>Reporter: Alessandro Benedetti
>Priority: Major
> Attachments: LUCENE-8347.patch
>
>
> Currently the blendedInfix suggester considers just the first match position 
> when scoring a suggestion.
> From the lucene-dev mailing list :
> "
> If I write more than one term in the query, let's say 
>  
> "Mini Bar Fridge" 
>  
> I would expect in the results something like (note that allTermsRequired=true 
> and the schema weight field always returns 1000)
>  
> - *Mini Bar Fridge* something
> - *Mini Bar Fridge* something else
> - *Mini Bar* something *Fridge*        
> - *Mini Bar* something else *Fridge*
> - *Mini* something *Bar Fridge*
> ...
>  
> Instead I see this: 
>  
> - *Mini Bar* something *Fridge*        
> - *Mini Bar* something else *Fridge*
> - *Mini Bar Fridge* something
> - *Mini Bar Fridge* something else
> - *Mini* something *Bar Fridge*
> ...
>  
> After having a look at the suggester code 
> (BlendedInfixSuggester.createCoefficient), I see that the component takes in 
> account only one position, which is the lowest position (among the three 
> matching terms) within the term vector ("mini" in the example above) so all 
> the suggestions above have the same weight 
> "
> Scope of this Jira issue is to improve the BlendedInfix to better manage 
> those scenarios.






[GitHub] lucene-solr pull request #393: Lucene 8347

2018-06-04 Thread alessandrobenedetti
GitHub user alessandrobenedetti opened a pull request:

https://github.com/apache/lucene-solr/pull/393

Lucene 8347

Introduced multi term management for the BlendedInfix suggester.
The score of each suggestion will be calculated based on : 

- the positional coefficient of each token matched
- the length of the suggestion ( minor)

This patch is supposed to go in after LUCENE-8343

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/SeaseLtd/lucene-solr LUCENE-8347

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/393.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #393


commit e83e8ee1a42388606fffd10330ed1aeec9518098
Author: Alessandro Benedetti 
Date:   2018-06-01T11:52:41Z

[LUCENE-8343] introduced weight 0 check and positional coefficient scaling 
+ tests

commit a416b7a867568876bdbdb77483b0538cf53c
Author: Alessandro Benedetti 
Date:   2018-06-04T17:17:57Z

[LUCENE-8347] improved positional coefficient to manage multi terms + token 
count coefficient + tests




---




[jira] [Created] (LUCENE-8347) BlendedInfixSuggester to handle multi term matches better

2018-06-04 Thread Alessandro Benedetti (JIRA)
Alessandro Benedetti created LUCENE-8347:


 Summary: BlendedInfixSuggester to handle multi term matches better
 Key: LUCENE-8347
 URL: https://issues.apache.org/jira/browse/LUCENE-8347
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Reporter: Alessandro Benedetti


Currently the blendedInfix suggester considers just the first match position 
when scoring a suggestion.
From the lucene-dev mailing list:
"
If I write more than one term in the query, let's say 
 
"Mini Bar Fridge" 
 
I would expect in the results something like (note that allTermsRequired=true 
and the schema weight field always returns 1000)
 
- *Mini Bar Fridge* something
- *Mini Bar Fridge* something else
- *Mini Bar* something *Fridge*        
- *Mini Bar* something else *Fridge*
- *Mini* something *Bar Fridge*
...
 
Instead I see this: 
 
- *Mini Bar* something *Fridge*        
- *Mini Bar* something else *Fridge*
- *Mini Bar Fridge* something
- *Mini Bar Fridge* something else
- *Mini* something *Bar Fridge*
...
 
After having a look at the suggester code 
(BlendedInfixSuggester.createCoefficient), I see that the component takes in 
account only one position, which is the lowest position (among the three 
matching terms) within the term vector ("mini" in the example above) so all the 
suggestions above have the same weight 
"
Scope of this Jira issue is to improve the BlendedInfix to better manage those 
scenarios.
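For reference, the documented formula ({{final weight = weight * (1 - 0.10 * position)}}) can be modelled in a few lines (a hypothetical standalone sketch, not the actual BlendedInfixSuggester.createCoefficient code); it shows why considering only the lowest matched position makes all the example suggestions tie:

```java
public class BlendedWeightSketch {
    // Hypothetical model of the linear blender: weight is the stored weight
    // (1000 in the example above) and lowestPosition is the smallest position
    // among the matched terms, which is all the current coefficient uses.
    static double blendedWeight(long weight, int lowestPosition) {
        return weight * (1 - 0.10 * lowestPosition);
    }

    public static void main(String[] args) {
        // "Mini Bar Fridge something" and "Mini Bar something Fridge" both
        // match "mini" at position 0, so they receive the same score even
        // though "Fridge" sits at very different positions.
        double a = blendedWeight(1000, 0); // "Mini Bar Fridge something"
        double b = blendedWeight(1000, 0); // "Mini Bar something Fridge"
        System.out.println(a == b); // prints true
    }
}
```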






[jira] [Commented] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Nhat Nguyen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500497#comment-16500497
 ] 

Nhat Nguyen commented on LUCENE-8165:
-

+1: {{copyOfSubArray}} to be more explicit about the fact that it is a copy

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.






[JENKINS] Lucene-Solr-SmokeRelease-master - Build # 1040 - Still Failing

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/1040/

No tests ran.

Build Log:
[...truncated 24180 lines...]
[asciidoctor:convert] asciidoctor: ERROR: about-this-guide.adoc: line 1: 
invalid part, must have at least one section (e.g., chapter, appendix, etc.)
[asciidoctor:convert] asciidoctor: ERROR: solr-glossary.adoc: line 1: invalid 
part, must have at least one section (e.g., chapter, appendix, etc.)
 [java] Processed 2214 links (1768 relative) to 3105 anchors in 246 files
 [echo] Validated Links & Anchors via: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr-ref-guide/bare-bones-html/

-dist-changes:
 [copy] Copying 4 files to 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/changes

-dist-keys:
  [get] Getting: http://home.apache.org/keys/group/lucene.asc
  [get] To: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/KEYS

package:

-unpack-solr-tgz:

-ensure-solr-tgz-exists:
[mkdir] Created dir: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked
[untar] Expanding: 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/package/solr-8.0.0.tgz
 into 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/solr/build/solr.tgz.unpacked

generate-maven-artifacts:

resolve:

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

resolve:

[...truncated repeated resolve / ivy-availability-check / ivy-configure blocks...]
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 
/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-master/lucene/top-level-ivy-settings.xml

resolve:

ivy-availability-check:
[loadresource] Do not set property disallowed.ivy.jars.list as its length is 0.

-ivy-fail-disallowed-ivy-version:

ivy-fail:

ivy-configure:
[ivy:configure] :: loading settings :: file = 

[jira] [Comment Edited] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500492#comment-16500492
 ] 

Adrien Grand edited comment on LUCENE-8165 at 6/4/18 4:39 PM:
--

bq.  I think our code will be more clear and less error-prone with these helper 
methods.

+1 I was going to suggest something like that too. I understand why someone 
would like the explicitness of System.arraycopy, but I miss the type safety and 
conciseness of Arrays.copyOf/copyOfRange. Maybe call the second method that you 
suggested something like {{copyOfSubArray}} to be more explicit about the fact 
that it is a copy?


was (Author: jpountz):
bq.  I think our code will be more clear and less error-prone with these helper 
methods.

+1 I was going to use something like that too. I understand why someone would 
like the explicitness of System.arraycopy, but I miss the type safety and 
conciseness of Arrays.copyOf/copyOfRange. Maybe call the second method that you 
suggested something like {{copyOfSubArray}} to be more explicit about the fact 
that it is a copy?

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.
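
The zero-fill behavior described above is easy to reproduce. A minimal sketch, relying only on the JDK's documented `Arrays.copyOfRange` semantics (the class and method names here are illustrative, not from the patch):

```java
import java.util.Arrays;

public class CopyOfRangeDemo {
    // Arrays.copyOfRange allows `to` to run past the end of the source
    // array: instead of throwing ArrayIndexOutOfBoundsException, the
    // missing tail is silently filled with zeros.
    static int[] slice(int[] a, int from, int to) {
        return Arrays.copyOfRange(a, from, to);
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        // `to` == 6 is past the end, but no exception is thrown:
        System.out.println(Arrays.toString(slice(a, 1, 6)));
        // prints [2, 3, 0, 0, 0] -- the last three zeros are fabricated
    }
}
```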



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500492#comment-16500492
 ] 

Adrien Grand commented on LUCENE-8165:
--

bq.  I think our code will be more clear and less error-prone with these helper 
methods.

+1 I was going to use something like that too. I understand why someone would 
like the explicitness of System.arraycopy, but I miss the type safety and 
conciseness of Arrays.copyOf/copyOfRange. Maybe call the second method that you 
suggested something like {{copyOfSubArray}} to be more explicit about the fact 
that it is a copy?

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.






[jira] [Commented] (SOLR-12445) Upgrade Dropwizard Metrics to 4.0.2 release

2018-06-04 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500477#comment-16500477
 ] 

Andrzej Bialecki  commented on SOLR-12445:
--

If this issue is not resolved in time for the Solr 7.4 release, I propose to at 
least upgrade to 3.2.6, where the reservoir bug is fixed.

> Upgrade Dropwizard Metrics to 4.0.2 release
> ---
>
> Key: SOLR-12445
> URL: https://issues.apache.org/jira/browse/SOLR-12445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> This version of the library is compatible with Java 9 and fixes an important 
> bug in ExponentiallyDecayingReservoir, which resulted in incorrect values 
> being reported after long periods of inactivity.






[jira] [Commented] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Nhat Nguyen (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500444#comment-16500444
 ] 

Nhat Nguyen commented on LUCENE-8165:
-

[~rcmuir] and [~simonw]

I've submitted a new patch which removes Arrays#copyOf. This patch is on top of 
the #copyOfRange patch.

I considered introducing `ArrayUtils#growExact(array, newLength)` and 
`ArrayUtils#subArray(array, from, to)`. The method `ArrayUtils#growExact` will grow 
an array to the exact given length instead of an over-allocated length like 
`ArrayUtils#grow` does. I think our code will be more clear and less error-prone with 
these helper methods. I am open to suggestions.

Please have a look when you have time. Thank you!
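
A minimal sketch of what these proposed helpers could look like (the names come from this thread; the exact signatures and error handling are assumptions, not the committed patch):

```java
import java.util.Arrays;

public class ArrayUtilSketch {
    // growExact(array, newLength): grow to exactly newLength, with no
    // over-allocation; rejects shrinking so callers cannot silently truncate.
    static int[] growExact(int[] array, int newLength) {
        if (newLength < array.length) {
            throw new IllegalArgumentException("newLength must be >= the current length");
        }
        return Arrays.copyOf(array, newLength);
    }

    // subArray(array, from, to): copy array[from..to), checking bounds up
    // front so bad indices fail fast instead of being zero-filled.
    static int[] subArray(int[] array, int from, int to) {
        if (from < 0 || from > to || to > array.length) {
            throw new ArrayIndexOutOfBoundsException(
                "from=" + from + " to=" + to + " length=" + array.length);
        }
        int[] copy = new int[to - from];
        System.arraycopy(array, from, copy, 0, to - from);
        return copy;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(growExact(new int[]{1, 2}, 4)));      // [1, 2, 0, 0]
        System.out.println(Arrays.toString(subArray(new int[]{1, 2, 3}, 1, 3))); // [2, 3]
    }
}
```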

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.






[jira] [Updated] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Nhat Nguyen (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8165:

Attachment: LUCENE-8165-copy-of.patch

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.






[jira] [Updated] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Nhat Nguyen (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8165:

Attachment: LUCENE-8165_copy_of.patch

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.






[jira] [Updated] (LUCENE-8165) ban Arrays.copyOfRange with forbidden APIs

2018-06-04 Thread Nhat Nguyen (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nhat Nguyen updated LUCENE-8165:

Attachment: (was: LUCENE-8165-copy-of.patch)

> ban Arrays.copyOfRange with forbidden APIs
> --
>
> Key: LUCENE-8165
> URL: https://issues.apache.org/jira/browse/LUCENE-8165
> Project: Lucene - Core
>  Issue Type: Bug
>Reporter: Robert Muir
>Priority: Major
> Attachments: LUCENE-8165_copy_of.patch, 
> LUCENE-8165_copy_of_range.patch, LUCENE-8165_start.patch, 
> LUCENE-8165_start.patch
>
>
> This method is no good, because instead of throwing AIOOBE for bad bounds, it 
> will silently fill with zeros (essentially silent corruption). Unfortunately 
> it is used in quite a few places so replacing it with e.g. arrayCopy may 
> uncover some interesting surprises.
> See LUCENE-8164 for motivation.






Re: [jira] [Commented] (PYLUCENE-41) JArray type issue

2018-06-04 Thread Petrus Hyvönen
Hi Andi,

Do you have a suggestion of where to start chasing this issue? Is it in the
definition of objects in java that makes them uncastable, or...

Seems like others do not use this feature of creating two-dimensional
arrays, or are there other methods of creating them?

All the best,
/Petrus


On Mon, Mar 19, 2018 at 10:30 PM, Petrus Hyvönen (JIRA) 
wrote:

>
> [ https://issues.apache.org/jira/browse/PYLUCENE-41?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16405482#comment-16405482 ]
>
> Petrus Hyvönen commented on PYLUCENE-41:
> 
>
> Hi,
>
>
>
> Yes, it's the assignment to the object array that is an issue. This
> assignment worked in JCC 3.0 release version somehow.
>
> Regards
>
> /Petrus
>
>
>
> > JArray type issue
> > -
> >
> > Key: PYLUCENE-41
> > URL: https://issues.apache.org/jira/browse/PYLUCENE-41
> > Project: PyLucene
> >  Issue Type: Bug
> > Environment: windows 7, python 3
> >Reporter: Petrus Hyvönen
> >Priority: Major
> >
> > Hi,
> >
> > In the JCC 3.0 release version and earlier (2.7) it is possible to make a
> > two-dimensional double array by:
> >
> > {{mask = JArray('object')(5)}}
> > {{for i in range(5):}}
> >  {{mask[i] = JArray('double')([1.0, 2.0])}}
> > In <=3.0 this gives 'mask' the following type:
> > JArray[, , ...
> > For the svn version it gives a 'TypeError: JArray[1.0, 2.0]' in the
> > assignment to mask[i]
> >
> > Not sure if this is a bug or a change in how to do things.
> > Best Regards
> > /Petrus
> >
> >
>
>
>



-- 
_
Petrus Hyvönen, Uppsala, Sweden
Mobile Phone/SMS:+46 73 803 19 00


[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-04 Thread JIRA


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500426#comment-16500426
 ] 

Tomás Fernández Löbbe commented on SOLR-11982:
--

I think this is expected, at least with the current state of things. A TLOG 
replica that is the leader doesn't stop being a TLOG replica (the replica type is 
recorded in the cluster state and doesn't change). IMO, what we want is a rule 
that one can use to prefer the leader (or a non-leader, which would be more common). 
Something like {{shards.preference=isLeader:false replicaType:TLOG}}

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order e.g. by replica type. The attached patch adds support for 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with \{{shards.sort=replicaType:PULL|TLOG }}(which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).






Re: Not adding badapples this week.

2018-06-04 Thread Erick Erickson
Adrien:

"Do you know whether there is something that makes these jobs any different?"

Unfortunately no. I'm not really very well versed in the various test
environments; maybe Uwe, Steve Rowe, or Hoss might have some insight?

On Mon, Jun 4, 2018 at 3:34 AM, Adrien Grand  wrote:
> Thanks for helping on this front Erick. I noticed a significant decrease in
> noise since you started badapple-ing bad tests, but I'm observing that our
> smoke-release builds still keep failing because of Solr tests (10 out of the
> last 10 builds) in spite of the fact that they disable bad apples:
>  - https://builds.apache.org/job/Lucene-Solr-SmokeRelease-master/
>  - https://builds.apache.org/job/Lucene-Solr-SmokeRelease-7.x/
>
> Do you know whether there is something that makes these jobs any different?
>
> Le mar. 29 mai 2018 à 18:12, Erick Erickson  a
> écrit :
>>
>> With the long weekend and the fact that the number of non-BadApple
>> tests is fairly small this week, I'll skip adding more BadApple tests
>> until next week.
>>
>> We're scarily close to the non-BadApple'd tests coming under control.
>> If we're lucky, we can draw a line in the sand soon and then start working
>> on the backlog. I'll be encouraged if we can start shrinking the
>> BadApple'd tests.
>>
>




[jira] [Updated] (SOLR-12443) Add a default policy set for solr test framework

2018-06-04 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12443:
-
Description: 
The goal here is to increase test coverage for the policy framework.

We can add policies that we know will never be violated, e.g. "Do not allow more 
than 100 cores on any node".

By doing so we will ensure that the policy framework is always being 
executed. I'd imagine bugs like SOLR-12358 would have been caught earlier 
this way

> Add a default policy set for solr test framework
> 
>
> Key: SOLR-12443
> URL: https://issues.apache.org/jira/browse/SOLR-12443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
> Environment: The goal here is to increase test coverage for the 
> policy framework.
> We can add policies that we know will never be violated, e.g. "Do not allow more 
> than 100 cores on any node".
> By doing so we will ensure that the policy framework is always being 
> executed. I'd imagine bugs like SOLR-12358 would have been caught earlier 
> this way
>  
>Reporter: Varun Thacker
>Priority: Major
>
> The goal here is to increase test coverage for the policy framework.
> We can add policies that we know will never be violated, e.g. "Do not allow more 
> than 100 cores on any node".
> By doing so we will ensure that the policy framework is always being 
> executed. I'd imagine bugs like SOLR-12358 would have been caught earlier 
> this way






[jira] [Updated] (SOLR-12443) Add a default policy set for solr test framework

2018-06-04 Thread Cassandra Targett (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12443?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cassandra Targett updated SOLR-12443:
-
Environment: 

 

  was:
The goal here is to increase test coverage for the policy framework.

We can add policies that we know will never be violated, e.g. "Do not allow more 
than 100 cores on any node".

By doing so we will ensure that the policy framework is always being 
executed. I'd imagine bugs like SOLR-12358 would have been caught earlier 
this way

 


> Add a default policy set for solr test framework
> 
>
> Key: SOLR-12443
> URL: https://issues.apache.org/jira/browse/SOLR-12443
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
> Environment:  
>Reporter: Varun Thacker
>Priority: Major
>
> The goal here is to increase test coverage for the policy framework.
> We can add policies that we know will never get violated  - Do not allow more 
> than 100 cores on any node 
> But in doing so we will ensue that the policy framework is always being 
> executed .  I'd imagine bugs like SOLR-12358 would have been caught earlier 
> this way






[jira] [Updated] (SOLR-11519) Suggestions for replica count violations

2018-06-04 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11519?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11519:
--
Description: 
Example 

{code}
{replica:"<3", "port":"8983"}
{code}

If such a policy rule is configured and there are 3 or more replicas placed 
on a node with port 8983, that is a violation. The suggestions endpoint should 
give suggestions to move replicas from those nodes so that the policy rule is 
not violated. For example, if there are 4 replicas for a given collection on 
nodes with port 8983, the suggestions API would return 2 MOVE_REPLICA 
operations.
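
The arithmetic behind that suggestion count can be sketched as follows; `movesNeeded` is an illustrative helper, not the actual autoscaling implementation:

```java
public class ReplicaRuleSketch {
    // A rule like {replica:"<3", ...} allows at most (limit - 1) replicas
    // on a matching node; anything beyond that must be moved away.
    static int movesNeeded(int replicasOnNode, int lessThanLimit) {
        int maxAllowed = lessThanLimit - 1;
        return Math.max(0, replicasOnNode - maxAllowed);
    }

    public static void main(String[] args) {
        // 4 replicas of a collection on a port-8983 node, rule "<3":
        System.out.println(movesNeeded(4, 3)); // prints 2 -> 2 MOVE_REPLICA ops
        System.out.println(movesNeeded(2, 3)); // prints 0 -> no violation
    }
}
```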

> Suggestions for replica count violations
> 
>
> Key: SOLR-11519
> URL: https://issues.apache.org/jira/browse/SOLR-11519
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Assignee: Noble Paul
>Priority: Major
> Fix For: 7.2, master (8.0)
>
>
> Example 
> {code}
> {replica:"<3", "port":"8983"}
> {code}
> If such a policy rule is configured and there are 3 or more replicas placed 
> on a node with port 8983, that is a violation. The suggestions endpoint 
> should give suggestions to move replicas from those nodes so that the policy 
> rule is not violated. For example, if there are 4 replicas for a given 
> collection on nodes with port 8983, the suggestions API would return 2 
> MOVE_REPLICA operations.






[jira] [Updated] (SOLR-12447) Allow SimplePostTool to POST hidden files.

2018-06-04 Thread Ian Goldsmith-Rooney (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12447?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ian Goldsmith-Rooney updated SOLR-12447:

Affects Version/s: master (8.0)
Fix Version/s: master (8.0)

> Allow SimplePostTool to POST hidden files.
> --
>
> Key: SOLR-12447
> URL: https://issues.apache.org/jira/browse/SOLR-12447
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: master (8.0)
>Reporter: Ian Goldsmith-Rooney
>Priority: Minor
>  Labels: newbie, patch
> Fix For: master (8.0)
>
>
> Currently, the SimplePostTool ignores all hidden files without a toggle. This 
> feature will add a toggle to allow POSTing hidden files for indexing.






[jira] [Commented] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-04 Thread David Smiley (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500343#comment-16500343
 ] 

David Smiley commented on LUCENE-8344:
--

To demonstrate the issue, in the patch I added a TokenStreamToAutomaton.BUG 
boolean flag so a test can see what happens when the suggest index had trailing 
holes but the query-time analysis differs.

org.apache.lucene.search.suggest.analyzing.AnalyzingSuggesterTest#testStandard 
see the "round trip" test
 With BUG==true: fails (bad for back-compat)
 With BUG==false: passes (therefore a reindex fixes)

org.apache.lucene.search.suggest.document.TestPrefixCompletionQuery#testAnalyzerWithSepAndNoPreservePos
 see "test trailing stopword with a new document"
 With BUG==true: passes (good for back-compat)
 With BUG==false: fails(*) 
 (*): however, if you flip the analyzer passed to the PrefixCompletionQuery 
constructor to the "completionAnalyzer" (instead of the plain/original 
"analyzer"), then it passes. So apparently this may require users to change how 
it's used? (ouch)

CC [~areek]

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. FuzzySuggester subclass ) and NRT Document Suggester and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and if there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automata even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.






[jira] [Updated] (LUCENE-8344) TokenStreamToAutomaton doesn't ignore trailing posInc when preservePositionIncrements=false

2018-06-04 Thread David Smiley (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8344?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated LUCENE-8344:
-
Attachment: LUCENE-8344.patch

> TokenStreamToAutomaton doesn't ignore trailing posInc when 
> preservePositionIncrements=false
> ---
>
> Key: LUCENE-8344
> URL: https://issues.apache.org/jira/browse/LUCENE-8344
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/suggest
>Reporter: David Smiley
>Priority: Major
> Attachments: LUCENE-8344.patch, LUCENE-8344.patch
>
>
> TokenStreamToAutomaton in Lucene core is used by the AnalyzingSuggester 
> (incl. FuzzySuggester subclass ) and NRT Document Suggester and soon the 
> SolrTextTagger.  It has a setting {{preservePositionIncrements}} defaulting 
> to true.  If it's set to false (e.g. to ignore stopwords) and if there is a 
> _trailing_ position increment greater than 1, TS2A will _still_ add position 
> increments (holes) into the automata even though it was configured not to.
> I'm filing this issue separate from LUCENE-8332 where I first found it.  The 
> fix is very simple but I'm concerned about back-compat ramifications so I'm 
> filing it separately.  I'll attach a patch to show the problem.






[jira] [Created] (SOLR-12447) Allow SimplePostTool to POST hidden files.

2018-06-04 Thread Ian Goldsmith-Rooney (JIRA)
Ian Goldsmith-Rooney created SOLR-12447:
---

 Summary: Allow SimplePostTool to POST hidden files.
 Key: SOLR-12447
 URL: https://issues.apache.org/jira/browse/SOLR-12447
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Ian Goldsmith-Rooney


Currently, the SimplePostTool ignores all hidden files without a toggle. This 
feature will add a toggle to allow POSTing hidden files for indexing.






[jira] [Commented] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.

2018-06-04 Thread Oleg Kalnichevski (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16500332#comment-16500332
 ] 

Oleg Kalnichevski commented on SOLR-12439:
--

HttpClient 5.0 async "... offers a single API for Async, Http/1.1 Http/2...". I 
am not sure what idea you have in mind. 

Oleg

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.
> ---
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>







[JENKINS] Lucene-Solr-master-Linux (64bit/jdk-10) - Build # 22174 - Unstable!

2018-06-04 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/22174/
Java: 64bit/jdk-10 -XX:-UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00017325418398871747345"

Stack Trace:
java.lang.NumberFormatException: For input string: "00017325418398871747345"
at __randomizedtesting.SeedInfo.seed([A2DF458CF5192A9B:BA25EB271C57D69A]:0)
at java.base/java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.base/java.lang.Long.parseLong(Long.java:692)
at java.base/java.lang.Long.parseLong(Long.java:817)
at org.apache.solr.update.TransactionLog.&lt;init&gt;(TransactionLog.java:153)
at org.apache.solr.update.TransactionLog.&lt;init&gt;(TransactionLog.java:141)
at org.apache.solr.update.TransactionLogTest.testBigLastAddSize(TransactionLogTest.java:34)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)
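
The top frames show the root cause: TransactionLog passes a string to Long.parseLong, but this 23-digit value exceeds Long.MAX_VALUE (9223372036854775807, 19 digits), so parsing throws. A minimal, self-contained sketch of that failure mode (the class name is illustrative, not from the Solr code):

```java
public class LongOverflowDemo {
    public static void main(String[] args) {
        // Long.MAX_VALUE has 19 digits; this string's numeric value does
        // not fit in a long, so Long.parseLong throws
        // NumberFormatException -- the same failure mode as the test above.
        String tooBig = "00017325418398871747345";
        try {
            Long.parseLong(tooBig);
            System.out.println("parsed");
        } catch (NumberFormatException e) {
            System.out.println("NumberFormatException: " + e.getMessage());
        }
    }
}
```

Leading zeros are accepted by parseLong; only the magnitude of the value causes the exception here.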


FAILED:  org.apache.solr.update.TransactionLogTest.testBigLastAddSize

Error Message:
For input string: "00013652265431949537175"

Stack Trace:
java.lang.NumberFormatException: For input string: "00013652265431949537175"
at __randomizedtesting.SeedInfo.seed([A2DF458CF5192A9B:BA25EB271C57D69A]:0)
at 

[jira] [Commented] (LUCENE-8332) New ConcatenateGraphTokenStream (move/rename CompletionTokenStream)

2018-06-04 Thread Steve Rowe (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500280#comment-16500280
 ] 

Steve Rowe commented on LUCENE-8332:


bq. I hate to bother you Steve Rowe but if you have any tips on diagnosing 
puzzling Yetus build failures then I'd appreciate it.

The patch looks like it was produced by IntelliJ, which appears not to be 
compatible with some git tooling.  Perhaps just regenerate it using {{git diff}}?

> New ConcatenateGraphTokenStream (move/rename CompletionTokenStream)
> ---
>
> Key: LUCENE-8332
> URL: https://issues.apache.org/jira/browse/LUCENE-8332
> Project: Lucene - Core
>  Issue Type: New Feature
>  Components: modules/analysis
>Reporter: David Smiley
>Assignee: David Smiley
>Priority: Major
> Attachments: LUCENE-8332.patch, LUCENE-8332.patch, LUCENE-8332.patch
>
>  Time Spent: 3h 10m
>  Remaining Estimate: 0h
>
> Let's move and rename the CompletionTokenStream in the suggest module into the 
> analysis module, renamed as ConcatenateGraphTokenStream. See comments in 
> LUCENE-8323 leading to this idea. Such a TokenStream (or TokenFilter?) has 
> several uses:
>  * for the suggest module
>  * by the SolrTextTagger for NER/ERD use cases – SOLR-12376
>  * for doing complete match search efficiently
> It will need a factory – a TokenFilterFactory, even though we don't have a 
> TokenFilter based subclass of TokenStream.
> It appears there is no back-compat concern in it suddenly disappearing from 
> the suggest module as it's marked experimental and it only seems to be public 
> now perhaps due to some technicality (it has package level constructors).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-repro - Build # 755 - Still Unstable

2018-06-04 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-repro/755/

[...truncated 51 lines...]
[repro] Jenkins log URL: 
https://builds.apache.org/job/Lucene-Solr-Tests-master/2554/consoleText

[repro] Revision: 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[repro] Repro line:  ant test  -Dtestcase=ConfusionMatrixGeneratorTest 
-Dtests.method=testGetConfusionMatrixWithSNB -Dtests.seed=2D876800A0E19369 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=sr-Latn-ME 
-Dtests.timezone=CST -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestPolicyCloud 
-Dtests.method=testCreateCollectionAddShardWithReplicaTypeUsingPolicy 
-Dtests.seed=349DC482AE72AA08 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=hi-IN -Dtests.timezone=Africa/Algiers -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestTriggerIntegration 
-Dtests.method=testTriggerThrottling -Dtests.seed=349DC482AE72AA08 
-Dtests.multiplier=2 -Dtests.slow=true -Dtests.locale=hi 
-Dtests.timezone=Asia/Kamchatka -Dtests.asserts=true -Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=TestCollectionStateWatchers 
-Dtests.method=testWaitForStateWatcherIsRetainedOnPredicateFailure 
-Dtests.seed=3ACEB281457D6C48 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=uk-UA -Dtests.timezone=America/Atikokan -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] Repro line:  ant test  -Dtestcase=GraphTest 
-Dtests.seed=3ACEB281457D6C48 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=bg -Dtests.timezone=Australia/West -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[repro] git rev-parse --abbrev-ref HEAD
[repro] git rev-parse HEAD
[repro] Initial local git branch/revision: 
fe83838ec3768f25964a04510cd10772cf034d34
[repro] git fetch

[...truncated 2 lines...]
[repro] git checkout 3dc4fa199c175ed6351f66bac5c23c73b4e3f89a

[...truncated 2 lines...]
[repro] git merge --ff-only

[...truncated 1 lines...]
[repro] ant clean

[...truncated 6 lines...]
[repro] Test suites by module:
[repro]lucene/classification
[repro]   ConfusionMatrixGeneratorTest
[repro]solr/solrj
[repro]   GraphTest
[repro]   TestCollectionStateWatchers
[repro]solr/core
[repro]   TestPolicyCloud
[repro]   TestTriggerIntegration
[repro] ant compile-test

[...truncated 1061 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=5 
-Dtests.class="*.ConfusionMatrixGeneratorTest" -Dtests.showOutput=onerror  
-Dtests.seed=2D876800A0E19369 -Dtests.multiplier=2 -Dtests.slow=true 
-Dtests.locale=sr-Latn-ME -Dtests.timezone=CST -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8

[...truncated 98 lines...]
[repro] ant compile-test

[...truncated 1452 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.GraphTest|*.TestCollectionStateWatchers" 
-Dtests.showOutput=onerror  -Dtests.seed=3ACEB281457D6C48 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=bg -Dtests.timezone=Australia/West 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 2806 lines...]
[repro] Setting last failure code to 256

[repro] ant compile-test

[...truncated 1329 lines...]
[repro] ant test-nocompile -Dtests.dups=5 -Dtests.maxfailures=10 
-Dtests.class="*.TestPolicyCloud|*.TestTriggerIntegration" 
-Dtests.showOutput=onerror  -Dtests.seed=349DC482AE72AA08 -Dtests.multiplier=2 
-Dtests.slow=true -Dtests.locale=hi-IN -Dtests.timezone=Africa/Algiers 
-Dtests.asserts=true -Dtests.file.encoding=UTF-8

[...truncated 3868 lines...]
[repro] Setting last failure code to 256

[repro] Failures:
[repro]   0/5 failed: 
org.apache.lucene.classification.utils.ConfusionMatrixGeneratorTest
[repro]   0/5 failed: org.apache.solr.client.solrj.io.graph.GraphTest
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.TestPolicyCloud
[repro]   0/5 failed: org.apache.solr.cloud.autoscaling.sim.TestPolicyCloud
[repro]   2/5 failed: 
org.apache.solr.cloud.autoscaling.sim.TestTriggerIntegration
[repro]   2/5 failed: org.apache.solr.common.cloud.TestCollectionStateWatchers
[repro] git checkout fe83838ec3768f25964a04510cd10772cf034d34

[...truncated 2 lines...]
[repro] Exiting with code 256

[...truncated 5 lines...]


[jira] [Updated] (LUCENE-8346) Allow QueryBuilder subclass to override createSpanQuery

2018-06-04 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-8346:
-
Environment: (was: QueryBuilder creates one span near query per path 
when multi-word synonyms are detected in a phrase query. Span near queries are 
created using createSpanQuery which is a protected final method. Since the 
method is final it is not possible to change the behavior in a subclass. It can 
be useful for clients to change how this query is built so I propose to make it 
a non-final protected method.)
Description: QueryBuilder creates one span near query per path when 
multi-word synonyms are detected in a phrase query. Span near queries are 
created using createSpanQuery which is a protected final method. Since the 
method is final it is not possible to change the behavior in a subclass. It can 
be useful for clients to change how this query is built so I propose to make it 
a non-final protected method.

> Allow QueryBuilder subclass to override createSpanQuery
> ---
>
> Key: LUCENE-8346
> URL: https://issues.apache.org/jira/browse/LUCENE-8346
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8250.patch
>
>
> QueryBuilder creates one span near query per path when multi-word synonyms 
> are detected in a phrase query. Span near queries are created using 
> createSpanQuery which is a protected final method. Since the method is final 
> it is not possible to change the behavior in a subclass. It can be useful for 
> clients to change how this query is built so I propose to make it a non-final 
> protected method.






[jira] [Comment Edited] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500253#comment-16500253
 ] 

Mark Miller edited comment on SOLR-12297 at 6/4/18 2:00 PM:


{quote}If that works, great. I must be doing something wrong.
{quote}
For the Http2SolrClient, you would have to enable it in code - it uses a 
different Transport class. It's not set up to work with HTTP/1.1 without 
flipping which Transport class is used in the code.

Thanks for fixing the hard-coded versions, I'll look at that.
{quote}But creating a core will fail
{quote}
Well, the client will speak HTTP/2, but have you set up Jetty to run an HTTP/2 
connector instead of HTTP/1.1?


was (Author: markrmil...@gmail.com):
{quote}If that works, great. I must be doing something wrong.
{quote}
For the Http2SolrClient, you would have to enable it in code - it uses a 
different Transport class It's not setup to work with 1.1 with flipping what 
Transport class is used in the code.

Thanks for fixing the hard coded versions, I'll look at that.
{quote}But creating a core will fail
{quote}
Well the client will speak HTTP/2, but have you setup Jetty to run an HTTP/2 
connector instead of HTTP/1.1?

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (SOLR-12297) Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12297?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500253#comment-16500253
 ] 

Mark Miller commented on SOLR-12297:


{quote}If that works, great. I must be doing something wrong.
{quote}
For the Http2SolrClient, you would have to enable it in code - it uses a 
different Transport class. It's not set up to work with HTTP/1.1 without 
flipping which Transport class is used in the code.

Thanks for fixing the hard-coded versions, I'll look at that.
{quote}But creating a core will fail
{quote}
Well, the client will speak HTTP/2, but have you set up Jetty to run an HTTP/2 
connector instead of HTTP/1.1?

> Add Http2SolrClient, capable of HTTP/1.1, HTTP/2, and asynchronous requests.
> 
>
> Key: SOLR-12297
> URL: https://issues.apache.org/jira/browse/SOLR-12297
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Assignee: Mark Miller
>Priority: Major
> Attachments: starburst-ivy-fixes.patch
>
>
> Blocking or async support as well as HTTP2 compatible with multiplexing.
> Once it supports enough and is stable, replace internal usage, allowing 
> async, and eventually move to HTTP2 connector and allow multiplexing. Could 
> support HTTP1.1 and HTTP2 on different ports depending on state of the world 
> then.
> The goal of the client itself is to work against HTTP1.1 or HTTP2 with 
> minimal or no code path differences and the same for async requests (should 
> initially work for both 1.1 and 2 and share majority of code).
> The client should also be able to replace HttpSolrClient and plug into the 
> other clients the same way.
> I doubt it would make sense to keep ConcurrentUpdateSolrClient eventually 
> though.
> I evaluated some clients and while there are a few options, I went with 
> Jetty's HttpClient. It's more mature than Apache HttpClient's support (in 5 
> beta) and we would have to update to a new API for Apache HttpClient anyway.
> Meanwhile, the Jetty guys have been very supportive of helping Solr with any 
> issues and I like having the client and server from the same project.






[jira] [Commented] (LUCENE-8346) Allow QueryBuilder subclass to override createSpanQuery

2018-06-04 Thread Adrien Grand (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500248#comment-16500248
 ] 

Adrien Grand commented on LUCENE-8346:
--

Sounds reasonable to me.

> Allow QueryBuilder subclass to override createSpanQuery
> ---
>
> Key: LUCENE-8346
> URL: https://issues.apache.org/jira/browse/LUCENE-8346
> Project: Lucene - Core
>  Issue Type: Improvement
> Environment: QueryBuilder creates one span near query per path when 
> multi-word synonyms are detected in a phrase query. Span near queries are 
> created using createSpanQuery which is a protected final method. Since the 
> method is final it is not possible to change the behavior in a subclass. It 
> can be useful for clients to change how this query is built so I propose to 
> make it a non-final protected method.
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8250.patch
>
>







[jira] [Updated] (LUCENE-8346) Allow QueryBuilder subclass to override createSpanQuery

2018-06-04 Thread Jim Ferenczi (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-8346:
-
Attachment: LUCENE-8250.patch

> Allow QueryBuilder subclass to override createSpanQuery
> ---
>
> Key: LUCENE-8346
> URL: https://issues.apache.org/jira/browse/LUCENE-8346
> Project: Lucene - Core
>  Issue Type: Improvement
> Environment: QueryBuilder creates one span near query per path when 
> multi-word synonyms are detected in a phrase query. Span near queries are 
> created using createSpanQuery which is a protected final method. Since the 
> method is final it is not possible to change the behavior in a subclass. It 
> can be useful for clients to change how this query is built so I propose to 
> make it a non-final protected method.
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8250.patch
>
>







[jira] [Commented] (LUCENE-8346) Allow QueryBuilder subclass to override createSpanQuery

2018-06-04 Thread Jim Ferenczi (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500241#comment-16500241
 ] 

Jim Ferenczi commented on LUCENE-8346:
--

I attached a simple patch that removes the final modifier.
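
A minimal, self-contained sketch (simplified hypothetical names, not the actual Lucene classes) of what removing the final modifier enables. Before the change, the method below was effectively `protected final`, so the override would not have compiled:

```java
// Stand-in for QueryBuilder: a template method that builds span-like queries.
class QueryBuilderLike {
    // Was `protected final` before the patch; now overridable.
    protected String createSpanQuery(String path) {
        return "spanNear(" + path + ")";
    }

    // The builder calls createSpanQuery internally, so the call
    // dispatches to a subclass override when one exists.
    public String buildPhrase(String path) {
        return createSpanQuery(path);
    }
}

// A client subclass customizing how the span query is built.
class BoostingBuilder extends QueryBuilderLike {
    @Override
    protected String createSpanQuery(String path) {
        return "boosted(" + super.createSpanQuery(path) + ")";
    }
}
```

Because the base class invokes the method through dynamic dispatch, `new BoostingBuilder().buildPhrase("a b")` yields the customized query string instead of the default one.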

> Allow QueryBuilder subclass to override createSpanQuery
> ---
>
> Key: LUCENE-8346
> URL: https://issues.apache.org/jira/browse/LUCENE-8346
> Project: Lucene - Core
>  Issue Type: Improvement
> Environment: QueryBuilder creates one span near query per path when 
> multi-word synonyms are detected in a phrase query. Span near queries are 
> created using createSpanQuery which is a protected final method. Since the 
> method is final it is not possible to change the behavior in a subclass. It 
> can be useful for clients to change how this query is built so I propose to 
> make it a non-final protected method.
>Reporter: Jim Ferenczi
>Priority: Minor
> Attachments: LUCENE-8250.patch
>
>







[jira] [Created] (LUCENE-8346) Allow QueryBuilder subclass to override createSpanQuery

2018-06-04 Thread Jim Ferenczi (JIRA)
Jim Ferenczi created LUCENE-8346:


 Summary: Allow QueryBuilder subclass to override createSpanQuery
 Key: LUCENE-8346
 URL: https://issues.apache.org/jira/browse/LUCENE-8346
 Project: Lucene - Core
  Issue Type: Improvement
 Environment: QueryBuilder creates one span near query per path when 
multi-word synonyms are detected in a phrase query. Span near queries are 
created using createSpanQuery which is a protected final method. Since the 
method is final it is not possible to change the behavior in a subclass. It can 
be useful for clients to change how this query is built so I propose to make it 
a non-final protected method.
Reporter: Jim Ferenczi









[jira] [Commented] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500233#comment-16500233
 ] 

Mark Miller commented on SOLR-12439:


I also still see a classic and an async API for the clients in Apache 
HttpClient 5 beta. Perhaps we are just meaning different things by the idea of 
a "single API".

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.
> ---
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Updated] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.

2018-06-04 Thread Mark Miller (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mark Miller updated SOLR-12439:
---
Summary: Switch from Apache HttpClient to Jetty HttpClient which offers a 
single API for Async, Http/1.1 Http/2.  (was: Switch from Apache HttpClient to 
Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.h)

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.
> ---
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Comment Edited] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.h

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500223#comment-16500223
 ] 

Mark Miller edited comment on SOLR-12439 at 6/4/18 1:39 PM:


The issue doesn't state the reasons; it just mentions that the Jetty client 
offers these features in a single API. There are a variety of reasons we are 
switching that are more fully enumerated in a related issue.

I'd still list the unified API in that list, given that Apache HttpClient 5 is 
in beta, but this title is not trying to encompass the reasons for switching in 
any way; it is trying to encompass a work item.

 


was (Author: markrmil...@gmail.com):
The issue doesn't state the reasons, it just mention the the jetty client 
offers these features in a single API. There are a variety of reason we are 
switch that are more fully enumerated in a related issue.

I'd still list that in that least given Apache HttpClient 5 is in beta, but 
this title is not pretending to be the reason for switching.

 

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.h
> 
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-12439) Switch from Apache HttpClient to Jetty HttpClient which offers a single API for Async, Http/1.1 Http/2.h

2018-06-04 Thread Mark Miller (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500223#comment-16500223
 ] 

Mark Miller commented on SOLR-12439:


The issue doesn't state the reasons; it just mentions that the Jetty client 
offers these features in a single API. There are a variety of reasons we are 
switching that are more fully enumerated in a related issue.

I'd still list that, given that Apache HttpClient 5 is in beta, but this title 
is not pretending to be the reason for switching.

 

> Switch from Apache HttpClient to Jetty HttpClient which offers a single API 
> for Async, Http/1.1 Http/2.h
> 
>
> Key: SOLR-12439
> URL: https://issues.apache.org/jira/browse/SOLR-12439
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Mark Miller
>Priority: Major
>







[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-04 Thread Ere Maijala (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500222#comment-16500222
 ] 

Ere Maijala commented on SOLR-11982:


That's an issue if its type is still TLOG while it's serving as the leader. I'm 
under the impression that this should be a temporary situation, however, so it 
might very well be justified e.g. if the replica is on hardware that has the 
best capacity for handling queries. That's a hairy situation, though, since 
it's difficult to say what's the best thing to do. The logic could probably be 
extended with an option to prefer non-leaders.

> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order, e.g. by replica type. The attached patch adds support for a 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).
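
The shards.sort syntax described above could be exercised like this. This is a hypothetical sketch: host, port, and collection name are placeholders, and the parameter name and syntax come from the patch description, so they may differ in the released feature:

```shell
# Assemble a distributed-query URL that prefers PULL, then TLOG replicas,
# per the shards.sort parameter proposed in the issue above.
BASE="http://localhost:8983/solr/mycollection/select"
PARAMS='q=*:*&shards.sort=replicaType:PULL|TLOG'
echo "${BASE}?${PARAMS}"
```

In a live cluster the assembled URL would be passed to curl (with the pipe character URL-encoded or the URL quoted, as above); here we only print it.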






[jira] [Commented] (SOLR-12445) Upgrade Dropwizard Metrics to 4.0.2 release

2018-06-04 Thread Andrzej Bialecki (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12445?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500214#comment-16500214
 ] 

Andrzej Bialecki  commented on SOLR-12445:
--

It appears that {{metrics-ganglia}} 4.0.2 is missing from Maven Central - I 
filed an issue with the metrics project and am awaiting a response.

> Upgrade Dropwizard Metrics to 4.0.2 release
> ---
>
> Key: SOLR-12445
> URL: https://issues.apache.org/jira/browse/SOLR-12445
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: metrics
>Reporter: Andrzej Bialecki 
>Assignee: Andrzej Bialecki 
>Priority: Major
> Fix For: 7.4, master (8.0)
>
>
> This version of the library is compatible with Java 9 and fixes an important 
> bug in ExponentiallyDecayingReservoir, which resulted in incorrect values 
> being reported after long periods of inactivity.






[jira] [Commented] (SOLR-11982) Add support for indicating preferred replica types for queries

2018-06-04 Thread Shawn Heisey (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11982?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500203#comment-16500203
 ] 

Shawn Heisey commented on SOLR-11982:
-

I've got a question about this, sorry I didn't think of it while the issue was 
open.

If a preference is configured for TLOG replicas, what happens if a TLOG replica 
is elected leader?  I would think that most people would NOT want requests to 
go to that replica, because its operation while leader is the same as NRT.


> Add support for indicating preferred replica types for queries
> --
>
> Key: SOLR-11982
> URL: https://issues.apache.org/jira/browse/SOLR-11982
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: SolrCloud
>Affects Versions: 7.4, master (8.0)
>Reporter: Ere Maijala
>Assignee: Tomás Fernández Löbbe
>Priority: Minor
>  Labels: patch-available, patch-with-test
> Fix For: 7.4, master (8.0)
>
> Attachments: SOLR-11982-preferReplicaTypes.patch, 
> SOLR-11982-preferReplicaTypes.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch, 
> SOLR-11982.patch, SOLR-11982.patch, SOLR-11982.patch
>
>
> It would be nice to have the possibility to easily sort the shards in the 
> preferred order, e.g. by replica type. The attached patch adds support for a 
> {{shards.sort}} parameter that allows one to sort e.g. PULL and TLOG replicas 
> first with {{shards.sort=replicaType:PULL|TLOG}} (which would mean that NRT 
> replicas wouldn't be hit with queries unless they're the only ones available) 
> and/or to sort by replica location (like preferLocalShards=true but more 
> versatile).






[jira] [Commented] (SOLR-12356) Always auto-create ".system" collection when in SolrCloud mode

2018-06-04 Thread Noble Paul (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-12356?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500193#comment-16500193
 ] 

Noble Paul commented on SOLR-12356:
---

If you use SolrJ to read/write from the {{.system}} collection, it fails because 
SolrJ does a check before even sending the request. So auto-create can fail

> Always auto-create ".system" collection when in SolrCloud mode
> --
>
> Key: SOLR-12356
> URL: https://issues.apache.org/jira/browse/SOLR-12356
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Andrzej Bialecki 
>Priority: Major
>
> The {{.system}} collection is currently used for blobs, and in SolrCloud mode 
> it's also used for autoscaling history and as a metrics history store 
> (SOLR-11779). It should be automatically created on Overseer start if it's 
> missing.






[jira] [Commented] (SOLR-11911) TestLargeCluster.testSearchRate() failure

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/SOLR-11911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500192#comment-16500192
 ] 

ASF subversion and git services commented on SOLR-11911:


Commit 0a7c3f462f9b59da61aa0d05dc86d74ca38a10aa in lucene-solr's branch 
refs/heads/branch_7x from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=0a7c3f4 ]

SOLR-11911: Fix a number of synchronization issues in the simulator. Enable 
this test for now.


> TestLargeCluster.testSearchRate() failure
> -
>
> Key: SOLR-11911
> URL: https://issues.apache.org/jira/browse/SOLR-11911
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Andrzej Bialecki 
>Priority: Major
>
> My Jenkins found a branch_7x seed that reproduced 4/5 times for me:
> {noformat}
> Checking out Revision af9706cb89335a5aa04f9bcae0c2558a61803b50 
> (refs/remotes/origin/branch_7x)
> [...]
>[junit4]   2> NOTE: reproduce with: ant test  -Dtestcase=TestLargeCluster 
> -Dtests.method=testSearchRate -Dtests.seed=2D7724685882A83D -Dtests.slow=true 
> -Dtests.locale=be-BY -Dtests.timezone=Africa/Ouagadougou -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 1.24s J0  | TestLargeCluster.testSearchRate <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The trigger did not 
> fire at all
>[junit4]>  at 
> __randomizedtesting.SeedInfo.seed([2D7724685882A83D:703F3AE197440E72]:0)
>[junit4]>  at 
> org.apache.solr.cloud.autoscaling.sim.TestLargeCluster.testSearchRate(TestLargeCluster.java:547)
>[junit4]>  at java.lang.Thread.run(Thread.java:748)
> [...]
>[junit4]   2> NOTE: test params are: codec=CheapBastard, 
> sim=RandomSimilarity(queryNorm=true): {}, locale=be-BY, 
> timezone=Africa/Ouagadougou
>[junit4]   2> NOTE: Linux 4.1.0-custom2-amd64 amd64/Oracle Corporation 
> 1.8.0_151 (64-bit)/cpus=16,threads=1,free=388243840,total=502267904
> {noformat}






[jira] [Assigned] (LUCENE-8341) Record soft deletes in SegmentCommitInfo

2018-06-04 Thread Simon Willnauer (JIRA)


 [ 
https://issues.apache.org/jira/browse/LUCENE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-8341:
---

Resolution: Fixed
  Assignee: Simon Willnauer

>  Record soft deletes in SegmentCommitInfo
> -
>
> Key: LUCENE-8341
> URL: https://issues.apache.org/jira/browse/LUCENE-8341
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Assignee: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8341.patch, LUCENE-8341.patch, LUCENE-8341.patch, 
> LUCENE-8341.patch
>
>
>  This change adds the number of documents that are soft deleted but
> not hard deleted to the segment commit info. This is the last step
> towards making soft deletes as powerful as hard deletes, since now the
> number of documents can be read from commit points without opening a
> full-blown reader. This also allows merge policies to make decisions
> without requiring an NRT reader to get the relevant statistics. This
> change doesn't enforce any field to be used for soft deletes, and the
> statistic is maintained per segment.






[jira] [Commented] (LUCENE-8341) Record soft deletes in SegmentCommitInfo

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500189#comment-16500189
 ] 

ASF subversion and git services commented on LUCENE-8341:
-

Commit 21f03a49532d8623f839dfacb73532da11cc0be1 in lucene-solr's branch 
refs/heads/branch_7x from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=21f03a4 ]

LUCENE-8341: Record soft deletes in SegmentCommitInfo

This change adds the number of documents that are soft deleted but
not hard deleted to the segment commit info. This is the last step
towards making soft deletes as powerful as hard deletes, since now the
number of documents can be read from commit points without opening a
full-blown reader. This also allows merge policies to make decisions
without requiring an NRT reader to get the relevant statistics. This
change doesn't enforce any field to be used for soft deletes, and the
statistic is maintained per segment.
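The per-segment bookkeeping this commit describes can be sketched with a minimal dependency-free model: alongside maxDoc and the hard-delete count, the commit info now also carries a soft-delete count, so live-doc math needs no reader. Class and field names here are illustrative, not Lucene's actual API:

```java
// Hypothetical stand-in for the per-segment statistics kept at a commit point.
public class SoftDeleteStats {
    final int maxDoc;        // documents in the segment
    final int delCount;      // hard-deleted documents
    final int softDelCount;  // soft-deleted (but not hard-deleted) documents

    SoftDeleteStats(int maxDoc, int delCount, int softDelCount) {
        this.maxDoc = maxDoc;
        this.delCount = delCount;
        this.softDelCount = softDelCount;
    }

    // Live documents visible to a soft-deletes-aware reader, computed
    // straight from the commit metadata without opening the segment.
    int numLiveDocs() {
        return maxDoc - delCount - softDelCount;
    }

    public static void main(String[] args) {
        SoftDeleteStats seg = new SoftDeleteStats(100, 10, 5);
        System.out.println(seg.numLiveDocs()); // prints 85
    }
}
```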


>  Record soft deletes in SegmentCommitInfo
> -
>
> Key: LUCENE-8341
> URL: https://issues.apache.org/jira/browse/LUCENE-8341
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8341.patch, LUCENE-8341.patch, LUCENE-8341.patch, 
> LUCENE-8341.patch
>
>
>  This change adds the number of documents that are soft deleted but
> not hard deleted to the segment commit info. This is the last step
> towards making soft deletes as powerful as hard deletes, since now the
> number of documents can be read from commit points without opening a
> full-blown reader. This also allows merge policies to make decisions
> without requiring an NRT reader to get the relevant statistics. This
> change doesn't enforce any field to be used for soft deletes, and the
> statistic is maintained per segment.






[jira] [Updated] (SOLR-11522) Suggestions/recommendations to rebalance replicas

2018-06-04 Thread Noble Paul (JIRA)


 [ 
https://issues.apache.org/jira/browse/SOLR-11522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul updated SOLR-11522:
--
Description: Suggestions should recommend moving replicas from more loaded 
nodes to less loaded nodes to balance the cluster

> Suggestions/recommendations to rebalance replicas
> -
>
> Key: SOLR-11522
> URL: https://issues.apache.org/jira/browse/SOLR-11522
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Noble Paul
>Priority: Major
>
> Suggestions should recommend moving replicas from more loaded nodes to less 
> loaded nodes to balance the cluster






[jira] [Commented] (LUCENE-8341) Record soft deletes in SegmentCommitInfo

2018-06-04 Thread ASF subversion and git services (JIRA)


[ 
https://issues.apache.org/jira/browse/LUCENE-8341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16500186#comment-16500186
 ] 

ASF subversion and git services commented on LUCENE-8341:
-

Commit fe83838ec3768f25964a04510cd10772cf034d34 in lucene-solr's branch 
refs/heads/master from [~simonw]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fe83838 ]

LUCENE-8341: Record soft deletes in SegmentCommitInfo

This change adds the number of documents that are soft deleted but
not hard deleted to the segment commit info. This is the last step
towards making soft deletes as powerful as hard deletes, since now the
number of documents can be read from commit points without opening a
full-blown reader. This also allows merge policies to make decisions
without requiring an NRT reader to get the relevant statistics. This
change doesn't enforce any field to be used for soft deletes, and the
statistic is maintained per segment.


>  Record soft deletes in SegmentCommitInfo
> -
>
> Key: LUCENE-8341
> URL: https://issues.apache.org/jira/browse/LUCENE-8341
> Project: Lucene - Core
>  Issue Type: Improvement
>Affects Versions: 7.4, master (8.0)
>Reporter: Simon Willnauer
>Priority: Major
> Fix For: 7.4, master (8.0)
>
> Attachments: LUCENE-8341.patch, LUCENE-8341.patch, LUCENE-8341.patch, 
> LUCENE-8341.patch
>
>
>  This change adds the number of documents that are soft deleted but
> not hard deleted to the segment commit info. This is the last step
> towards making soft deletes as powerful as hard deletes, since now the
> number of documents can be read from commit points without opening a
> full-blown reader. This also allows merge policies to make decisions
> without requiring an NRT reader to get the relevant statistics. This
> change doesn't enforce any field to be used for soft deletes, and the
> statistic is maintained per segment.





