[jira] [Commented] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317947#comment-16317947
 ] 

Noble Paul commented on SOLR-11631:
---

[~varunthacker] It is not necessarily useful to a human user. However, it is 
useful for a program to know which exception was thrown.
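
For instance, once a failure yields a non-zero status, a client can branch on it without scanning for "errors". A minimal sketch, assuming the response has already been parsed into a Map (the class and method names here are illustrative, not SolrJ API):

```java
import java.util.List;
import java.util.Map;

public class SchemaResponseCheck {

    /**
     * Returns true if a parsed Schema API response indicates failure.
     * Prefers the normal mechanism (responseHeader.status != 0); for
     * servers without the fix, falls back to scanning for an "errors" list.
     */
    public static boolean isFailure(Map<String, Object> response) {
        Map<?, ?> header = (Map<?, ?>) response.get("responseHeader");
        if (header != null) {
            Number status = (Number) header.get("status");
            if (status != null && status.intValue() != 0) {
                return true; // normal failure signal
            }
        }
        // Fallback: pre-fix servers report status=0 but include an errors list
        List<?> errors = (List<?>) response.get("errors");
        return errors != null && !errors.isEmpty();
    }
}
```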

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11631.patch, SOLR-11631.patch, SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-11830) PKI authentication testcases not correct

2018-01-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11830.
---
   Resolution: Fixed
Fix Version/s: 7.3

> PKI authentication testcases not correct
> 
>
> Key: SOLR-11830
> URL: https://issues.apache.org/jira/browse/SOLR-11830
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
> Fix For: 7.3
>
> Attachments: SOLR-11830.patch
>
>
> It doesn't properly test the case where the user is missing.






[JENKINS] Lucene-Solr-7.x-Linux (32bit/jdk1.8.0_144) - Build # 1152 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1152/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseSerialGC

10 tests failed.
FAILED:  
org.apache.solr.client.solrj.embedded.TestEmbeddedSolrServerSchemaAPI.testSchemaAddFieldAndFailOnImmutable

Error Message:
error processing commands

Stack Trace:
org.apache.solr.api.ApiBag$ExceptionWithErrObject: error processing commands, 
errors: [{errorMessages=schema is not editable}], 
at 
__randomizedtesting.SeedInfo.seed([EC06B1CBF54FA637:77B25E1AA8D9CC26]:0)
at 
org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:92)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:180)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.embedded.TestEmbeddedSolrServerSchemaAPI.testSchemaAddFieldAndFailOnImmutable(TestEmbeddedSolrServerSchemaAPI.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgn

[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1615 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1615/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseG1GC

9 tests failed.
FAILED:  
org.apache.solr.client.solrj.request.SchemaTest.deletingAFieldThatDoesntExistInTheSchemaShouldFail

Error Message:
Error from server at http://127.0.0.1:65189/solr/collection1: error processing 
commands

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at http://127.0.0.1:65189/solr/collection1: error processing 
commands
at 
__randomizedtesting.SeedInfo.seed([1EA27B6BB7092F7:38055E817110C0ED]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:829)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.request.SchemaTest.deletingAFieldThatDoesntExistInTheSchemaShouldFail(SchemaTest.java:332)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleI

[jira] [Commented] (SOLR-11782) LatchWatcher.await doesn’t protect against spurious wakeup

2018-01-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11782?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317802#comment-16317802
 ] 

Tomás Fernández Löbbe commented on SOLR-11782:
--

Thanks for the hints [~dweiss], but {{await(millis, TimeUnit.MILLISECONDS)}} 
doesn't return the remaining time, so if I use that method I would need to 
calculate it myself, as in the previous patch. Is that what you suggest?

> LatchWatcher.await doesn’t protect against spurious wakeup
> --
>
> Key: SOLR-11782
> URL: https://issues.apache.org/jira/browse/SOLR-11782
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Tomás Fernández Löbbe
>Priority: Minor
> Attachments: SOLR-11782.patch, SOLR-11782.patch, SOLR-11782.patch
>
>
> I noticed that {{LatchWatcher.await}} does:
> {code}
> public void await(long timeout) throws InterruptedException {
>   synchronized (lock) {
> if (this.event != null) return;
> lock.wait(timeout);
>   }
> }
> {code}
> while the recommendation for {{lock.wait}} is to re-check the wait condition 
> even after the method returns, to guard against spurious wakeups. {{lock}} is a 
> private field on which {{notifyAll}} is called only after a zk event has been 
> handled. I think we should change the {{await}} method to something like:
> {code}
> public void await(long timeout) throws InterruptedException {
>   assert timeout > 0;
>   long timeoutTime = System.currentTimeMillis() + timeout;
>   synchronized (lock) {
> while (this.event == null) {
>   long nextTimeout = timeoutTime - System.currentTimeMillis();
>   if (nextTimeout <= 0) {
> return;
>   }
>   lock.wait(nextTimeout);
> }
>   }
> }
> {code}
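
As an aside, the same loop can be written against a monotonic clock ({{System.nanoTime}}) so that wall-clock adjustments cannot distort the timeout. This is an illustrative stand-in class, not the committed patch:

```java
/**
 * Sketch of a spurious-wakeup-safe await using a monotonic clock.
 * The Latch class here is a stand-in for LatchWatcher, for illustration.
 */
public class Latch {
    private final Object lock = new Object();
    private Object event;

    /** Records the event and wakes any waiter. */
    public void fire(Object e) {
        synchronized (lock) {
            this.event = e;
            lock.notifyAll();
        }
    }

    /** Waits up to timeoutMillis; returns the event, or null on timeout. */
    public Object await(long timeoutMillis) throws InterruptedException {
        assert timeoutMillis > 0;
        final long deadline = System.nanoTime() + timeoutMillis * 1_000_000L;
        synchronized (lock) {
            // Loop on the condition so a spurious wakeup just re-checks it.
            while (event == null) {
                long remainingMillis = (deadline - System.nanoTime()) / 1_000_000L;
                if (remainingMillis <= 0) {
                    return null; // timed out; also avoids wait(0) == wait forever
                }
                lock.wait(remainingMillis);
            }
            return event;
        }
    }
}
```

The `remainingMillis <= 0` guard matters: `Object.wait(0)` means "wait indefinitely", so the remaining time must never reach the wait call once it hits zero.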






[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21246 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21246/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseG1GC

8 tests failed.
FAILED:  
org.apache.solr.client.solrj.request.SchemaTest.addFieldTypeShouldntBeCalledTwiceWithTheSameName

Error Message:
Error from server at https://127.0.0.1:45891/solr/collection1: error processing 
commands

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException: 
Error from server at https://127.0.0.1:45891/solr/collection1: error processing 
commands
at 
__randomizedtesting.SeedInfo.seed([D8BB6442CF3161D:C86A4454D28E04B5]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteExecutionException.create(HttpSolrClient.java:829)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:620)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.request.SchemaTest.addFieldTypeShouldntBeCalledTwiceWithTheSameName(SchemaTest.java:649)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxF

[jira] [Updated] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8125:

Attachment: LUCENE-8125.patch

I added tests for emoji tag sequences. I also refactored TestICUTokenizer to 
only test the tokenizer (the tests were getting complex because the token filter 
was normalizing away joiners, selectors, tag_specs, etc.; we don't want that, 
since we are just trying to test tokenization here).

Now all the emoji types work.

> emoji sequence support in ICUTokenizer
> --
>
> Key: LUCENE-8125
> URL: https://issues.apache.org/jira/browse/LUCENE-8125
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-8125.patch, LUCENE-8125.patch, LUCENE-8125.patch, 
> LUCENE-8125.patch, LUCENE-8125.patch
>
>
> uax29 word break rules already know how to handle these correctly, we just 
> need to assign them a token type. 
> This is better than users trying to do this with custom rules (e.g. 
> LUCENE-7916) because they are script-independent (common/inherited).






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk1.8.0) - Build # 389 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/389/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

9 tests failed.
FAILED:  
org.apache.solr.client.solrj.embedded.TestEmbeddedSolrServerSchemaAPI.testSchemaAddFieldAndFailOnImmutable

Error Message:
error processing commands

Stack Trace:
org.apache.solr.api.ApiBag$ExceptionWithErrObject: error processing commands, 
errors: [{errorMessages=schema is not editable}], 
at 
__randomizedtesting.SeedInfo.seed([65954A42BAAC34FC:FE21A593E73A5EED]:0)
at 
org.apache.solr.handler.SchemaHandler.handleRequestBody(SchemaHandler.java:92)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
at 
org.apache.solr.client.solrj.embedded.EmbeddedSolrServer.request(EmbeddedSolrServer.java:180)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
at 
org.apache.solr.client.solrj.embedded.TestEmbeddedSolrServerSchemaAPI.testSchemaAddFieldAndFailOnImmutable(TestEmbeddedSolrServerSchemaAPI.java:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.eva

[jira] [Updated] (SOLR-11829) [Ref-Guide] Indexing documents with existing id

2018-01-08 Thread Munendra S N (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Munendra S N updated SOLR-11829:

Attachment: SOLR-11829.patch

Attaching the updated patch. Changes:
* Fix the anchor reference (as pointed out by [~ctargett])
* Change the *Common Fields* heading level from 2 to 1 (to match the other 
section levels)

I generated the PDF to verify that the links are correct.

> [Ref-Guide] Indexing documents with existing id
> ---
>
> Key: SOLR-11829
> URL: https://issues.apache.org/jira/browse/SOLR-11829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Munendra S N
>Assignee: Erick Erickson
> Attachments: SOLR-11829.patch, SOLR-11829.patch, SOLR-11829.patch, 
> SOLR-11829.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Solr Documentation for [Document 
> screen|http://lucene.apache.org/solr/guide/7_2/documents-screen.html] states 
> that if overwrite is set to false, then incoming documents with the same id 
> would be dropped.
> But the documentation of 
> [Indexing|http://lucene.apache.org/solr/guide/7_2/introduction-to-solr-indexing.html#introduction-to-solr-indexing]
>  and actual behavior states otherwise (i.e, allows the duplicate addition of 
> documents with the same id)
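
For reference, the per-document flag that controls this behavior in a JSON update request looks like the following (a sketch based on the documented update format; the field values are illustrative):

```json
{
  "add": {
    "doc": { "id": "1", "title": "first version" },
    "overwrite": false
  }
}
```

Posted to {{/update}} with {{Content-Type: application/json}}. With {{overwrite=false}}, Solr skips the uniqueKey-based replacement, so a second add with the same id produces a duplicate document rather than the incoming document being dropped, which matches the observed behavior.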






[jira] [Comment Edited] (SOLR-11829) [Ref-Guide] Indexing documents with existing id

2018-01-08 Thread Munendra S N (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317709#comment-16317709
 ] 

Munendra S N edited comment on SOLR-11829 at 1/9/18 5:01 AM:
-

Attaching the updated patch
Changes
* Fix the anchor reference (as pointed out by [~ctargett])
* Change *Common Fields* level to 1 from 2 (as same other section levels)

I generated the pdf to verify the correctness of links


was (Author: munendrasn):
Attaching with the updated patch
Changes
* Fix the anchor reference (as pointed out by [~ctargett])
* Change *Common Fields* level to 1 from 2 (as same other section levels)

I generated the pdf to verify the correctness of links

> [Ref-Guide] Indexing documents with existing id
> ---
>
> Key: SOLR-11829
> URL: https://issues.apache.org/jira/browse/SOLR-11829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Munendra S N
>Assignee: Erick Erickson
> Attachments: SOLR-11829.patch, SOLR-11829.patch, SOLR-11829.patch, 
> SOLR-11829.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Solr Documentation for [Document 
> screen|http://lucene.apache.org/solr/guide/7_2/documents-screen.html] states 
> that if overwrite is set to false, then incoming documents with the same id 
> would be dropped.
> But the documentation of 
> [Indexing|http://lucene.apache.org/solr/guide/7_2/introduction-to-solr-indexing.html#introduction-to-solr-indexing]
>  and actual behavior states otherwise (i.e, allows the duplicate addition of 
> documents with the same id)






[jira] [Updated] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8125:

Attachment: LUCENE-8125.patch

I updated the patch with a middle-of-the-road approach that better matches our 
RBBI rules. I don't think it would cause additional complexity if we wanted 
something similar for jflex.

I still kept a TODO around using the extended_pict set, which could be used 
here just like it is in the RBBI rules, for similar future-proofing (e.g. we 
know emoji evolve rapidly and tokenizers/indexes fall behind, so it would be 
nice).

But I think it's fair to keep that as a TODO; we gotta crawl before we can run.


> emoji sequence support in ICUTokenizer
> --
>
> Key: LUCENE-8125
> URL: https://issues.apache.org/jira/browse/LUCENE-8125
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-8125.patch, LUCENE-8125.patch, LUCENE-8125.patch, 
> LUCENE-8125.patch
>
>
> uax29 word break rules already know how to handle these correctly, we just 
> need to assign them a token type. 
> This is better than users trying to do this with custom rules (e.g. 
> LUCENE-7916) because they are script-independent (common/inherited).






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk1.8.0_144) - Build # 1151 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1151/
Java: 64bit/jdk1.8.0_144 -XX:+UseCompressedOops -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test

Error Message:
Expected 2 of 3 replicas to be active but only found 1; 
[core_node5:{"core":"c8n_1x3_lf_shard1_replica_n2","base_url":"http://127.0.0.1:43201","node_name":"127.0.0.1:43201_","state":"active","type":"NRT","leader":"true"}];
 clusterState: 
DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/16)={   
"pullReplicas":"0",   "replicationFactor":"1",   "shards":{"shard1":{   
"range":"8000-7fff",   "state":"active",   "replicas":{ 
"core_node4":{   "core":"c8n_1x3_lf_shard1_replica_n1",   
"base_url":"http://127.0.0.1:33301";,   "node_name":"127.0.0.1:33301_",  
 "state":"down",   "type":"NRT"}, "core_node5":{
   "core":"c8n_1x3_lf_shard1_replica_n2",   
"base_url":"http://127.0.0.1:43201";,   "node_name":"127.0.0.1:43201_",  
 "state":"active",   "type":"NRT",   "leader":"true"},  
   "core_node6":{   "state":"down",   
"base_url":"http://127.0.0.1:41953";,   
"core":"c8n_1x3_lf_shard1_replica_n3",   
"node_name":"127.0.0.1:41953_",   "type":"NRT",   
"router":{"name":"compositeId"},   "maxShardsPerNode":"1",   
"autoAddReplicas":"false",   "nrtReplicas":"3",   "tlogReplicas":"0"}

Stack Trace:
java.lang.AssertionError: Expected 2 of 3 replicas to be active but only found 1;
[core_node5:{"core":"c8n_1x3_lf_shard1_replica_n2","base_url":"http://127.0.0.1:43201","node_name":"127.0.0.1:43201_","state":"active","type":"NRT","leader":"true"}];
 clusterState: DocCollection(c8n_1x3_lf//collections/c8n_1x3_lf/state.json/16)={
  "pullReplicas":"0",
  "replicationFactor":"1",
  "shards":{"shard1":{
      "range":"8000-7fff",
      "state":"active",
      "replicas":{
        "core_node4":{
          "core":"c8n_1x3_lf_shard1_replica_n1",
          "base_url":"http://127.0.0.1:33301",
          "node_name":"127.0.0.1:33301_",
          "state":"down",
          "type":"NRT"},
        "core_node5":{
          "core":"c8n_1x3_lf_shard1_replica_n2",
          "base_url":"http://127.0.0.1:43201",
          "node_name":"127.0.0.1:43201_",
          "state":"active",
          "type":"NRT",
          "leader":"true"},
        "core_node6":{
          "state":"down",
          "base_url":"http://127.0.0.1:41953",
          "core":"c8n_1x3_lf_shard1_replica_n3",
          "node_name":"127.0.0.1:41953_",
          "type":"NRT"}}}},
  "router":{"name":"compositeId"},
  "maxShardsPerNode":"1",
  "autoAddReplicas":"false",
  "nrtReplicas":"3",
  "tlogReplicas":"0"}
at __randomizedtesting.SeedInfo.seed([28D11138177DA81D:A0852EE2B981C5E5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.testRf3WithLeaderFailover(LeaderFailoverAfterPartitionTest.java:169)
at org.apache.solr.cloud.LeaderFailoverAfterPartitionTest.test(LeaderFailoverAfterPartitionTest.java:56)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.e

Re: BugFix release 7.2.1

2018-01-08 Thread S G
Sorry, I missed some of the details, but here is what we did with success in
one of my past projects:

We can begin by supporting only those machines where Apache Solr's
regression tests are run.
The aim is to identify OS-independent performance regressions, not to
certify each OS where Solr could be run.

The repository side is easy too: we store the results in a performance-results
directory that lives in the Apache Solr GitHub repo.
This directory receives metric-result file(s) whenever a Solr release is made.
If older files are present, the latest metric file is used as the baseline
for comparing current performance.
When not making a release, the directory can still be used to compare the
current code's performance without writing to it.
When releasing Solr, the performance-metrics file should be updated
automatically.

Further improvements can include:
1) Deleting older files from performance-results directory
2) Having performance-results directories for each OS where Solr is
released (if we think OS-dependent performance issues could be there).

These ideas can be fine-tuned to ensure that they work.
Please point out any issues that would make this impractical.

Thanks
SG




On Mon, Jan 8, 2018 at 12:59 PM, Erick Erickson 
wrote:

> Hmmm, I think you missed my implied point. How are these metrics collected
> and compared? There are about a dozen different machines running various
> operating systems, etc. For these measurements to spot regressions and/or
> improvements, they need a repository where the results get
> published. So a report like "build XXX took YYY seconds to index ZZZ
> documents" doesn't tell us anything. You need to gather them for a
> _specific_ machine.
>
> As for whether they should be run or not, an annotation could help here:
> there are already @Slow, @Nightly, and @Weekly, and @Performance could be added.
> Mike McCandless has some of these kinds of things already for Lucene, so I
> think the first thing would be to check whether they already exist; it's
> possible you'd be reinventing the wheel.
>
> Best,
> Erick
>
> On Mon, Jan 8, 2018 at 11:45 AM, S G  wrote:
>
>> We can put some lower limits on CPU and Memory for running a performance
>> test.
>> If those lower limits are not met, then the test will just skip execution.
>>
> >> And then we put some upper bounds (time-wise) on the time spent by
> >> different parts of the test, like:
>>  - Max time taken to index 1 million documents
>>  - Max time taken to query, facet, pivot etc
>>  - Max time taken to delete 100,000 documents while read and writes are
>> happening.
>>
>> For all of the above, we can publish metrics like 5minRate, 95thPercent
>> and assert on values lower than a particular value.
>>
> >> I know some other software compares CPU cycles across different runs as
> >> well, but I'm not sure how.
>>
>> Such tests will give us more confidence when releasing/adopting new
>> features like pint compared to tint etc.
>>
>> Thanks
>> SG
>>
>>
>>
>> On Sat, Jan 6, 2018 at 9:59 AM, Erick Erickson 
>> wrote:
>>
>>> Not sure how performance tests in the unit tests would be interpreted.
>>> If I run the same suite on two different machines how do I compare the
>>> numbers?
>>>
>>> Or are you thinking of having some tests so someone can check out
>>> different versions of Solr and run the perf tests on a single machine,
>>> perhaps using bisect to pinpoint when something changed?
>>>
>>> I'm not opposed at all, just trying to understand how one would go about
>>> using such tests.
>>>
>>> Best,
>>> Erick
>>>
>>> On Fri, Jan 5, 2018 at 10:09 PM, S G  wrote:
>>>
 Just curious: does the test suite include some performance tests as well?
 I would like to know the performance impact of using pints vs tints or
 ints, etc.
 If they are not there, I can try to add some tests for this.

 Thanks
 SG


 On Fri, Jan 5, 2018 at 5:47 PM, Đạt Cao Mạnh 
 wrote:

> Hi all,
>
> I will work on SOLR-11771
>  today. It is a
> simple fix, and it will be great if it gets fixed in 7.2.1.
>
> On Fri, Jan 5, 2018 at 11:23 PM Erick Erickson <
> erickerick...@gmail.com> wrote:
>
>> Neither of those Solr fixes is earth-shatteringly important; they've
>> both been around for quite a while. I don't think it's urgent to include
>> them.
>>
>> That said, they're pretty simple and isolated so worth doing if Jim
>> is willing. But not worth straining much. I was just clearing out some
>> backlog over vacation.
>>
>> Strictly up to you Jim.
>>
>> Erick
>>
>> On Fri, Jan 5, 2018 at 6:54 AM, David Smiley <
>> david.w.smi...@gmail.com> wrote:
>>
>>> https://issues.apache.org/jira/browse/SOLR-11809 is in progress,
>>> should be easy and I think definitely worth backporting
>>>
>>> On Fri, Jan 5, 2018 at 8:52 AM Adrien Grand 
>>> wrote:

[jira] [Resolved] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream unclose able

2018-01-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley resolved SOLR-11692.
-
Resolution: Fixed

> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream unclose able
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Assignee: David Smiley
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
> Fix For: 7.3
>
> Attachments: SOLR-11692.patch, SOLR-11692.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger closeShield code in SolrDispatchFilter; however, there 
> are code paths where we pass the objects through to the DefaultHandler, which 
> can then no longer close the response.
> Example stack trace:
> java.lang.AssertionError: Attempted close of response output stream.
> at org.apache.solr.servlet.SolrDispatchFilter$2$1.close(SolrDispatchFilter.java:528)
> at org.eclipse.jetty.server.Dispatcher.commitResponse(Dispatcher.java:315)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:279)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:566)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:385)
> at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at searchserver.filter.SfdcDispatchFilter.doFilter(SfdcDispatchFilter.java:204)
> at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:370)
> at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
> at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
> at org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
> at org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
> at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:745)
> Related JIRA: SOLR-8933



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream unclose able

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317634#comment-16317634
 ] 

ASF subversion and git services commented on SOLR-11692:


Commit 9e3c16cf2ef0d2b9d562d8df75b4688d6ce0131a in lucene-solr's branch 
refs/heads/branch_7x from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9e3c16c ]

SOLR-11692: Constrain cases where SolrDispatchFilter uses closeShield

(cherry picked from commit 7a375fd)


> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream unclose able
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Assignee: David Smiley
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
> Fix For: 7.3
>
> Attachments: SOLR-11692.patch, SOLR-11692.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger closeShield code in SolrDispatchFilter; however, there 
> are code paths where we pass the objects through to the DefaultHandler, which 
> can then no longer close the response.

[jira] [Commented] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream unclose able

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317633#comment-16317633
 ] 

ASF subversion and git services commented on SOLR-11692:


Commit 7a375fda828015ab62702e2e0f07a1038aef40c6 in lucene-solr's branch 
refs/heads/master from [~dsmiley]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7a375fd ]

SOLR-11692: Constrain cases where SolrDispatchFilter uses closeShield


> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream unclose able
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Assignee: David Smiley
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
> Fix For: 7.3
>
> Attachments: SOLR-11692.patch, SOLR-11692.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger closeShield code in SolrDispatchFilter; however, there 
> are code paths where we pass the objects through to the DefaultHandler, which 
> can then no longer close the response.

[jira] [Updated] (SOLR-10995) No jetties were stopped in ChaosMonkey

2018-01-08 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tomás Fernández Löbbe updated SOLR-10995:
-
Attachment: SOLR-10995.patch

I've been testing with this patch and haven't seen the same error again. The 
test is still not 100% reliable, though, and I can get it to fail by beasting 
it with tests.nightly=true.

> No jetties were stopped in ChaosMonkey
> --
>
> Key: SOLR-10995
> URL: https://issues.apache.org/jira/browse/SOLR-10995
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 7.0
>Reporter: Tomás Fernández Löbbe
> Attachments: 1339.txt.zip, 1341.txt, SOLR-10995.patch
>
>
> In the last 10 days I've seen 5 failures of different ChaosMonkey tests 
> (nightly) with the message: "The Monkey ran for over 45 seconds and no 
> jetties were stopped - this is worth investigating!" in master only. This is 
> a new kind of failure, maybe something changed to trigger this.
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1333/
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1334/
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1337/
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1339/
> https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1341/






[jira] [Resolved] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-01-08 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved LUCENE-8106.

   Resolution: Done
 Assignee: Steve Rowe
Fix Version/s: 7.3
   master (8.0)

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8106.patch, LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}






[jira] [Updated] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8125:

Attachment: LUCENE-8125.patch

Updated patch, just with some code comments explaining the logic, in particular 
documenting that it's not a perfect science, and noting some alternatives that 
we could pursue. The current algorithm is very conservative.

In the ICU case the word break rules use "extended text segmentation rules from 
CLDR", so the breaks themselves also use an {{$Extended_Pict}} set, which is a 
subset of {{\[:Extended_Pictographic:]-\[:Emoji:]}}, but one that is maintained 
manually, I guess?

Anyway, the logic here could be substantially more aggressive, but I wanted to 
start with something simpler and by the book, so to speak. 

For more information, see: 
* http://unicode.org/reports/tr29/#WB3c
* https://www.unicode.org/reports/tr51/#Identification
* https://www.unicode.org/repos/cldr/trunk/common/segments/root.xml
* 
http://source.icu-project.org/repos/icu/trunk/icu4c/source/data/brkitr/rules/word.txt

> emoji sequence support in ICUTokenizer
> --
>
> Key: LUCENE-8125
> URL: https://issues.apache.org/jira/browse/LUCENE-8125
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-8125.patch, LUCENE-8125.patch, LUCENE-8125.patch
>
>
> uax29 word break rules already know how to handle these correctly; we just 
> need to assign them a token type. 
> This is better than users trying to do this with custom rules (e.g. 
> LUCENE-7916) because they are script-independent (common/inherited).






[jira] [Resolved] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe resolved SOLR-11631.
---
   Resolution: Fixed
 Assignee: Steve Rowe
Fix Version/s: 7.3
   master (8.0)

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
>Assignee: Steve Rowe
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11631.patch, SOLR-11631.patch, SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[jira] [Commented] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317563#comment-16317563
 ] 

Steve Rowe commented on SOLR-11631:
---

bq. [...] the error key is not inside the responseHeader section, which seems 
like the right place to me, since the error info is metadata, not data.

I'll open a separate issue to place this info consistently across all Solr 
APIs.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11631.patch, SOLR-11631.patch, SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[jira] [Commented] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317561#comment-16317561
 ] 

Steve Rowe commented on SOLR-11631:
---

{quote}
{noformat}
"metadata":[
  "error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject",
  "root-error-class","org.apache.solr.api.ApiBag$ExceptionWithErrObject"],
{noformat}
In what cases will this part of the response be useful to users?
{quote}

[~varunthacker], I think this should go on another issue.

> Schema API always has status 0
> --
>
> Key: SOLR-11631
> URL: https://issues.apache.org/jira/browse/SOLR-11631
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-11631.patch, SOLR-11631.patch, SOLR-11631.patch
>
>
> Schema API failures always return status=0.
> Consumers should be able to detect failure using normal mechanisms (i.e. 
> status != 0) rather than having to parse the response for "errors".  Right 
> now if I attempt to {{add-field}} an already existing field, I get:
> {noformat}
> {responseHeader={status=0,QTime=XXX},errors=[{add-field={name=YYY, ...}, 
> errorMessages=[Field 'YYY' already exists.]}]}
> {noformat}






[jira] [Commented] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317558#comment-16317558
 ] 

ASF subversion and git services commented on SOLR-11631:


Commit 34b30da60cc4b6f9ed0a528d470eb075871db6f7 in lucene-solr's branch 
refs/heads/branch_7x from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=34b30da ]

SOLR-11631: The Schema API should return non-zero status when there are failures





[jira] [Commented] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317559#comment-16317559
 ] 

ASF subversion and git services commented on SOLR-11631:


Commit 9f221796fe1b79ead6509efdcaa0a17c5a382c65 in lucene-solr's branch 
refs/heads/master from [~steve_rowe]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9f22179 ]

SOLR-11631: The Schema API should return non-zero status when there are failures





[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_144) - Build # 21245 - Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/21245/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData

Error Message:
Node watch should have fired!

Stack Trace:
java.lang.AssertionError: Node watch should have fired!
at 
__randomizedtesting.SeedInfo.seed([C8D2BBF2592E9F73:EE4254A00DA05979]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData(TestDistribStateManager.java:256)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)




Build Log:
[...truncated 12159 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager
   [junit4]   2> Creating dataDir: 
/home/jenkins/workspace/Lucene-Solr-master-Linux/solr/build/solr-core/test/J1/temp/solr.cloud.autoscaling.sim.TestDistribStateManager_C8D2BBF2592

[jira] [Comment Edited] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317554#comment-16317554
 ] 

Steve Rowe edited comment on SOLR-11631 at 1/9/18 2:18 AM:
---

Noble's patch, fixing up test failures due to differently formatted error 
messages.

Precommit and all Solr tests succeed.

Committing shortly.


was (Author: steve_rowe):
Patch fixing up test failures due to differently formatted error messages.

Precommit and all Solr tests succeed.

Committing shortly.




[jira] [Updated] (SOLR-11631) Schema API always has status 0

2018-01-08 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-11631:
--
Attachment: SOLR-11631.patch

Patch fixing up test failures due to differently formatted error messages.

Precommit and all Solr tests succeed.

Committing shortly.




[jira] [Commented] (LUCENE-8106) Add script to attempt to reproduce failing tests from a Jenkins log

2018-01-08 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317536#comment-16317536
 ] 

Steve Rowe commented on LUCENE-8106:


bq. So are you going to add Jobs to Apache Jenkins for this?

Yes, I was planning on it.

> Add script to attempt to reproduce failing tests from a Jenkins log
> ---
>
> Key: LUCENE-8106
> URL: https://issues.apache.org/jira/browse/LUCENE-8106
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Steve Rowe
> Attachments: LUCENE-8106.patch, LUCENE-8106.patch
>
>
> This script will be runnable from a downstream job triggered by an upstream 
> failing Jenkins job, passing log location info between the two.
> The script will also be runnable manually from a developer's cmdline.
> From the script help:
> {noformat}
> Usage:
>  python3 -u reproduceJenkinsFailures.py URL
> Must be run from a Lucene/Solr git workspace. Downloads the Jenkins
> log pointed to by the given URL, parses it for Git revision and failed
> Lucene/Solr tests, checks out the Git revision in the local workspace,
> groups the failed tests by module, then runs
> 'ant test -Dtest.dups=5 -Dtests.class="*.test1[|*.test2[...]]" ...'
> in each module of interest, failing at the end if any of the runs fails.
> To control the maximum number of concurrent JVMs used for each module's
> test run, set 'tests.jvms', e.g. in ~/lucene.build.properties
> {noformat}
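> The log-parsing step can be pictured with a short Python sketch (illustrative only; the actual reproduceJenkinsFailures.py logic is more involved and also handles suite-level failures, Git revisions, and wrapped log lines):

```python
import re
from collections import defaultdict

# Matches lines such as:
#   FAILED:  org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData
# The greedy [\w.]+ consumes the package + class; the final \w+ is the method.
FAILED_RE = re.compile(r"^FAILED:\s+([\w.]+)\.(\w+)$")

def failed_tests(log_lines):
    """Group failed test method names by fully qualified test class."""
    by_class = defaultdict(set)
    for line in log_lines:
        m = FAILED_RE.match(line.strip())
        if m:
            by_class[m.group(1)].add(m.group(2))
    return dict(by_class)
```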






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 390 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/390/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.analysis.de.TestGermanMinimalStemFilter

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001\tempDir-001

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001\tempDir-001:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001\tempDir-001
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\analysis\common\test\J1\temp\lucene.analysis.de.TestGermanMinimalStemFilter_BBF7A8BA58D072C0-001

at __randomizedtesting.SeedInfo.seed([BBF7A8BA58D072C0]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestDemoParallelLeafReader

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004\segs:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004\segs

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004\segs:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004\segs
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\lucene\build\core\test\J0\temp\lucene.index.TestDemoParallelLeafReader_724E33A04A79E2E3-001\tempDir-004

at __randomizedtesting.SeedInfo.seed([724E33A04A79E2E3]:0)

[jira] [Commented] (SOLR-10716) Add termVectors Stream Evaluator

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317491#comment-16317491
 ] 

ASF subversion and git services commented on SOLR-10716:


Commit 63b3e553e7b0d00595c94720390e22c7256ca7a4 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=63b3e55 ]

SOLR-10716: Add termVectors Stream Evaluator


> Add termVectors Stream Evaluator
> 
>
> Key: SOLR-10716
> URL: https://issues.apache.org/jira/browse/SOLR-10716
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.0
>
> Attachments: SOLR-10716.patch, SOLR-10716.patch, SOLR-10716.patch, 
> SOLR-10716.patch, SOLR-10716.patch
>
>
> The termVectors Stream Evaluator returns tf-idf word vectors for a text field 
> in a list of tuples. 
> Syntax:
> {code}
> let(a=select(search(...), analyze(a, body) as terms),
>  b=termVectors(a, minDocFreq=".00", maxDocFreq="1.0")) 
> {code}
> The code above performs a search then uses the *select* stream and *analyze* 
> evaluator to attach a list of terms to each document.
>  
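> Conceptually, the tf-idf weighting behind termVectors can be sketched in plain Python (an illustration of the math only, not Solr's implementation; Solr's actual idf formula and analysis chain differ):

```python
import math
from collections import Counter

def term_vectors(docs, min_doc_freq=0.0, max_doc_freq=1.0):
    """tf-idf vectors for tokenized docs.

    Terms whose document frequency, as a fraction of all docs, falls outside
    [min_doc_freq, max_doc_freq] are dropped, mirroring the evaluator's
    minDocFreq/maxDocFreq parameters.
    """
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term once per document
    vocab = sorted(t for t in df if min_doc_freq <= df[t] / n <= max_doc_freq)
    rows = []
    for doc in docs:
        tf = Counter(doc)
        rows.append([tf[t] * math.log(n / df[t]) for t in vocab])
    return vocab, rows
```

A term that appears in every document gets idf = log(1) = 0, so it contributes nothing to any vector, which is the usual tf-idf behavior.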






[jira] [Commented] (SOLR-10716) Add termVectors Stream Evaluator

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317492#comment-16317492
 ] 

ASF subversion and git services commented on SOLR-10716:


Commit 9ac376a0c4668072f223dd943e148cf5156b8765 in lucene-solr's branch 
refs/heads/branch_7x from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=9ac376a ]

SOLR-10716: Improve error handling





[jira] [Updated] (SOLR-11783) Rename core in solr standalone mode is not persisted

2018-01-08 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated SOLR-11783:

Fix Version/s: 7.2.1

> Rename core in solr standalone mode is not persisted
> 
>
> Key: SOLR-11783
> URL: https://issues.apache.org/jira/browse/SOLR-11783
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.1
>Reporter: Michael Dürr
>Assignee: Erick Erickson
> Fix For: 7.3, 7.2.1
>
>
> After I upgraded from solr 6.3.0 to 7.1.0 I recognized that the RENAME admin 
> command does not persist the new core name to the core.properties file.
> I'm not very familiar with the solr internals, but it seems like the 
> {quote}CorePropertiesLocator.buildCoreProperties(CoreDescriptor cd){quote} 
> method uses an invalid core descriptor to initialize the core properties that 
> get written to the core properties file.
> Best regards,
> Michael






[jira] [Updated] (SOLR-11555) If the query terms reduce to nothing, filter(clause) produces an NPE whereas fq=clause does not

2018-01-08 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated SOLR-11555:

Fix Version/s: 7.2.1

> If the query terms reduce to nothing, filter(clause) produces an NPE whereas 
> fq=clause does not
> ---
>
> Key: SOLR-11555
> URL: https://issues.apache.org/jira/browse/SOLR-11555
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.3, 7.2.1
>
> Attachments: SOLR-11555.patch, SOLR-11555.patch
>
>







[jira] [Commented] (SOLR-10716) Add termVectors Stream Evaluator

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317484#comment-16317484
 ] 

ASF subversion and git services commented on SOLR-10716:


Commit d189b587084dfc8ffb359061e43ec76761b5056d in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=d189b58 ]

SOLR-10716: Improve error handling





[jira] [Commented] (SOLR-10716) Add termVectors Stream Evaluator

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317483#comment-16317483
 ] 

ASF subversion and git services commented on SOLR-10716:


Commit 459ed85052a72219631f0dcdeb1b6650b632a8fa in lucene-solr's branch 
refs/heads/master from [~joel.bernstein]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=459ed85 ]

SOLR-10716: Add termVectors Stream Evaluator





[jira] [Updated] (SOLR-10261) TestStressCloudBlindAtomicUpdates.test_dv() fail

2018-01-08 Thread Steve Rowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10261?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Rowe updated SOLR-10261:
--
Attachment: SOLR-10261.patch

Modernized patch; it allows the following reproducing seed from my Jenkins to
pass on master:

{noformat}
   [junit4]   2> NOTE: reproduce with: ant test  
-Dtestcase=TestStressCloudBlindAtomicUpdates -Dtests.method=test_dv 
-Dtests.seed=51E04D7714EBAA76 -Dtests.multiplier=2 -Dtests.nightly=true 
-Dtests.slow=true 
-Dtests.linedocsfile=/home/jenkins/lucene-data/enwiki.random.lines.txt 
-Dtests.locale=ru -Dtests.timezone=Asia/Brunei -Dtests.asserts=true 
-Dtests.file.encoding=UTF-8
   [junit4] ERROR   2.95s J2 | TestStressCloudBlindAtomicUpdates.test_dv <<<
   [junit4]> Throwable #1: java.util.concurrent.ExecutionException: 
java.lang.RuntimeException: Error from server at 
http://127.0.0.1:58084/solr/test_col: Async exception during distributed 
update: Error from server at 
http://127.0.0.1:54160/solr/test_col_shard3_replica_n9: Server Error
   [junit4]> request: 
http://127.0.0.1:54160/solr/test_col_shard3_replica_n9/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A58084%2Fsolr%2Ftest_col_shard3_replica_n8%2F&wt=javabin&version=2
   [junit4]> Remote error message: Failed synchronous update on shard 
StdNode: http://127.0.0.1:58084/solr/test_col_shard3_replica_n8/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@22ddf3bb
   [junit4]>at 
__randomizedtesting.SeedInfo.seed([51E04D7714EBAA76:67F42F319EB69067]:0)
   [junit4]>at 
java.util.concurrent.FutureTask.report(FutureTask.java:122)
   [junit4]>at 
java.util.concurrent.FutureTask.get(FutureTask.java:192)
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.checkField(TestStressCloudBlindAtomicUpdates.java:283)
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates.test_dv(TestStressCloudBlindAtomicUpdates.java:195)
   [junit4]>at java.lang.Thread.run(Thread.java:748)
   [junit4]> Caused by: java.lang.RuntimeException: Error from server at 
http://127.0.0.1:58084/solr/test_col: Async exception during distributed 
update: Error from server at 
http://127.0.0.1:54160/solr/test_col_shard3_replica_n9: Server Error
   [junit4]> request: 
http://127.0.0.1:54160/solr/test_col_shard3_replica_n9/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A58084%2Fsolr%2Ftest_col_shard3_replica_n8%2F&wt=javabin&version=2
   [junit4]> Remote error message: Failed synchronous update on shard 
StdNode: http://127.0.0.1:58084/solr/test_col_shard3_replica_n8/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@22ddf3bb
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates$Worker.run(TestStressCloudBlindAtomicUpdates.java:411)
   [junit4]>at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
   [junit4]>at 
java.util.concurrent.FutureTask.run(FutureTask.java:266)
   [junit4]>at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   [junit4]>at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   [junit4]>... 1 more
   [junit4]> Caused by: 
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:58084/solr/test_col: Async exception during 
distributed update: Error from server at 
http://127.0.0.1:54160/solr/test_col_shard3_replica_n9: Server Error
   [junit4]> request: 
http://127.0.0.1:54160/solr/test_col_shard3_replica_n9/update?update.distrib=TOLEADER&distrib.from=http%3A%2F%2F127.0.0.1%3A58084%2Fsolr%2Ftest_col_shard3_replica_n8%2F&wt=javabin&version=2
   [junit4]> Remote error message: Failed synchronous update on shard 
StdNode: http://127.0.0.1:58084/solr/test_col_shard3_replica_n8/ update: 
org.apache.solr.client.solrj.request.UpdateRequest@22ddf3bb
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
   [junit4]>at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
   [junit4]>at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:211)
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpdates$Worker.doRandomAtomicUpdate(TestStressCloudBlindAtomicUpdates.java:370)
   [junit4]>at 
org.apache.solr.cloud.TestStressCloudBlindAtomicUpda

[jira] [Commented] (SOLR-11555) If the query terms reduce to nothing, filter(clause) produces an NPE whereas fq=clause does not

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11555?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317423#comment-16317423
 ] 

ASF subversion and git services commented on SOLR-11555:


Commit 4d919e2c2e34693a5cfa3cd2170ac3eb3ebdf465 in lucene-solr's branch 
refs/heads/branch_7_2 from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=4d919e2 ]

SOLR-11555: If the query terms reduce to nothing, filter(clause) produces an 
NPE whereas fq=clause does not


> If the query terms reduce to nothing, filter(clause) produces an NPE whereas 
> fq=clause does not
> ---
>
> Key: SOLR-11555
> URL: https://issues.apache.org/jira/browse/SOLR-11555
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Erick Erickson
>Assignee: Erick Erickson
>Priority: Minor
> Fix For: 7.3
>
> Attachments: SOLR-11555.patch, SOLR-11555.patch
>
>







[jira] [Commented] (SOLR-11783) Rename core in solr standalone mode is not persisted

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317415#comment-16317415
 ] 

ASF subversion and git services commented on SOLR-11783:


Commit 429719f25741827408d58d1c7a6fa884f5e955ff in lucene-solr's branch 
refs/heads/branch_7_2 from [~erickerickson]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=429719f ]

SOLR-11783: Rename core in solr standalone mode is not persisted


> Rename core in solr standalone mode is not persisted
> 
>
> Key: SOLR-11783
> URL: https://issues.apache.org/jira/browse/SOLR-11783
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Server
>Affects Versions: 7.1
>Reporter: Michael Dürr
>Assignee: Erick Erickson
> Fix For: 7.3
>
>
> After upgrading from Solr 6.3.0 to 7.1.0, I noticed that the RENAME admin 
> command does not persist the new core name to the core.properties file.
> I'm not very familiar with the Solr internals, but it seems that the 
> {quote}CorePropertiesLocator.buildCoreProperties(CoreDescriptor cd){quote} 
> method uses an invalid core descriptor to initialize the core properties that 
> get written to the core.properties file.
> Best regards,
> Michael
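For readers unfamiliar with what "persisting" means here: a core's name lives as a `name=...` entry in its core.properties file, and a persisted rename amounts to rewriting that entry. A minimal JDK-only sketch of that idea follows; the `CoreRenameSketch` class and `renameCore` helper are hypothetical illustrations, not Solr's actual CorePropertiesLocator API.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

// Hypothetical sketch: what "persisting a rename" amounts to for a
// core.properties file -- load the existing properties, overwrite the
// "name" key with the new core name, and write the file back out.
public class CoreRenameSketch {
    static String renameCore(String coreProperties, String newName) {
        Properties props = new Properties();
        try {
            props.load(new StringReader(coreProperties));
            props.setProperty("name", newName);  // the key holding the core name
            StringWriter out = new StringWriter();
            props.store(out, "core.properties");
            return out.toString();
        } catch (IOException e) {
            throw new RuntimeException(e);  // cannot happen with in-memory I/O
        }
    }

    public static void main(String[] args) {
        String updated = renameCore("name=oldCore\nconfigSet=_default\n", "newCore");
        System.out.println(updated.contains("name=newCore"));  // prints true
    }
}
```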






[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1150 - Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1150/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData

Error Message:
Node watch should have fired!

Stack Trace:
java.lang.AssertionError: Node watch should have fired!
at 
__randomizedtesting.SeedInfo.seed([B028724B80E4CB6:2D926876EC808ABC]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager.testGetSetRemoveData(TestDistribStateManager.java:256)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)




Build Log:
[...truncated 13677 lines...]
   [junit4] Suite: org.apache.solr.cloud.autoscaling.sim.TestDistribStateManager
   [junit4]   2> 3056379 INFO  
(SUITE-TestDistribStateManager-seed#[B028724B80E4CB6]-worker) [] 
o.a.s.SolrTestCaseJ4 SecureRandom sanity checks: 
test.solr.allowed.securerandom=null & java.security.egd=file

[jira] [Updated] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8125:

Attachment: LUCENE-8125.patch

I updated the patch with support for presentation selectors 
(http://unicode.org/emoji/charts/emoji-variants.html). 

Nothing fancy; if we want to hyper-optimize this stuff, JFlex is a better 
place: all of these cases are spelled out in the Unicode data files.

> emoji sequence support in ICUTokenizer
> --
>
> Key: LUCENE-8125
> URL: https://issues.apache.org/jira/browse/LUCENE-8125
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-8125.patch, LUCENE-8125.patch
>
>
> The UAX#29 word break rules already know how to handle these correctly; we 
> just need to assign them a token type. 
> This is better than users trying to do this with custom rules (e.g. 
> LUCENE-7916) because the UAX#29 rules are script-independent (common/inherited).






[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7102 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7102/
Java: 32bit/jdk1.8.0_144 -client -XX:+UseConcMarkSweepGC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.benchmark.quality.TestQualityRun

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001\reuters.578.lines.txt.bz2:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001\reuters.578.lines.txt.bz2

C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001\reuters.578.lines.txt.bz2:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001\benchmark-001\reuters.578.lines.txt.bz2
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\benchmark\test\J1\temp\lucene.benchmark.quality.TestQualityRun_11DBF31D84C3F26F-001

at __randomizedtesting.SeedInfo.seed([11DBF31D84C3F26F]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.lucene.store.TestNIOFSDirectory

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_33DC533B1BB8FD10-001\testThreadSafety-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_33DC533B1BB8FD10-001\testThreadSafety-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_33DC533B1BB8FD10-001\testThreadSafety-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\core\test\J0\temp\lucene.store.TestNIOFSDirectory_33DC533B1BB8FD10-001\testThreadSafety-001

at

[jira] [Commented] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317274#comment-16317274
 ] 

Robert Muir commented on LUCENE-8125:
-

Note: I think it'd be nice to fix StandardTokenizer at some point too, but we 
would first need to bring its grammar up to the latest Unicode version. That 
way it would have the latest UAX#29 rules around this stuff, such as "Do not 
break within emoji ZWJ sequences." So there is some work to do for that, but we 
can tackle it here with ICU first.
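As a rough illustration of UAX#29-style word segmentation, the JDK alone can show the idea (with the caveat that java.text.BreakIterator tracks an older Unicode version than current ICU, which is exactly the gap described above). The `WordBreakSketch` class and `words` helper are hypothetical names for this sketch.

```java
import java.text.BreakIterator;
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

// JDK-only illustration of UAX#29-style word breaking: iterate the word
// boundaries and keep the segments that start with a letter or digit
// (skipping whitespace and punctuation segments).
public class WordBreakSketch {
    static List<String> words(String text) {
        BreakIterator bi = BreakIterator.getWordInstance(Locale.ROOT);
        bi.setText(text);
        List<String> out = new ArrayList<>();
        int start = bi.first();
        for (int end = bi.next(); end != BreakIterator.DONE; start = end, end = bi.next()) {
            String w = text.substring(start, end);
            if (Character.isLetterOrDigit(w.codePointAt(0))) {
                out.add(w);
            }
        }
        return out;
    }

    public static void main(String[] args) {
        // prints [Do, not, break, within, emoji, zwj, sequences]
        System.out.println(words("Do not break within emoji zwj sequences."));
    }
}
```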

> emoji sequence support in ICUTokenizer
> --
>
> Key: LUCENE-8125
> URL: https://issues.apache.org/jira/browse/LUCENE-8125
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-8125.patch
>
>
> The UAX#29 word break rules already know how to handle these correctly; we 
> just need to assign them a token type. 
> This is better than users trying to do this with custom rules (e.g. 
> LUCENE-7916) because the UAX#29 rules are script-independent (common/inherited).






[JENKINS] Lucene-Solr-Tests-7.x - Build # 310 - Still Unstable

2018-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-7.x/310/

14 tests failed.
FAILED:  org.apache.solr.cloud.AliasIntegrationTest.test

Error Message:
Collection not found: testalias4

Stack Trace:
org.apache.solr.common.SolrException: Collection not found: testalias4
at 
__randomizedtesting.SeedInfo.seed([AA4F17DB90C69AE8:221B28013E3AF710]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:851)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at 
org.apache.solr.client.solrj.request.UpdateRequest.commit(UpdateRequest.java:233)
at 
org.apache.solr.cloud.AliasIntegrationTest.test(AliasIntegrationTest.java:238)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testCreateShouldFai

[jira] [Updated] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir updated LUCENE-8125:

Attachment: LUCENE-8125.patch

Here's a patch. I did more cleanup of outdated BreakIterator code while I was 
here; it's no longer needed after the ICU upgrade (LUCENE-8122).

I added some simple tests, e.g. sequences such as 👩‍❤️‍👩 (WOMAN + ZWJ + HEAVY 
BLACK HEART + VARIATION SELECTOR-16 + ZWJ + WOMAN) are recognized as one token, 
because the rules already handle that. 

The filters we have, such as ICUNormalizer2Filter/ICUFoldingFilter, would reduce 
the above to WOMAN + HEAVY BLACK HEART + WOMAN, because they remove the default 
ignorables.
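What that reduction looks like can be sketched with the JDK alone (no ICU). The `EmojiSequenceSketch` class below is a hypothetical illustration, not the actual filter code: it builds the sequence from its code points and drops the two default-ignorable characters (ZWJ U+200D and VARIATION SELECTOR-16 U+FE0F).

```java
// JDK-only sketch: build WOMAN + ZWJ + HEAVY BLACK HEART + VS-16 + ZWJ +
// WOMAN and strip the default-ignorable code points, mimicking what the
// ICU normalization filters do to the token.
public class EmojiSequenceSketch {
    static final int WOMAN = 0x1F469;
    static final int ZWJ = 0x200D;    // ZERO WIDTH JOINER
    static final int HEART = 0x2764;  // HEAVY BLACK HEART
    static final int VS16 = 0xFE0F;   // VARIATION SELECTOR-16

    // Remove the two default-ignorable code points from the string.
    static int[] stripIgnorables(String s) {
        return s.codePoints().filter(cp -> cp != ZWJ && cp != VS16).toArray();
    }

    public static void main(String[] args) {
        String seq = new String(new int[] {WOMAN, ZWJ, HEART, VS16, ZWJ, WOMAN}, 0, 6);
        int[] reduced = stripIgnorables(seq);
        System.out.println(reduced.length);  // prints 3: WOMAN, HEART, WOMAN
    }
}
```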

> emoji sequence support in ICUTokenizer
> --
>
> Key: LUCENE-8125
> URL: https://issues.apache.org/jira/browse/LUCENE-8125
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Robert Muir
> Attachments: LUCENE-8125.patch
>
>
> The UAX#29 word break rules already know how to handle these correctly; we 
> just need to assign them a token type. 
> This is better than users trying to do this with custom rules (e.g. 
> LUCENE-7916) because the UAX#29 rules are script-independent (common/inherited).






[jira] [Created] (LUCENE-8125) emoji sequence support in ICUTokenizer

2018-01-08 Thread Robert Muir (JIRA)
Robert Muir created LUCENE-8125:
---

 Summary: emoji sequence support in ICUTokenizer
 Key: LUCENE-8125
 URL: https://issues.apache.org/jira/browse/LUCENE-8125
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Robert Muir


The UAX#29 word break rules already know how to handle these correctly; we just 
need to assign them a token type. 

This is better than users trying to do this with custom rules (e.g. 
LUCENE-7916) because the UAX#29 rules are script-independent (common/inherited).








[jira] [Resolved] (LUCENE-8122) upgrade icu to 60.2

2018-01-08 Thread Robert Muir (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Muir resolved LUCENE-8122.
-
   Resolution: Fixed
Fix Version/s: 7.3
   trunk

> upgrade icu to 60.2
> ---
>
> Key: LUCENE-8122
> URL: https://issues.apache.org/jira/browse/LUCENE-8122
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Robert Muir
> Fix For: trunk, 7.3
>
> Attachments: LUCENE-8122.patch, LUCENE-8122.patch
>
>
> Currently we are at version 59.1. There is some change in BreakIterator 
> behavior, but I think it simplifies our code. Our tools also needed to be 
> updated to pull some data files from the new source-code location. 






[jira] [Commented] (LUCENE-8122) upgrade icu to 60.2

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317143#comment-16317143
 ] 

ASF subversion and git services commented on LUCENE-8122:
-

Commit 96be7b432ebd4b9fd8c2efa1b037743a376a05ec in lucene-solr's branch 
refs/heads/branch_7x from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=96be7b4 ]

LUCENE-8122: Upgrade analysis/icu to ICU 60.2


> upgrade icu to 60.2
> ---
>
> Key: LUCENE-8122
> URL: https://issues.apache.org/jira/browse/LUCENE-8122
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Robert Muir
> Attachments: LUCENE-8122.patch, LUCENE-8122.patch
>
>
> Currently we are at version 59.1. There is some change in BreakIterator 
> behavior, but I think it simplifies our code. Our tools also needed to be 
> updated to pull some data files from the new source-code location. 






[JENKINS] Lucene-Solr-7.x-Solaris (64bit/jdk1.8.0) - Build # 382 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Solaris/382/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=34029, name=jetty-launcher-7560-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)   
 2) Thread[id=34032, name=jetty-launcher-7560-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:530)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:505)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=34029, name=jetty-launcher-7560-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forP

[jira] [Commented] (LUCENE-8122) upgrade icu to 60.2

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317114#comment-16317114
 ] 

ASF subversion and git services commented on LUCENE-8122:
-

Commit 07407a5b53bf4d790c316ecf3b71046242f1e2da in lucene-solr's branch 
refs/heads/master from [~rcmuir]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=07407a5 ]

LUCENE-8122: Upgrade analysis/icu to ICU 60.2


> upgrade icu to 60.2
> ---
>
> Key: LUCENE-8122
> URL: https://issues.apache.org/jira/browse/LUCENE-8122
> Project: Lucene - Core
>  Issue Type: Improvement
>  Components: modules/analysis
>Reporter: Robert Muir
> Attachments: LUCENE-8122.patch, LUCENE-8122.patch
>
>
> Currently we are at version 59.1. There is some change in BreakIterator 
> behavior, but I think it simplifies our code. Our tools also needed to be 
> updated to pull some data files from the new source-code location. 






[jira] [Updated] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream unclose able

2018-01-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley updated SOLR-11692:

Attachment: SOLR-11692.patch

Aha; I see the problem. It took a bit of debugging to find this nasty bug. 
Notice on line ~353 that the request variable is replaced with 
wrappedRequest.get() if one is present. Your patch retained both "request" and 
"httpServletRequest" as valid variables, which is what made this bug possible. 
In this new version of the patch, I renamed the formal parameters to carry a 
leading underscore, and the first thing the method does is assign them to the 
Http variant with the same name, minus the underscore.

Tests are running now; if they check out I'll commit shortly. Thanks for the 
contribution, Jeff!
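The renaming pattern described above can be sketched as follows. The types and names here (`ShadowingSketch`, `Request`, `HttpRequest`) are placeholders standing in for ServletRequest/HttpServletRequest, not Solr's actual SolrDispatchFilter code.

```java
// Hypothetical sketch of the parameter-shadowing pattern: the formal
// parameter carries a leading underscore and is immediately narrowed into
// a same-named local without it, so the rest of the method cannot
// accidentally refer to the raw parameter by its usual name.
public class ShadowingSketch {
    interface Request { }

    static class HttpRequest implements Request {
        final String path;
        HttpRequest(String path) { this.path = path; }
    }

    static String doFilter(Request _request) {
        // From here on, "request" names the Http variant; any later
        // reassignment (e.g. to a wrapped request) leaves no second
        // variable still pointing at the original object.
        HttpRequest request = (HttpRequest) _request;
        return request.path;
    }

    public static void main(String[] args) {
        System.out.println(doFilter(new HttpRequest("/solr/select")));  // prints /solr/select
    }
}
```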

> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream unclose able
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Assignee: David Smiley
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
> Fix For: 7.3
>
> Attachments: SOLR-11692.patch, SOLR-11692.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger closeShield code in SolrDispatchFilter, however there 
> are code paths where we passthrough the objects to the DefaultHandler which 
> can no longer close the response.
> Example stack trace:
> java.lang.AssertionError: Attempted close of response output stream.
> at 
> org.apache.solr.servlet.SolrDispatchFilter$2$1.close(SolrDispatchFilter.java:528)
> at org.eclipse.jetty.server.Dispatcher.commitResponse(Dispatcher.java:315)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:279)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:566)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:385)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> searchserver.filter.SfdcDispatchFilter.doFilter(SfdcDispatchFilter.java:204)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:370)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
>

[jira] [Assigned] (SOLR-11692) SolrDispatchFilter.closeShield passes the shielded response object back to jetty making the stream uncloseable

2018-01-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-11692:
---

 Assignee: David Smiley
Fix Version/s: 7.3

> SolrDispatchFilter.closeShield passes the shielded response object back to 
> jetty making the stream uncloseable
> ---
>
> Key: SOLR-11692
> URL: https://issues.apache.org/jira/browse/SOLR-11692
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Response Writers
>Affects Versions: 7.1
> Environment: Linux/Mac tested
>Reporter: Jeff Miller
>Assignee: David Smiley
>Priority: Minor
>  Labels: dispatchlayer, jetty, newbie, streams
> Fix For: 7.3
>
> Attachments: SOLR-11692.patch
>
>   Original Estimate: 3h
>  Remaining Estimate: 3h
>
> In test mode we trigger closeShield code in SolrDispatchFilter; however, there 
> are code paths where we pass the objects through to the DefaultHandler, which 
> can no longer close the response.
> Example stack trace:
> java.lang.AssertionError: Attempted close of response output stream.
> at 
> org.apache.solr.servlet.SolrDispatchFilter$2$1.close(SolrDispatchFilter.java:528)
> at org.eclipse.jetty.server.Dispatcher.commitResponse(Dispatcher.java:315)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:279)
> at org.eclipse.jetty.server.Dispatcher.forward(Dispatcher.java:103)
> at org.eclipse.jetty.servlet.DefaultServlet.doGet(DefaultServlet.java:566)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:734)
> at javax.servlet.http.HttpServlet.service(HttpServlet.java:847)
> at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:684)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1448)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:385)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
> at 
> searchserver.filter.SfdcDispatchFilter.doFilter(SfdcDispatchFilter.java:204)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
> at org.eclipse.jetty.server.Server.handle(Server.java:370)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:949)
> at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1011)
> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:644)
> at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
> at 
> org.eclipse.jetty.server.AsyncHttpConnection.handle(AsyncHttpConnection.java:82)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint.handle(SelectChannelEndPoint.java:668)
> at 
> org.eclipse.jetty.io.nio.SelectChannelEndPoint$1.run(SelectChannelEndPoint.java:52)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
> at java.lang.Thread.run(Thread.java:745)
> Related JIRA: SOLR-8933
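The close-shield idea described in this issue can be sketched in miniature (a hypothetical, simplified illustration, not the actual SolrDispatchFilter code): the servlet container owns the response stream, so the filter wraps it and intercepts close(); in test mode the close becomes an assertion failure, which is exactly the "Attempted close of response output stream" error in the stack trace above.

```java
import java.io.ByteArrayOutputStream;
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch of a close shield. Class and method names are invented
// for illustration; only the wrapping technique mirrors the issue discussion.
public class CloseShield {

    public static OutputStream shield(OutputStream real, boolean assertOnClose) {
        return new FilterOutputStream(real) {
            @Override
            public void close() throws IOException {
                if (assertOnClose) {
                    // test mode: closing a container-owned stream is a bug
                    throw new AssertionError("Attempted close of response output stream.");
                }
                flush(); // flush, but leave the underlying stream open
            }
        };
    }

    // Helper: does closing a test-mode shield trip the assertion?
    public static boolean closeTriggersAssertion(OutputStream real) {
        try {
            shield(real, true).close();
            return false;
        } catch (AssertionError expected) {
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        OutputStream s = shield(buf, false);
        s.write('x');
        s.close(); // harmless: the underlying buffer stays open
        System.out.println(buf.toString() + " " + closeTriggersAssertion(buf));
        // prints: x true
    }
}
```

The bug report then amounts to: when the shielded wrapper is handed back to Jetty's own dispatch code, Jetty legitimately tries to close what it believes is its own stream and hits the assertion.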



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: BugFix release 7.2.1

2018-01-08 Thread Erick Erickson
Hmmm, I think you missed my implied point. How are these metrics collected
and compared? There are about a dozen different machines running various op
systems etc. For these measurements to spot regressions and/or
improvements, they need to have a repository where the results get
published. So a report like "build XXX took YYY seconds to index ZZZ
documents" doesn't tell us anything. You need to gather them for a
_specific_ machine.

As for whether they should be run or not, an annotation could help here:
there are already @Slow, @Nightly and @Weekly, and a @Performance annotation
could be added. Mike McCandless has some of these kinds of things already for
Lucene; I think the first thing would be to check whether they already exist,
since it's possible you'd be reinventing the wheel.

Best,
Erick

On Mon, Jan 8, 2018 at 11:45 AM, S G  wrote:

> We can put some lower limits on CPU and Memory for running a performance
> test.
> If those lower limits are not met, then the test will just skip execution.
>
> And then we put some upper bounds (time-wise) on the time spent by
> different parts of the test like:
>  - Max time taken to index 1 million documents
>  - Max time taken to query, facet, pivot etc
>  - Max time taken to delete 100,000 documents while read and writes are
> happening.
>
> For all of the above, we can publish metrics like 5minRate, 95thPercent
> and assert on values lower than a particular value.
>
> I know some other software compare CPU cycles across different runs as
> well but not sure how.
>
> Such tests will give us more confidence when releasing/adopting new
> features like pint compared to tint etc.
>
> Thanks
> SG
>
>
>
> On Sat, Jan 6, 2018 at 9:59 AM, Erick Erickson 
> wrote:
>
>> Not sure how performance tests in the unit tests would be interpreted. If
>> I run the same suite on two different machines how do I compare the
>> numbers?
>>
>> Or are you thinking of having some tests so someone can check out
>> different versions of Solr and run the perf tests on a single machine,
>> perhaps using bisect to pinpoint when something changed?
>>
>> I'm not opposed at all, just trying to understand how one would go about
>> using such tests.
>>
>> Best,
>> Erick
>>
>> On Fri, Jan 5, 2018 at 10:09 PM, S G  wrote:
>>
>>> Just curious to know, does the test suite include some performance test
>>> also?
>>> I would like to know the performance impact of using pints vs tints or
>>> ints etc.
>>> If they are not there, I can try to add some tests for the same.
>>>
>>> Thanks
>>> SG
>>>
>>>
>>> On Fri, Jan 5, 2018 at 5:47 PM, Đạt Cao Mạnh 
>>> wrote:
>>>
 Hi all,

 I will work on SOLR-11771
 today. It is a
 simple fix and it will be great if it gets fixed in 7.2.1

 On Fri, Jan 5, 2018 at 11:23 PM Erick Erickson 
 wrote:

> Neither of those Solr fixes are earth shatteringly important, they've
> both been around for quite a while. I don't think it's urgent to include
> them.
>
> That said, they're pretty simple and isolated so worth doing if Jim is
> willing. But not worth straining much. I was just clearing out some 
> backlog
> over vacation.
>
> Strictly up to you Jim.
>
> Erick
>
> On Fri, Jan 5, 2018 at 6:54 AM, David Smiley wrote:
>
>> https://issues.apache.org/jira/browse/SOLR-11809 is in progress,
>> should be easy and I think definitely worth backporting
>>
>> On Fri, Jan 5, 2018 at 8:52 AM Adrien Grand 
>> wrote:
>>
>>> +1
>>>
>>> Looking at the changelog, 7.3 has 3 bug fixes for now: LUCENE-8077,
>>> SOLR-11783 and SOLR-11555. The Lucene change doesn't seem worth
>>> backporting, but maybe the Solr changes should?
>>>
>>> Le ven. 5 janv. 2018 à 12:40, jim ferenczi 
>>> a écrit :
>>>
 Hi,
 We discovered a bad bug in 7x that affects indices created in 6x
 with Lucene54DocValues format. The SortedNumericDocValues created with 
 this
 format have a bug when advanceExact is used, the values retrieved for 
 the
 docs when advanceExact returns true are invalid (the pointer to the 
 values
 is not updated):
 https://issues.apache.org/jira/browse/LUCENE-8117
 This affects all indices created in 6x with sorted numeric doc
 values so I wanted to ask if anyone objects to a bugfix release for 7.2
 (7.2.1). I also volunteer to be the release manager for this one if it 
 is
 accepted.

 Jim

>>>
>>
>> --
>> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
>> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
>> http://www.solrenterprisesearchserver.com
>>
>
>
>>>
>>
>
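The resource-gated performance test floated in this thread could look roughly like the following sketch. Everything here is an assumption for illustration: the class name, thresholds, and gating policy are invented, and the actual Lucene/Solr test framework (and any future @Performance annotation) would do this via JUnit's Assume mechanism rather than a return code.

```java
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: skip timing checks on under-provisioned machines,
// otherwise assert an upper bound on elapsed time. Thresholds are invented.
public class PerfGateSketch {

    // Minimum hardware for timing numbers to be meaningful at all.
    static boolean meetsMinimum(int cores, long maxHeapBytes) {
        return cores >= 2 && maxHeapBytes >= 512L * 1024 * 1024;
    }

    // Run the workload only when the gate passes; return elapsed nanos, or -1 if skipped.
    static long timeIfCapable(Runnable workload) {
        int cores = Runtime.getRuntime().availableProcessors();
        long heap = Runtime.getRuntime().maxMemory();
        if (!meetsMinimum(cores, heap)) {
            return -1L; // analogous to a JUnit Assume: skip, don't fail
        }
        long start = System.nanoTime();
        workload.run();
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        long elapsed = timeIfCapable(() -> {
            // stand-in for "index 1 million documents"
            long sum = 0;
            for (int i = 0; i < 1_000_000; i++) sum += i;
        });
        if (elapsed < 0) {
            System.out.println("skipped: machine below minimum spec");
        } else if (elapsed > TimeUnit.SECONDS.toNanos(10)) {
            // the "Max time taken" style assertion suggested above
            throw new AssertionError("workload too slow: " + elapsed + " ns");
        } else {
            System.out.println("ok");
        }
    }
}
```

As Erick notes, the numbers are only comparable when gathered on one specific machine, so a sketch like this is a skip/assert harness, not a cross-machine benchmark.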


[jira] [Resolved] (LUCENE-7473) factor out ScoreDoc.sortBy[ScoreDescThen]DocAsc() methods

2018-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-7473.
-
Resolution: Won't Do

> factor out ScoreDoc.sortBy[ScoreDescThen]DocAsc() methods
> -
>
> Key: LUCENE-7473
> URL: https://issues.apache.org/jira/browse/LUCENE-7473
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-7473.patch
>
>







[jira] [Resolved] (SOLR-11798) remove very deprecated code path in HighlightingComponent.inform

2018-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-11798.

Resolution: Fixed

> remove very deprecated code path in HighlightingComponent.inform
> 
>
> Key: SOLR-11798
> URL: https://issues.apache.org/jira/browse/SOLR-11798
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.3
>
> Attachments: SOLR-11798.patch
>
>
> SOLR-1696 https://svn.apache.org/viewvc?view=revision&revision=899572 in 2010 
> deprecated top-level {{}} syntax in {{solrconfig.xml}} in 
> favour of {{}} equivalent syntax.
> The {{SolrConfig.java}} code to read the top-level highlighting syntax 
> _seems_ to be gone but {{HighlightComponent.inform}} itself still supports 
> {{SolrHighlighter}} {{PluginInfo}}.
> This ticket is to formally deprecate the old syntax from Solr 7.3.0 onwards 
> and to stop supporting it from luceneMatchVersion 7.3.0 onwards.






[jira] [Resolved] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved SOLR-11809.

   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

Thanks everyone!

> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
>Assignee: Christine Poerschke
> Fix For: master (8.0), 7.3, 7.2.1
>
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501 .
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] 
> o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select 
> params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter 
> must be a RankQuery
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked - the same issue exists when I try to invoke the _rerank_ 
> query parser.






[jira] [Resolved] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke resolved LUCENE-8115.
-
   Resolution: Fixed
Fix Version/s: 7.3
   master (8.0)

> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Fix For: master (8.0), 7.3
>
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.
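For readers unfamiliar with the pattern this ticket targets, a javadoc comment consisting solely of {@inheritDoc} is redundant, because the javadoc tool inherits the parent's documentation automatically when an overriding method has no comment at all. An illustrative example (invented class names, not code from the patch):

```java
// Illustrative only: the point here is javadoc behavior, not runtime behavior.
class Base {
    /** Returns the widget count. */
    int count() { return 0; }
}

class Derived extends Base {
    /** {@inheritDoc} */  // unnecessary on its own: omitting the comment inherits the same text
    @Override
    int count() { return 1; }
}

public class InheritDocDemo {
    public static void main(String[] args) {
        System.out.println(new Derived().count()); // prints 1; the override itself is unaffected
    }
}
```

{@inheritDoc} is still useful when embedded inside a longer comment that adds to the inherited text; the precommit check targets only the standalone case.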



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-11755) Make V2Request constructor public

2018-01-08 Thread Anshum Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anshum Gupta updated SOLR-11755:

Attachment: SOLR-11175.patch

Here's a patch that makes the constructor protected, allowing us to extend the 
V2Request class.

[~noble.paul] what do you think? I've changed the constructor to be protected 
instead of public (as we discussed).

P.S.: This is after an offline discussion with Noble, where we agreed that 
making the constructor protected was the sensible/only way to be able to extend 
V2Request, which is required for new APIs that are v2 only.

> Make V2Request constructor public
> -
>
> Key: SOLR-11755
> URL: https://issues.apache.org/jira/browse/SOLR-11755
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Anshum Gupta
>Assignee: Anshum Gupta
> Attachments: SOLR-11175.patch
>
>
> V2Request has a private constructor that stops it from being extended. We 
> need to change the visibility for that constructor to protected or move the 
> shared methods out of that class into a common place so that SolrJ support 
> could be added for new V2 APIs.
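The visibility trade-off under discussion can be shown in miniature (hypothetical class and field names, not the actual SolrJ V2Request code): a private constructor blocks subclassing entirely, while a protected one permits extension without opening construction to arbitrary callers.

```java
// Hypothetical miniature of the constructor-visibility point discussed above;
// names are invented for illustration.
class BaseRequest {
    private final String path;

    // protected: only subclasses (and same-package code) may construct.
    // Were this private, the subclass below would not compile.
    protected BaseRequest(String path) {
        this.path = path;
    }

    String path() {
        return path;
    }
}

// Extension is now possible, e.g. for v2-only request types.
class CustomV2Request extends BaseRequest {
    CustomV2Request(String path) {
        super(path);
    }
}

public class VisibilityDemo {
    public static void main(String[] args) {
        System.out.println(new CustomV2Request("/v2/collections").path());
        // prints: /v2/collections
    }
}
```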






[jira] [Commented] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16317000#comment-16317000
 ] 

ASF subversion and git services commented on SOLR-11809:


Commit 2ec80a272909588a2dd0f276ada4f798979c4fac in lucene-solr's branch 
refs/heads/branch_7_2 from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2ec80a2 ]

SOLR-11809: QueryComponent.prepare rq parsing could fail under SOLR 7.2.0 - fix:
QueryComponent's rq parameter parsing no longer considers the defType parameter.
(Christine Poerschke and David Smiley in response to bug report/analysis from 
Dariusz Wojtas and Diego Ceccarelli)


> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
>Assignee: Christine Poerschke
> Fix For: 7.2.1
>
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501 .
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] 
> o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select 
> params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter 
> must be a RankQuery
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked - the same issue exists when I try to invoke the _rerank_ 
> query parser.






[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316996#comment-16316996
 ] 

ASF subversion and git services commented on SOLR-10783:


Commit 8e30b2a8acbb26543b307ff0838be680bb9cc5eb in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=8e30b2a ]

Revert "SOLR-10783: add (partial) package-info.java to fix precommit"

This reverts commit a63c5675bbbd45604025b72149250cee8bb8a254.


> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose support of hadoop credential 
> providers as source of SSL store passwords. 
> Motivation: When SOLR is used in hadoop environment, support of  HCP gives 
> better integration and unified method to pass sensitive credentials to SOLR.






[jira] [Commented] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316994#comment-16316994
 ] 

ASF subversion and git services commented on LUCENE-8115:
-

Commit e10f5d2cfbf7f2fe3419ee28904b412595f8f2b4 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e10f5d2 ]

LUCENE-8115: remove unnecessary-on-its-own {@inheritDoc} annotations.


> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Commented] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316993#comment-16316993
 ] 

ASF subversion and git services commented on SOLR-11809:


Commit 6437477706b818e2c77f7e3a93ab1b2725f0563e in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=6437477 ]

SOLR-11809: QueryComponent.prepare rq parsing could fail under SOLR 7.2.0 - fix:
QueryComponent's rq parameter parsing no longer considers the defType parameter.
(Christine Poerschke and David Smiley in response to bug report/analysis from 
Dariusz Wojtas and Diego Ceccarelli)


> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
>Assignee: Christine Poerschke
> Fix For: 7.2.1
>
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501 .
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] 
> o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select 
> params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter 
> must be a RankQuery
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked - the same issue exists when I try to invoke the _rerank_ 
> query parser.






[jira] [Commented] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316995#comment-16316995
 ] 

ASF subversion and git services commented on LUCENE-8115:
-

Commit 88bcf22cca630aa2b409c59105682421d66e8232 in lucene-solr's branch 
refs/heads/branch_7x from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=88bcf22 ]

LUCENE-8115: fail precommit on unnecessary-on-its-own {@inheritDoc} annotations.


> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Commented] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316982#comment-16316982
 ] 

ASF subversion and git services commented on LUCENE-8115:
-

Commit a3a0e0b11e4538ccdff998c09b1145ce9036ac33 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a3a0e0b ]

Revert "LUCENE-8115: remove one TODO-on-its-own javadoc."

This reverts commit bd69d64ad04fb0fe6f17f68dcc1fa685e15a9317.


> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Commented] (SOLR-10783) Using Hadoop Credential Provider as SSL/TLS store password source

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-10783?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316961#comment-16316961
 ] 

ASF subversion and git services commented on SOLR-10783:


Commit 144616b42469c2d815a657b3c05cbff99ce20387 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=144616b ]

Revert "SOLR-10783: add (partial) package-info.java to fix precommit"

This reverts commit a864c6289a8132988fc51cc711db79238ed2ce04.


> Using Hadoop Credential Provider as SSL/TLS store password source
> -
>
> Key: SOLR-10783
> URL: https://issues.apache.org/jira/browse/SOLR-10783
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: security
>Reporter: Mano Kovacs
>Assignee: Mark Miller
> Attachments: SOLR-10783-fix.patch, SOLR-10783.patch, 
> SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch, SOLR-10783.patch
>
>
> As a second iteration of SOLR-10307, I propose supporting Hadoop credential 
> providers as the source of SSL store passwords.
> Motivation: when Solr is used in a Hadoop environment, support for HCP gives 
> better integration and a unified method for passing sensitive credentials to Solr.






[jira] [Commented] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316942#comment-16316942
 ] 

ASF subversion and git services commented on LUCENE-8115:
-

Commit ad6e8b82ec9c2f886c3fd14efc0b9a8634776434 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=ad6e8b8 ]

LUCENE-8115: fail precommit on unnecessary-on-its-own {@inheritDoc} annotations.


> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Commented] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316943#comment-16316943
 ] 

ASF subversion and git services commented on LUCENE-8115:
-

Commit bd69d64ad04fb0fe6f17f68dcc1fa685e15a9317 in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=bd69d64 ]

LUCENE-8115: remove one TODO-on-its-own javadoc.


> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Commented] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316940#comment-16316940
 ] 

ASF subversion and git services commented on SOLR-11809:


Commit 2828656892114ab7bb4c7742eac9c4e6f49f69ab in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=2828656 ]

SOLR-11809: QueryComponent.prepare rq parsing could fail under SOLR 7.2.0 - fix:
QueryComponent's rq parameter parsing no longer considers the defType parameter.
(Christine Poerschke and David Smiley in response to bug report/analysis from 
Dariusz Wojtas and Diego Ceccarelli)


> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
>Assignee: Christine Poerschke
> Fix For: 7.2.1
>
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501.
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] 
> o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select 
> params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter 
> must be a RankQuery
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked: the same issue exists when I try to invoke the _rerank_ 
> query parser.






[jira] [Commented] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316941#comment-16316941
 ] 

ASF subversion and git services commented on LUCENE-8115:
-

Commit 07afc23dcee502d84e7684a9714fe3033bd8253a in lucene-solr's branch 
refs/heads/master from [~cpoerschke]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=07afc23 ]

LUCENE-8115: remove unnecessary-on-its-own {@inheritDoc} annotations.


> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Commented] (SOLR-11258) ChaosMonkeySafeLeaderWithPullReplicasTest fails a lot & reproducibly: The Monkey ran for over 45 seconds and no jetties were stopped - this is worth investigating!

2018-01-08 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-11258?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316916#comment-16316916
 ] 

Tomás Fernández Löbbe commented on SOLR-11258:
--

This may be the same as SOLR-10995, which I started looking at, but apparently 
never fixed... sorry about that. I'll take a look

> ChaosMonkeySafeLeaderWithPullReplicasTest fails a lot & reproducibly:  The 
> Monkey ran for over 45 seconds and no jetties were stopped - this is worth 
> investigating!
> 
>
> Key: SOLR-11258
> URL: https://issues.apache.org/jira/browse/SOLR-11258
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>
> Between June21 & Aug18, there have been 18 failures like this...
> {noformat}
>[junit4]   2> NOTE: reproduce with: ant test  
> -Dtestcase=ChaosMonkeySafeLeaderWithPullReplicasTest -Dtests.method=test 
> -Dtests.seed=7669B63E9E4D1685 -Dtests.nightly=true -Dtests.slow=true 
> -Dtests.locale=pa-Guru -Dtests.timezone=Europe/Podgorica -Dtests.asserts=true 
> -Dtests.file.encoding=UTF-8
>[junit4] FAILURE 82.4s | ChaosMonkeySafeLeaderWithPullReplicasTest.test <<<
>[junit4]> Throwable #1: java.lang.AssertionError: The Monkey ran for 
> over 45 seconds and no jetties were stopped - this is worth investigating!
>[junit4]>at 
> __randomizedtesting.SeedInfo.seed([7669B63E9E4D1685:FE3D89E430B17B7D]:0)
>[junit4]>at 
> org.apache.solr.cloud.ChaosMonkey.stopTheMonkey(ChaosMonkey.java:587)
>[junit4]>at 
> org.apache.solr.cloud.ChaosMonkeySafeLeaderWithPullReplicasTest.test(ChaosMonkeySafeLeaderWithPullReplicasTest.java:174)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
>[junit4]>at 
> org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
>[junit4]>at java.lang.Thread.run(Thread.java:748)
> {noformat}
> In my own testing, when these failures happen, the seeds reproduce - 
> suggesting the problem is a logic flaw in the test that can happen by 
> chance.
> Perhaps the ChaosMonkey needs to be changed to get more aggressive about 
> stopping nodes based on how long it's been since the last time it stopped a 
> node?






[jira] [Commented] (SOLR-11741) Offline training mode for schema guessing

2018-01-08 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11741?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316913#comment-16316913
 ] 

Cassandra Targett commented on SOLR-11741:
--

bq. What i suggested at one point (I don't remember where ... it may already be 
in a jira somewhere?) was an UpdateRequestProcessorFactory that could be 
configured instead of RunUpdateProcessorFactory in a chain...

The issue where Hoss mentioned this idea before was SOLR-6939. I linked it here 
for reference.

> Offline training mode for schema guessing
> -
>
> Key: SOLR-11741
> URL: https://issues.apache.org/jira/browse/SOLR-11741
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Ishan Chattopadhyaya
> Attachments: RuleForMostAccomodatingField.png, SOLR-11741-temp.patch, 
> screenshot-1.png, screenshot-3.png
>
>
> Our data-driven schema guessing doesn't work in many situations. For 
> example, if the first document has a field with value "0", the field is 
> guessed as Long and subsequent documents with "0.0" in that field are 
> rejected. Similarly, if the same field has alphanumeric content in a later 
> document, those documents are rejected. Also, single- vs. multi-valued field 
> guessing is not ideal.
> Proposing an offline training mode where Solr accepts a bunch of documents 
> and returns a guessed schema (without indexing). This schema can then be 
> used for the actual indexing. I think the original idea is from Hoss.
> I think the initial implementation can be based on an UpdateRequestProcessor. 
> We can hash out the API soon, as we go along.
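The offline-training idea sketched above can be modeled in a few lines. This is a rough illustration only: the field-type names ({{plong}}, {{pdouble}}, {{text_general}}), the widening order, and the {{train_schema}} helper are invented for the example and are not the guessing rules Solr actually ships.

```python
from collections import defaultdict

# Illustrative widening order: a field's guessed type only ever moves rightward.
WIDENING = ["plong", "pdouble", "text_general"]

def guess_one(value: str) -> str:
    """Guess the narrowest type that can hold a single raw value."""
    for candidate, parse in (("plong", int), ("pdouble", float)):
        try:
            parse(value)
            return candidate
        except ValueError:
            pass
    return "text_general"

def train_schema(docs):
    """Scan *all* sample docs before committing a type, instead of trusting doc #1."""
    fields = defaultdict(lambda: {"type": "plong", "multiValued": False})
    for doc in docs:
        for name, raw in doc.items():
            field = fields[name]
            values = raw if isinstance(raw, list) else [raw]
            field["multiValued"] |= isinstance(raw, list)
            for v in values:
                guessed = guess_one(str(v))
                if WIDENING.index(guessed) > WIDENING.index(field["type"]):
                    field["type"] = guessed
    return dict(fields)
```

This directly addresses the "0" then "0.0" failure in the description: because no document is indexed until every sample has been seen, the second document widens the field to a double type instead of being rejected.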






Re: BugFix release 7.2.1

2018-01-08 Thread S G
We can put some lower limits on CPU and memory for running a performance
test. If those lower limits are not met, the test will just skip execution.

And then we put some upper bounds (time-wise) on the time spent by
different parts of the test, like:
 - Max time taken to index 1 million documents
 - Max time taken to query, facet, pivot etc
 - Max time taken to delete 100,000 documents while read and writes are
happening.

For all of the above, we can publish metrics like 5minRate and 95thPercentile,
and assert that each stays below a chosen threshold.
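That shape can be sketched in a few lines. Everything here is a placeholder: the resource floor, the budget numbers, and the run_perf_check helper are made up for illustration, not part of the Solr test framework.

```python
import os
import time

def run_perf_check(task, budget_seconds, min_cpus=4):
    """Run `task` and fail if it blows its time budget.

    Returns None (skipped) when the machine is below the resource floor,
    so slow hardware never produces spurious failures; otherwise returns
    the elapsed wall-clock time.
    """
    if (os.cpu_count() or 1) < min_cpus:
        return None  # below the lower limit: skip rather than fail
    start = time.perf_counter()
    task()
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, (
        f"task took {elapsed:.2f}s, budget was {budget_seconds}s")
    return elapsed
```

The skip-on-weak-hardware guard is what makes such a test tolerable in CI: the budget only has to be calibrated once for machines that meet the floor.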

I know some other software compares CPU cycles across different runs as well,
but I'm not sure how.

Such tests will give us more confidence when releasing/adopting new
features like pint compared to tint etc.

Thanks
SG



On Sat, Jan 6, 2018 at 9:59 AM, Erick Erickson 
wrote:

> Not sure how performance tests in the unit tests would be interpreted. If
> I run the same suite on two different machines, how do I compare the
> numbers?
>
> Or are you thinking of having some tests so someone can check out
> different versions of Solr and run the perf tests on a single machine,
> perhaps using bisect to pinpoint when something changed?
>
> I'm not opposed at all, just trying to understand how one would go about
> using such tests.
>
> Best,
> Erick
>
> On Fri, Jan 5, 2018 at 10:09 PM, S G  wrote:
>
>> Just curious to know, does the test suite include some performance test
>> also?
>> I would like to know the performance impact of using pints vs tints or
>> ints etc.
>> If they are not there, I can try to add some tests for the same.
>>
>> Thanks
>> SG
>>
>>
>> On Fri, Jan 5, 2018 at 5:47 PM, Đạt Cao Mạnh 
>> wrote:
>>
>>> Hi all,
>>>
>>> I will work on SOLR-11771
>>>  today, It is a
>>> simple fix and will be great if it get fixed in 7.2.1
>>>
>>> On Fri, Jan 5, 2018 at 11:23 PM Erick Erickson 
>>> wrote:
>>>
 Neither of those Solr fixes is earth-shatteringly important; they've
 both been around for quite a while. I don't think it's urgent to include
 them.

 That said, they're pretty simple and isolated so worth doing if Jim is
 willing. But not worth straining much. I was just clearing out some backlog
 over vacation.

 Strictly up to you Jim.

 Erick

 On Fri, Jan 5, 2018 at 6:54 AM, David Smiley 
 wrote:

> https://issues.apache.org/jira/browse/SOLR-11809 is in progress,
> should be easy and I think definitely worth backporting
>
> On Fri, Jan 5, 2018 at 8:52 AM Adrien Grand  wrote:
>
>> +1
>>
>> Looking at the changelog, 7.3 has 3 bug fixes for now: LUCENE-8077,
>> SOLR-11783 and SOLR-11555. The Lucene change doesn't seem worth
>> backporting, but maybe the Solr changes should?
>>
>> Le ven. 5 janv. 2018 à 12:40, jim ferenczi 
>> a écrit :
>>
>>> Hi,
>>> We discovered a bad bug in 7x that affects indices created in 6x
>>> with the Lucene54DocValues format. The SortedNumericDocValues created with
>>> this format have a bug when advanceExact is used: the values retrieved for
>>> the docs when advanceExact returns true are invalid (the pointer to the
>>> values is not updated):
>>> https://issues.apache.org/jira/browse/LUCENE-8117
>>> This affects all indices created in 6x with sorted numeric doc values, so
>>> I wanted to ask if anyone objects to a bugfix release for 7.2 (7.2.1). I
>>> also volunteer to be the release manager for this one if it is accepted.
>>>
>>> Jim
>>>
>>
>
> --
> Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
> LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
> http://www.solrenterprisesearchserver.com
>


>>
>


[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1614 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1614/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory

Error Message:
Error from server at http://127.0.0.1:36617/solr: 
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:54319/solr within 1 ms

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error 
from server at http://127.0.0.1:36617/solr: 
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:54319/solr within 1 ms
at 
__randomizedtesting.SeedInfo.seed([284AD0DEDF26BAD0:45B67423656E45D7]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:643)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.doRequest(LBHttpSolrClient.java:483)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:413)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1104)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:884)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:817)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:194)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:942)
at org.apache.solr.client.solrj.SolrClient.query(SolrClient.java:957)
at 
org.apache.solr.handler.admin.AutoscalingHistoryHandlerTest.testHistory(AutoscalingHistoryHandlerTest.java:267)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearc

[jira] [Commented] (SOLR-11832) Restore from backup creates old format collections

2018-01-08 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316751#comment-16316751
 ] 

Erick Erickson commented on SOLR-11832:
---

Not sure I'm following here. I took a very, very, very quick glance at 
BasicDistributedZkTest and BasicDistributedZk2Test on master without this patch.

The only call I see to the core admin API's create command is in 
BasicDistributedZkTest, and it's never used (it's in a private method that's in 
turn called by another private method that's never called). We should remove 
that code completely.

There are a couple of calls to the core admin API to get the status, but that 
should be OK.

What am I missing?


> Restore from backup creates old format collections
> --
>
> Key: SOLR-11832
> URL: https://issues.apache.org/jira/browse/SOLR-11832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.2, 6.6.2
>Reporter: Tim Owen
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11832.patch
>
>
> Restoring a collection from a backup always creates the new collection using 
> the old state-format JSON (format 1), as a global clusterstate.json file at 
> the top level of ZK. All new collections should default to the newer 
> per-collection format (format 2) in /collections/.../state.json.
> As we're running clusters with many collections, the old global state format 
> isn't good for us, so as a workaround for now we're calling 
> MIGRATESTATEFORMAT immediately after the RESTORE call.
> This bug was mentioned in the comments of SOLR-5750 and also recently 
> mentioned by [~varunthacker] in SOLR-11560.
> Code patch attached, but as per [~dsmiley]'s comment in the code, fixing this 
> means at least one test class no longer succeeds. From what I can tell, 
> BasicDistributedZk2Test fails because it's not using the official collection 
> API to create a collection; it seems to be bypassing that and manually 
> creating cores via the core admin API instead, which I think is not enough to 
> ensure the correct ZK nodes are created. The test superclass has some methods 
> to create a collection which do use the collection API, so I could try fixing 
> the tests (I'm just not that familiar with why those BasicDistributed*Test 
> classes aren't using the collection API).
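The RESTORE-then-MIGRATESTATEFORMAT workaround described above boils down to two Collections API calls. A minimal sketch that only builds the request URLs: the base URL, collection name, and backup name are placeholders, the collections_api_url helper is invented for the example, and actually sending the HTTP requests is left out.

```python
from urllib.parse import urlencode

def collections_api_url(base, action, **params):
    """Build a Solr Collections API URL for the given action."""
    query = urlencode({"action": action, **params})
    return f"{base}/admin/collections?{query}"

# Restore the collection, then immediately migrate it to state format 2
# so it doesn't linger in the global clusterstate.json.
restore_url = collections_api_url("http://localhost:8983/solr", "RESTORE",
                                  name="nightly_backup", collection="techproducts")
migrate_url = collections_api_url("http://localhost:8983/solr",
                                  "MIGRATESTATEFORMAT", collection="techproducts")
```

The attached patch would make the second call unnecessary by having RESTORE create format-2 collections in the first place.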






[jira] [Commented] (SOLR-11829) [Ref-Guide] Indexing documents with existing id

2018-01-08 Thread Cassandra Targett (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316743#comment-16316743
 ] 

Cassandra Targett commented on SOLR-11829:
--

bq. Is there a good way to check links?

If you run the PDF build it will now check links. But I'll tell you all the 
links I noticed from skimming the patch are incorrect because they are missing 
anchor references. See 
https://lucene.apache.org/solr/guide/how-to-contribute.html#link-to-other-pages-sections-of-the-guide
 for details on how to structure these types of links.

> [Ref-Guide] Indexing documents with existing id
> ---
>
> Key: SOLR-11829
> URL: https://issues.apache.org/jira/browse/SOLR-11829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Munendra S N
>Assignee: Erick Erickson
> Attachments: SOLR-11829.patch, SOLR-11829.patch, SOLR-11829.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Solr documentation for the [Document 
> screen|http://lucene.apache.org/solr/guide/7_2/documents-screen.html] states 
> that if overwrite is set to false, then incoming documents with the same id 
> would be dropped.
> But the documentation for 
> [Indexing|http://lucene.apache.org/solr/guide/7_2/introduction-to-solr-indexing.html#introduction-to-solr-indexing]
>  and the actual behavior state otherwise (i.e., duplicate additions of 
> documents with the same id are allowed).
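The behavior at issue can be modeled in a few lines. This is a toy in-memory model written for illustration, not Solr's code, and add_docs is an invented name: with overwrite=true an incoming document replaces any existing document with the same id, while overwrite=false skips the uniqueness check entirely, so duplicates accumulate (they are not dropped).

```python
def add_docs(index, docs, overwrite=True):
    """Toy model of Solr's add semantics over a list of {'id': ...} dicts."""
    if overwrite:
        incoming_ids = {d["id"] for d in docs}
        # Replace semantics: drop any existing doc whose id is being re-added.
        index[:] = [d for d in index if d["id"] not in incoming_ids]
    index.extend(docs)
```

Under this model the Documents-screen wording ("dropped") is the part that disagrees with reality, which is what the attached patch corrects.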






[jira] [Assigned] (SOLR-11832) Restore from backup creates old format collections

2018-01-08 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-11832:


Assignee: Varun Thacker

> Restore from backup creates old format collections
> --
>
> Key: SOLR-11832
> URL: https://issues.apache.org/jira/browse/SOLR-11832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.2, 6.6.2
>Reporter: Tim Owen
>Assignee: Varun Thacker
>Priority: Minor
> Attachments: SOLR-11832.patch
>
>
> Restoring a collection from a backup always creates the new collection using 
> the old state-format JSON (format 1), as a global clusterstate.json file at 
> the top level of ZK. All new collections should default to the newer 
> per-collection format (format 2) in /collections/.../state.json.
> As we're running clusters with many collections, the old global state format 
> isn't good for us, so as a workaround for now we're calling 
> MIGRATESTATEFORMAT immediately after the RESTORE call.
> This bug was mentioned in the comments of SOLR-5750 and also recently 
> mentioned by [~varunthacker] in SOLR-11560.
> Code patch attached, but as per [~dsmiley]'s comment in the code, fixing this 
> means at least one test class no longer succeeds. From what I can tell, 
> BasicDistributedZk2Test fails because it's not using the official collection 
> API to create a collection; it seems to be bypassing that and manually 
> creating cores via the core admin API instead, which I think is not enough to 
> ensure the correct ZK nodes are created. The test superclass has some methods 
> to create a collection which do use the collection API, so I could try fixing 
> the tests (I'm just not that familiar with why those BasicDistributed*Test 
> classes aren't using the collection API).






[jira] [Updated] (SOLR-11829) [Ref-Guide] Indexing documents with existing id

2018-01-08 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11829?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-11829:
--
Attachment: SOLR-11829.patch

That suggestion is a good one, added it in.

Is there a good way to check links?

Will commit mid-week to give others a chance to chime in.

> [Ref-Guide] Indexing documents with existing id
> ---
>
> Key: SOLR-11829
> URL: https://issues.apache.org/jira/browse/SOLR-11829
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: documentation
>Reporter: Munendra S N
>Assignee: Erick Erickson
> Attachments: SOLR-11829.patch, SOLR-11829.patch, SOLR-11829.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> Solr documentation for the [Document 
> screen|http://lucene.apache.org/solr/guide/7_2/documents-screen.html] states 
> that if overwrite is set to false, then incoming documents with the same id 
> would be dropped.
> But the documentation for 
> [Indexing|http://lucene.apache.org/solr/guide/7_2/introduction-to-solr-indexing.html#introduction-to-solr-indexing]
>  and the actual behavior state otherwise (i.e., duplicate additions of 
> documents with the same id are allowed).






[JENKINS] Lucene-Solr-7.x-MacOSX (64bit/jdk-9) - Build # 388 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-MacOSX/388/
Java: 64bit/jdk-9 -XX:+UseCompressedOops -XX:+UseParallelGC

2 tests failed.
FAILED:  org.apache.solr.cloud.ReplaceNodeTest.test

Error Message:
Captured an uncaught exception in thread: Thread[id=24859, 
name=OverseerThreadFactory-10264-thread-2, state=RUNNABLE, group=Overseer 
collection creation process.]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=24859, 
name=OverseerThreadFactory-10264-thread-2, state=RUNNABLE, group=Overseer 
collection creation process.]
Caused by: java.lang.NoClassDefFoundError: 
org/apache/solr/cloud/LeaderRecoveryWatcher
at __randomizedtesting.SeedInfo.seed([44A30A1A0FBE88]:0)
at org.apache.solr.cloud.ReplaceNodeCmd.call(ReplaceNodeCmd.java:138)
at 
org.apache.solr.cloud.OverseerCollectionMessageHandler.processMessage(OverseerCollectionMessageHandler.java:243)
at 
org.apache.solr.cloud.OverseerTaskProcessor$Runner.run(OverseerTaskProcessor.java:469)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:188)
at 
java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1167)
at 
java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:641)
at java.base/java.lang.Thread.run(Thread.java:844)
Caused by: java.lang.ClassNotFoundException: 
org.apache.solr.cloud.LeaderRecoveryWatcher
at 
java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:582)
at 
java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:185)
at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:496)
... 7 more


FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
Could not find collection:collection2

Stack Trace:
java.lang.AssertionError: Could not find collection:collection2
at 
__randomizedtesting.SeedInfo.seed([44A30A1A0FBE88:88109CD0B4F3D370]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNotNull(Assert.java:526)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:155)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:140)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:135)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:913)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrClient(FullSolrCloudDistribCmdsTest.java:612)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:152)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:993)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:968)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:36

[jira] [Commented] (SOLR-11831) Skip second grouping step if group.limit is 1 (aka Las Vegas patch)

2018-01-08 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316636#comment-16316636
 ] 

ASF GitHub Bot commented on SOLR-11831:
---

GitHub user mjosephidou opened a pull request:

https://github.com/apache/lucene-solr/pull/300

SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas 
Patch)

Summary:
In cases where we do grouping and ask only for {{group.limit=1}}, it is 
possible to skip the second grouping step. In our test datasets it improved 
speed by around 40%.

Essentially, in the first grouping step each shard returns the top K groups 
based on the highest scoring document in each group. The top K groups from each 
shard are merged in the federator and in the second step we ask all the shards 
to return the top documents from each of the top ranking groups.

If we only want to return the highest scoring document per group we can 
return the top document id in the first step, merge results in the federator to 
retain the top K groups and then skip the second grouping step entirely.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr SOLR-11831

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/300.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #300


commit 6b918c86cd0f37320c32eb669eca722a9e74f768
Author: Malvina Josephidou 
Date:   2018-01-04T15:00:35Z

SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas 
patch)

Summary:
In cases where we do grouping and ask only for {{group.limit=1}}, it is 
possible to skip the second grouping step. In our test datasets it improved 
speed by around 40%.

Essentially, in the first grouping step each shard returns the top K groups 
based on the highest scoring document in each group. The top K groups from each 
shard are merged in the federator and in the second step we ask all the shards 
to return the top documents from each of the top ranking groups.

If we only want to return the highest scoring document per group we can 
return the top document id in the first step, merge results in the federator to 
retain the top K groups and then skip the second grouping step entirely.




>  Skip second grouping step if group.limit is 1 (aka Las Vegas patch)
> 
>
> Key: SOLR-11831
> URL: https://issues.apache.org/jira/browse/SOLR-11831
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Malvina Josephidou
>Priority: Minor
>
> In cases where we do grouping and ask only for {{group.limit=1}}, it is 
> possible to skip the second grouping step. In our test datasets it improved 
> speed by around 40%.
> Essentially, in the first grouping step each shard returns the top K groups 
> based on the highest scoring document in each group. The top K groups from 
> each shard are merged in the federator and in the second step we ask all the 
> shards to return the top documents from each of the top ranking groups.
> If we only want to return the highest scoring document per group we can 
> return the top document id in the first step, merge results in the federator 
> to retain the top K groups and then skip the second grouping step entirely. 
> This is possible provided that:
> a) We do not need to know the total number of matching documents per group
> b) Within-group sort and between-group sort are the same. 
> c) We are not doing reranking (because reranking happens in the second 
> grouping step; it could be made to work with reranking, but more work and 
> some additional assumptions would be required)
>  
> This patch applies the grouping optimisation in cases where a)-c) apply and 
> we are only sorting by relevance. It is also possible to extend this work to 
> handle multiple sorting criteria and also reranking. 
> P.S. Diego and I called this patch "las vegas" because we started to write it 
> on the flight to Las Vegas for Lucene/Solr revolution. 






[GitHub] lucene-solr pull request #300: SOLR-11831: Skip second grouping step if grou...

2018-01-08 Thread mjosephidou
GitHub user mjosephidou opened a pull request:

https://github.com/apache/lucene-solr/pull/300

SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas 
Patch)

Summary:
In cases where we do grouping and ask only for {{group.limit=1}}, it is 
possible to skip the second grouping step. In our test datasets it improved 
speed by around 40%.

Essentially, in the first grouping step each shard returns the top K groups 
based on the highest scoring document in each group. The top K groups from each 
shard are merged in the federator and in the second step we ask all the shards 
to return the top documents from each of the top ranking groups.

If we only want to return the highest scoring document per group we can 
return the top document id in the first step, merge results in the federator to 
retain the top K groups and then skip the second grouping step entirely.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/bloomberg/lucene-solr SOLR-11831

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/300.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #300


commit 6b918c86cd0f37320c32eb669eca722a9e74f768
Author: Malvina Josephidou 
Date:   2018-01-04T15:00:35Z

SOLR-11831: Skip second grouping step if group.limit is 1 (aka Las Vegas 
patch)

Summary:
In cases where we do grouping and ask only for {{group.limit=1}}, it is 
possible to skip the second grouping step. In our test datasets it improved 
speed by around 40%.

Essentially, in the first grouping step each shard returns the top K groups 
based on the highest scoring document in each group. The top K groups from each 
shard are merged in the federator and in the second step we ask all the shards 
to return the top documents from each of the top ranking groups.

If we only want to return the highest scoring document per group we can 
return the top document id in the first step, merge results in the federator to 
retain the top K groups and then skip the second grouping step entirely.







[jira] [Created] (SOLR-11833) Allow searchRate trigger to delete replicas

2018-01-08 Thread Andrzej Bialecki (JIRA)
Andrzej Bialecki  created SOLR-11833:


 Summary: Allow searchRate trigger to delete replicas
 Key: SOLR-11833
 URL: https://issues.apache.org/jira/browse/SOLR-11833
 Project: Solr
  Issue Type: Sub-task
  Security Level: Public (Default Security Level. Issues are Public)
  Components: AutoScaling
Reporter: Andrzej Bialecki 
Assignee: Andrzej Bialecki 


Currently {{SearchRateTrigger}} generates events when search rate thresholds 
are exceeded, and {{ComputePlanAction}} computes ADDREPLICA actions in response 
- adding replicas should allow the search rate to be reduced across the 
increased number of replicas.

However, once the peak load period is over, the collection is left with too 
many replicas, which unnecessarily tie up cluster resources. {{SearchRateTrigger}} 
should detect situations like this and generate events that should cause some 
of these replicas to be deleted.

{{SearchRateTrigger}} should use hysteresis to avoid thrashing when the rate is 
close to the threshold.
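The hysteresis idea above can be sketched with two separated thresholds: the rate must cross an upper bound before replicas are added, and fall well below a lower bound before any are deleted. A minimal sketch follows; the threshold values and the tiny trigger function are hypothetical illustrations, not the actual SearchRateTrigger code:

```python
# Illustrative hysteresis band (hypothetical values, not Solr defaults).
ADD_THRESHOLD = 100.0    # queries/sec above which replicas are added
DELETE_THRESHOLD = 40.0  # kept well below ADD_THRESHOLD so the trigger
                         # does not oscillate when the rate hovers near
                         # a single cutoff

def trigger_event(rate, replicas, min_replicas=1):
    """Return the action a rate trigger with hysteresis would suggest."""
    if rate > ADD_THRESHOLD:
        return "ADDREPLICA"
    if rate < DELETE_THRESHOLD and replicas > min_replicas:
        return "DELETEREPLICA"
    return None  # inside the hysteresis band: do nothing
```

With a single threshold, a rate oscillating around it would alternately add and delete replicas; the gap between the two bounds is what prevents that thrashing.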






[jira] [Updated] (SOLR-11832) Restore from backup creates old format collections

2018-01-08 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-11832:

Priority: Minor  (was: Major)

> Restore from backup creates old format collections
> --
>
> Key: SOLR-11832
> URL: https://issues.apache.org/jira/browse/SOLR-11832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.2, 6.6.2
>Reporter: Tim Owen
>Priority: Minor
> Attachments: SOLR-11832.patch
>
>
> Restoring a collection from a backup always creates the new collection using 
> the old state format (format 1), as a global clusterstate.json file at the 
> top level of ZK. New collections should instead default to the newer 
> per-collection format (format 2) in /collections/.../state.json.
> As we're running clusters with many collections, the old global state format 
> isn't good for us, so as a workaround for now we're calling 
> MIGRATESTATEFORMAT immediately after the RESTORE call.
> This bug was mentioned in the comments of SOLR-5750 and also recently 
> mentioned by [~varunthacker] in SOLR-11560
> Code patch attached, but as per [~dsmiley]'s comment in the code, fixing this 
> means at least 1 test class doesn't succeed anymore. From what I can tell, 
> the BasicDistributedZk2Test fails because it's not using the official 
> collection API to create a collection, it seems to be bypassing that and 
> manually creating cores using the core admin api instead, which I think is 
> not enough to ensure the correct ZK nodes are created. The test superclass 
> has some methods to create a collection which do use the collection api so I 
> could try fixing the tests (I'm just not that familiar with why those 
> BasicDistributed*Test classes aren't using the collection api).






[jira] [Updated] (SOLR-11832) Restore from backup creates old format collections

2018-01-08 Thread Tim Owen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tim Owen updated SOLR-11832:

Attachment: SOLR-11832.patch

> Restore from backup creates old format collections
> --
>
> Key: SOLR-11832
> URL: https://issues.apache.org/jira/browse/SOLR-11832
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Backup/Restore
>Affects Versions: 7.2, 6.6.2
>Reporter: Tim Owen
> Attachments: SOLR-11832.patch
>
>
> Restoring a collection from a backup always creates the new collection using 
> the old state format (format 1), as a global clusterstate.json file at the 
> top level of ZK. New collections should instead default to the newer 
> per-collection format (format 2) in /collections/.../state.json.
> As we're running clusters with many collections, the old global state format 
> isn't good for us, so as a workaround for now we're calling 
> MIGRATESTATEFORMAT immediately after the RESTORE call.
> This bug was mentioned in the comments of SOLR-5750 and also recently 
> mentioned by [~varunthacker] in SOLR-11560
> Code patch attached, but as per [~dsmiley]'s comment in the code, fixing this 
> means at least 1 test class doesn't succeed anymore. From what I can tell, 
> the BasicDistributedZk2Test fails because it's not using the official 
> collection API to create a collection, it seems to be bypassing that and 
> manually creating cores using the core admin api instead, which I think is 
> not enough to ensure the correct ZK nodes are created. The test superclass 
> has some methods to create a collection which do use the collection api so I 
> could try fixing the tests (I'm just not that familiar with why those 
> BasicDistributed*Test classes aren't using the collection api).






[jira] [Created] (SOLR-11832) Restore from backup creates old format collections

2018-01-08 Thread Tim Owen (JIRA)
Tim Owen created SOLR-11832:
---

 Summary: Restore from backup creates old format collections
 Key: SOLR-11832
 URL: https://issues.apache.org/jira/browse/SOLR-11832
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: Backup/Restore
Affects Versions: 6.6.2, 7.2
Reporter: Tim Owen


Restoring a collection from a backup always creates the new collection using 
the old state format (format 1), as a global clusterstate.json file at the top 
level of ZK. New collections should instead default to the newer per-collection 
format (format 2) in /collections/.../state.json.

As we're running clusters with many collections, the old global state format 
isn't good for us, so as a workaround for now we're calling MIGRATESTATEFORMAT 
immediately after the RESTORE call.

This bug was mentioned in the comments of SOLR-5750 and also recently mentioned 
by [~varunthacker] in SOLR-11560

Code patch attached, but as per [~dsmiley]'s comment in the code, fixing this 
means at least 1 test class doesn't succeed anymore. From what I can tell, the 
BasicDistributedZk2Test fails because it's not using the official collection 
API to create a collection; it seems to bypass it and manually create cores 
using the core admin API instead, which I think is not enough to ensure that 
the correct ZK nodes are created. The test superclass has some methods to 
create a collection which do use the collection api so I could try fixing the 
tests (I'm just not that familiar with why those BasicDistributed*Test classes 
aren't using the collection api).
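The RESTORE-then-MIGRATESTATEFORMAT workaround described above amounts to two Collections API calls in sequence. A sketch of building those requests follows; the host, backup name, and location are placeholder values, and only the URL construction is shown (actually sending the requests is left to the caller):

```python
# Sketch of the workaround: RESTORE the collection, then immediately
# MIGRATESTATEFORMAT it so its state moves to the per-collection format 2.
from urllib.parse import urlencode

SOLR = "http://localhost:8983/solr/admin/collections"  # placeholder host

def restore_url(collection, backup_name, location):
    """Collections API RESTORE call for a previously taken backup."""
    return SOLR + "?" + urlencode({
        "action": "RESTORE", "name": backup_name,
        "collection": collection, "location": location})

def migrate_url(collection):
    """Rewrites the restored collection's state from the global
    clusterstate.json (format 1) to /collections/<name>/state.json."""
    return SOLR + "?" + urlencode({
        "action": "MIGRATESTATEFORMAT", "collection": collection})
```

Issuing the second call right after the restore completes leaves the new collection in the per-collection state format, which is what the attached patch makes the default.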







[jira] [Commented] (SOLR-11831) Skip second grouping step if group.limit is 1 (aka Las Vegas patch)

2018-01-08 Thread Diego Ceccarelli (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316605#comment-16316605
 ] 

Diego Ceccarelli commented on SOLR-11831:
-

patch is coming, give us a few minutes :D

>  Skip second grouping step if group.limit is 1 (aka Las Vegas patch)
> 
>
> Key: SOLR-11831
> URL: https://issues.apache.org/jira/browse/SOLR-11831
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Malvina Josephidou
>Priority: Minor
>
> In cases where we do grouping and ask only for {{group.limit=1}}, it is 
> possible to skip the second grouping step. In our test datasets it improved 
> speed by around 40%.
> Essentially, in the first grouping step each shard returns the top K groups 
> based on the highest scoring document in each group. The top K groups from 
> each shard are merged in the federator and in the second step we ask all the 
> shards to return the top documents from each of the top ranking groups.
> If we only want to return the highest scoring document per group we can 
> return the top document id in the first step, merge results in the federator 
> to retain the top K groups and then skip the second grouping step entirely. 
> This is possible provided that:
> a) We do not need to know the total number of matching documents per group
> b) Within-group sort and between-group sort are the same. 
> c) We are not doing reranking (because reranking happens in the second 
> grouping step; it could be made to work with reranking, but more work and 
> some additional assumptions would be required)
>  
> This patch applies the grouping optimisation in cases where a)-c) apply and 
> we are only sorting by relevance. It is also possible to extend this work to 
> handle multiple sorting criteria and also reranking. 
> P.S. Diego and I called this patch "las vegas" because we started to write it 
> on the flight to Las Vegas for Lucene/Solr revolution. 






[jira] [Commented] (SOLR-11831) Skip second grouping step if group.limit is 1 (aka Las Vegas patch)

2018-01-08 Thread Anshum Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316593#comment-16316593
 ] 

Anshum Gupta commented on SOLR-11831:
-

[~mjosephidou] I think you forgot to attach the patch :)

>  Skip second grouping step if group.limit is 1 (aka Las Vegas patch)
> 
>
> Key: SOLR-11831
> URL: https://issues.apache.org/jira/browse/SOLR-11831
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Malvina Josephidou
>Priority: Minor
>
> In cases where we do grouping and ask only for {{group.limit=1}}, it is 
> possible to skip the second grouping step. In our test datasets it improved 
> speed by around 40%.
> Essentially, in the first grouping step each shard returns the top K groups 
> based on the highest scoring document in each group. The top K groups from 
> each shard are merged in the federator and in the second step we ask all the 
> shards to return the top documents from each of the top ranking groups.
> If we only want to return the highest scoring document per group we can 
> return the top document id in the first step, merge results in the federator 
> to retain the top K groups and then skip the second grouping step entirely. 
> This is possible provided that:
> a) We do not need to know the total number of matching documents per group
> b) Within-group sort and between-group sort are the same. 
> c) We are not doing reranking (because reranking happens in the second 
> grouping step; it could be made to work with reranking, but more work and 
> some additional assumptions would be required)
>  
> This patch applies the grouping optimisation in cases where a)-c) apply and 
> we are only sorting by relevance. It is also possible to extend this work to 
> handle multiple sorting criteria and also reranking. 
> P.S. Diego and I called this patch "las vegas" because we started to write it 
> on the flight to Las Vegas for Lucene/Solr revolution. 






[jira] [Created] (SOLR-11831) Skip second grouping step if group.limit is 1 (aka Las Vegas patch)

2018-01-08 Thread Malvina Josephidou (JIRA)
Malvina Josephidou created SOLR-11831:
-

 Summary:  Skip second grouping step if group.limit is 1 (aka Las 
Vegas patch)
 Key: SOLR-11831
 URL: https://issues.apache.org/jira/browse/SOLR-11831
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
Reporter: Malvina Josephidou
Priority: Minor


In cases where we do grouping and ask only for {{group.limit=1}}, it is 
possible to skip the second grouping step. In our test datasets it improved 
speed by around 40%.

Essentially, in the first grouping step each shard returns the top K groups 
based on the highest scoring document in each group. The top K groups from each 
shard are merged in the federator and in the second step we ask all the shards 
to return the top documents from each of the top ranking groups.

If we only want to return the highest scoring document per group we can return 
the top document id in the first step, merge results in the federator to retain 
the top K groups and then skip the second grouping step entirely. This is 
possible provided that:

a) We do not need to know the total number of matching documents per group
b) Within-group sort and between-group sort are the same. 
c) We are not doing reranking (because reranking happens in the second 
grouping step; it could be made to work with reranking, but more work and 
some additional assumptions would be required)
 
This patch applies the grouping optimisation in cases where a)-c) apply and we 
are only sorting by relevance. It is also possible to extend this work to 
handle multiple sorting criteria and also reranking. 

P.S. Diego and I called this patch "las vegas" because we started to write it 
on the flight to Las Vegas for Lucene/Solr Revolution. 
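As a language-neutral illustration of the optimisation described above (a toy sketch, not the Solr patch; the shard and document structures here are invented stand-ins for Solr's internals), the federator can keep the best document it already received per group in step one and skip the second round-trip entirely when only one document per group is wanted:

```python
# Toy model of distributed grouping with group.limit=1.
def shard_top_groups(shard_docs, k):
    """Step 1 on each shard: top-k groups, each keyed by its best-scoring
    document, which is returned along with the group."""
    best = {}
    for doc in shard_docs:
        g = doc["group"]
        if g not in best or doc["score"] > best[g]["score"]:
            best[g] = doc
    return sorted(best.values(), key=lambda d: d["score"], reverse=True)[:k]

def federated_group_limit_1(shards, k):
    """Federator merge: every shard already sent its best doc per group,
    so the usual second per-group fetch is skipped entirely."""
    merged = {}
    for shard_docs in shards:
        for doc in shard_top_groups(shard_docs, k):
            g = doc["group"]
            if g not in merged or doc["score"] > merged[g]["score"]:
                merged[g] = doc
    return sorted(merged.values(), key=lambda d: d["score"], reverse=True)[:k]
```

The saving is the eliminated second round-trip to every shard, which is where the reported ~40% speedup comes from; the sketch also makes condition b) concrete, since the same score ordering is used both within and between groups.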








[jira] [Updated] (SOLR-10716) Add termVectors Stream Evaluator

2018-01-08 Thread Joel Bernstein (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-10716?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Joel Bernstein updated SOLR-10716:
--
Attachment: SOLR-10716.patch

> Add termVectors Stream Evaluator
> 
>
> Key: SOLR-10716
> URL: https://issues.apache.org/jira/browse/SOLR-10716
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Reporter: Joel Bernstein
>Assignee: Joel Bernstein
> Fix For: 7.0
>
> Attachments: SOLR-10716.patch, SOLR-10716.patch, SOLR-10716.patch, 
> SOLR-10716.patch, SOLR-10716.patch
>
>
> The termVectors Stream Evaluator returns tf-idf word vectors for a text field 
> in a list of tuples. 
> Syntax:
> {code}
> let(a=select(search(...), analyze(a, body) as terms),
>  b=termVectors(a, minDocFreq=".00", maxDocFreq="1.0")) 
> {code}
> The code above performs a search then uses the *select* stream and *analyze* 
> evaluator to attach a list of terms to each document.
>  
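As a rough idea of what such per-document tf-idf term vectors contain, here is a toy Python sketch with a simplified idf formula and document-frequency bounds in the spirit of minDocFreq/maxDocFreq; it is an invented illustration, not the actual evaluator's implementation:

```python
# Toy tf-idf term vectors: one {term: weight} dict per document, with
# terms outside the document-frequency bounds filtered out.
import math

def term_vectors(docs_terms, min_df=0.0, max_df=1.0):
    n = len(docs_terms)
    df = {}
    for terms in docs_terms:
        for t in set(terms):
            df[t] = df.get(t, 0) + 1
    # keep only terms whose fraction of documents lies in [min_df, max_df]
    vocab = [t for t, d in sorted(df.items()) if min_df <= d / n <= max_df]
    vectors = []
    for terms in docs_terms:
        vec = {}
        for t in vocab:
            tf = terms.count(t)
            if tf:
                vec[t] = tf * math.log(n / df[t])  # simple tf-idf weight
        vectors.append(vec)
    return vectors
```

This mirrors the pipeline in the expression above: the analyzed term lists attached per document are the input, and the evaluator turns them into weighted vectors.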






[JENKINS] Lucene-Solr-Tests-master - Build # 2251 - Still unstable

2018-01-08 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-master/2251/

13 tests failed.
FAILED:  
org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance

Error Message:
training took more than 10s: 12s

Stack Trace:
java.lang.AssertionError: training took more than 10s: 12s
at 
__randomizedtesting.SeedInfo.seed([6AE944E07A33D8D5:AD08B6C21187E07A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.lucene.classification.BooleanPerceptronClassifierTest.testPerformance(BooleanPerceptronClassifierTest.java:93)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:829)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:879)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:890)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin

Error Message:
expected:<0> but was:<2>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<2>
at 
__randomizedtesting.SeedInfo.seed([C3DC9995477B3A24:790EF6EDC455D431]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.handler.dataimport.TestContentStreamDataSource.testCommitWithin(TestCont

[JENKINS] Lucene-Solr-7.x-Linux (64bit/jdk-9.0.1) - Build # 1147 - Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Linux/1147/
Java: 64bit/jdk-9.0.1 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

4 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([6393AC8795E31C94]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.cloud.AbstractZkTestCase.azt_afterClass(AbstractZkTestCase.java:147)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.base/java.lang.Thread.run(Thread.java:844)


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.BasicZkTest

Error Message:
SolrCore.getOpenCount()==2

Stack Trace:
java.lang.RuntimeException: SolrCore.getOpenCount()==2
at __randomizedtesting.SeedInfo.seed([6393AC8795E31C94]:0)
at org.apache.solr.util.TestHarness.close(TestHarness.java:379)
at org.apache.solr.SolrTestCaseJ4.deleteCore(SolrTestCaseJ4.java:792)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:288)
at jdk.internal.reflect.GeneratedMethodAccessor39.invoke(Unknown Source)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:564)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:897)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apa

[jira] [Resolved] (SOLR-11327) MODIFYCOLLECTION should be able to edit policy attribute

2018-01-08 Thread Noble Paul (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11327?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Noble Paul resolved SOLR-11327.
---
Resolution: Duplicate

> MODIFYCOLLECTION should be able to edit policy attribute
> 
>
> Key: SOLR-11327
> URL: https://issues.apache.org/jira/browse/SOLR-11327
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Noble Paul
>Assignee: Noble Paul
>







[jira] [Assigned] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread David Smiley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Smiley reassigned SOLR-11809:
---

 Assignee: Christine Poerschke
Fix Version/s: 7.2.1

> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
>Assignee: Christine Poerschke
> Fix For: 7.2.1
>
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501 .
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] 
> o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select 
> params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter 
> must be a RankQuery
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked - the same issue exists when I try to invoke the _rerank_ 
> query parser.






[jira] [Commented] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316420#comment-16316420
 ] 

David Smiley commented on SOLR-11809:
-

+1 very good; commit away

I definitely overlooked this in SOLR-11501. Also, the tests for this feature use 
{{q=\{!edismax\}...}}, which is unlike how typical Solr queries are issued 
(they would use defType).

> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501 .
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] 
> o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select 
> params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] 
> o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter 
> must be a RankQuery
>   at 
> org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked - the same issue exists when I try to invoke the _rerank_ 
> query parser.






[JENKINS] Lucene-Solr-7.x-Windows (32bit/jdk1.8.0_144) - Build # 389 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-7.x-Windows/389/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

5 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.handler.dataimport.TestVariableResolver

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001\dih-properties-007:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001\dih-properties-007

C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001\dih-properties-007:
 java.nio.file.AccessDeniedException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001\dih-properties-007
   
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-7.x-Windows\solr\build\contrib\solr-dataimporthandler\test\J1\temp\solr.handler.dataimport.TestVariableResolver_91AC00AAA15BD991-001

at __randomizedtesting.SeedInfo.seed([91AC00AAA15BD991]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication

Error Message:
Index: 0, Size: 0

Stack Trace:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at 
__randomizedtesting.SeedInfo.seed([FF9E6FF078A35F97:EBD634A55BA4E289]:0)
at java.util.ArrayList.rangeCheck(ArrayList.java:653)
at java.util.ArrayList.get(ArrayList.java:429)
at 
org.apache.solr.handler.TestReplicationHandler.doTestIndexAndConfigReplication(TestReplicationHandler.java:561)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1737)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:934)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:970)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:984)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:4

[jira] [Commented] (SOLR-11714) AddReplicaSuggester endless loop

2018-01-08 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316399#comment-16316399
 ] 

Noble Paul commented on SOLR-11714:
---

There is no way for the Policy framework to decide the number of replicas that 
need to be added to achieve a given throughput. Instead, ComputePlanAction 
should only request adding a certain number of replicas (worst case, one at a 
time), observe the result, and then add more if needed.
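
The add-and-observe approach described above could be sketched roughly as follows. This is an illustrative sketch only: {{addUntilSatisfied}}, {{observedRate}}, and {{addOneReplica}} are hypothetical names, not Solr APIs.

```java
import java.util.function.DoubleSupplier;

// Hypothetical sketch: add replicas one at a time, re-observing the
// load after each addition, instead of trying to compute the final
// replica count up front.
class IncrementalAddReplica {
    /**
     * Adds replicas one at a time until the observed per-replica rate
     * drops to the target, or a hard cap is reached.
     */
    static int addUntilSatisfied(double targetRate,
                                 DoubleSupplier observedRate,
                                 Runnable addOneReplica,
                                 int maxAdds) {
        int added = 0;
        while (observedRate.getAsDouble() > targetRate && added < maxAdds) {
            addOneReplica.run(); // worst case: one replica per iteration
            added++;             // then re-observe before adding another
        }
        return added;
    }
}
```

The cap ({{maxAdds}}) is what keeps a sketch like this from reproducing the endless loop this issue describes.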

> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.






[jira] [Commented] (SOLR-11714) AddReplicaSuggester endless loop

2018-01-08 Thread Andrzej Bialecki (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11714?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316393#comment-16316393
 ] 

Andrzej Bialecki  commented on SOLR-11714:
--

The current behavior of the framework is trappy because the user has to modify 
the preferences when adding a {{searchRate}} trigger in order to avoid the loop - 
if they forget to do that, they can bring autoscaling down.

There are two things we can do here. First, {{ComputePlanAction}} should be able 
to detect infinite (or very long) loops based roughly on the cluster size and 
the total number of replicas across the cluster; e.g., if we have a cluster of 10 
nodes and 20 replicas but the loop generated 1000 operations, then something is 
definitely wrong.

Second, can we use some default limit, e.g. 2 * replication factor, or something 
similar, for the ADDREPLICA suggester, at least for events produced by the 
{{searchRate}} trigger? Where do you think this default should be initialized?
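
The loop-detection idea could be sketched as a simple operation budget derived from cluster size. The class and threshold formula below are illustrative assumptions, not part of Solr.

```java
// Hypothetical guard: if a single compute pass emits far more operations
// than the cluster size and replica count could plausibly justify, stop
// instead of looping forever (e.g. 10 nodes and 20 replicas should never
// need 1000 operations).
class SuggestionLoopGuard {
    private final int maxOps;
    private int emitted = 0;

    SuggestionLoopGuard(int nodes, int totalReplicas) {
        // budget proportional to cluster size; the exact formula is a guess
        this.maxOps = Math.max(1, nodes * totalReplicas);
    }

    /** Returns false once the operation budget is exhausted. */
    boolean allowNext() {
        emitted++;
        return emitted <= maxOps;
    }
}
```

The compute loop would call {{allowNext()}} before emitting each operation and abort (or log an error) once it returns false.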

> AddReplicaSuggester endless loop
> 
>
> Key: SOLR-11714
> URL: https://issues.apache.org/jira/browse/SOLR-11714
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Affects Versions: 7.2, master (8.0)
>Reporter: Andrzej Bialecki 
>Assignee: Noble Paul
> Attachments: 7.2-disable-search-rate-trigger.diff, SOLR-11714.diff
>
>
> {{SearchRateTrigger}} events are processed by {{ComputePlanAction}} and 
> depending on the condition either a MoveReplicaSuggester or 
> AddReplicaSuggester is selected.
> When {{AddReplicaSuggester}} is selected there's currently a bug in master, 
> due to an API change (Hint.COLL_SHARD should be used instead of Hint.COLL). 
> However, after fixing that bug {{ComputePlanAction}} goes into an endless 
> loop because the suggester endlessly keeps creating new operations.
> Please see the patch that fixes the Hint.COLL_SHARD issue and modifies the 
> unit test to illustrate this failure.






[jira] [Commented] (SOLR-11730) Test NodeLost / NodeAdded dynamics

2018-01-08 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11730?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316378#comment-16316378
 ] 

ASF subversion and git services commented on SOLR-11730:


Commit a9fec9bf7caee2620d09086efde4a29b245aab7b in lucene-solr's branch 
refs/heads/master from [~ab]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a9fec9b ]

SOLR-11730: Collect more stats in the benchmark. Add simulation framework 
package docs.


> Test NodeLost / NodeAdded dynamics
> --
>
> Key: SOLR-11730
> URL: https://issues.apache.org/jira/browse/SOLR-11730
> Project: Solr
>  Issue Type: Sub-task
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: AutoScaling
>Reporter: Andrzej Bialecki 
>
> Let's consider a "flaky node" scenario.
> A node is going up and down at short intervals (eg. due to a flaky network 
> cable). If the frequency of these events coincides with {{waitFor}} interval 
> in {{nodeLost}} trigger configuration, the node may never be reported to the 
> autoscaling framework as lost. Similarly it may never be reported as added 
> back if it's lost again within the {{waitFor}} period of {{nodeAdded}} 
> trigger.
> Other scenarios are possible here too, depending on timing:
> * node being constantly reported as lost
> * node being constantly reported as added
> One possible solution for the autoscaling triggers is that the framework 
> should keep a short-term ({{waitFor * 2}} long?) memory of a node state that 
> the trigger is tracking in order to eliminate flaky nodes (ie. those that 
> transitioned between states more than once within the period).
> Situation like this is detrimental to SolrCloud behavior regardless of 
> autoscaling actions, so it should probably be addressed at a node level by 
> eg. shutting down Solr node after the number of disconnects in a time window 
> reaches a certain threshold.






[jira] [Commented] (SOLR-11598) Export Writer needs to support more than 4 Sort fields - Say 10, ideally it should not be bound at all, but 4 seems to really short sell the StreamRollup capabilities.

2018-01-08 Thread Amrit Sarkar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-11598?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16316288#comment-16316288
 ] 

Amrit Sarkar commented on SOLR-11598:
-

[~aroopganguly],

Do you have any metrics and numbers on the Export writer with more than 4 sort 
fields? We are looking forward to them.

> Export Writer needs to support more than 4 Sort fields - Say 10, ideally it 
> should not be bound at all, but 4 seems to really short sell the StreamRollup 
> capabilities.
> ---
>
> Key: SOLR-11598
> URL: https://issues.apache.org/jira/browse/SOLR-11598
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: streaming expressions
>Affects Versions: 6.6.1, 7.0
>Reporter: Aroop
>  Labels: patch
> Attachments: SOLR-11598-6_6-streamtests, SOLR-11598-6_6.patch, 
> SOLR-11598-master.patch, SOLR-11598.patch
>
>
> I am a user of Streaming and I am currently trying to use rollups on an 10 
> dimensional document.
> I am unable to get correct results on this query as I am bounded by the 
> limitation of the export handler which supports only 4 sort fields.
> I do not see why this needs to be the case, as it could very well be 10 or 20.
> My current needs would be satisfied with 10, but one would want to ask why 
> can't it be any decent integer n, beyond which we know performance degrades, 
> but even then it should be caveat emptor.
> [~varunthacker] 
> Code Link:
> https://github.com/apache/lucene-solr/blob/19db1df81a18e6eb2cce5be973bf2305d606a9f8/solr/core/src/java/org/apache/solr/handler/ExportWriter.java#L455
> Error
> null:java.io.IOException: A max of 4 sorts can be specified
>   at 
> org.apache.solr.handler.ExportWriter.getSortDoc(ExportWriter.java:452)
>   at org.apache.solr.handler.ExportWriter.writeDocs(ExportWriter.java:228)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$1(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeIterator(JavaBinCodec.java:664)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:333)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$null$2(ExportWriter.java:219)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeKnownType(JavaBinCodec.java:354)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeVal(JavaBinCodec.java:223)
>   at org.apache.solr.common.util.JavaBinCodec$1.put(JavaBinCodec.java:394)
>   at 
> org.apache.solr.handler.ExportWriter.lambda$write$3(ExportWriter.java:217)
>   at 
> org.apache.solr.common.util.JavaBinCodec.writeMap(JavaBinCodec.java:437)
>   at org.apache.solr.handler.ExportWriter.write(ExportWriter.java:215)
>   at org.apache.solr.core.SolrCore$3.write(SolrCore.java:2601)
>   at 
> org.apache.solr.response.QueryResponseWriterUtil.writeQueryResponse(QueryResponseWriterUtil.java:49)
>   at 
> org.apache.solr.servlet.HttpSolrCall.writeResponse(HttpSolrCall.java:809)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:538)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:361)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:305)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1691)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.ja

[JENKINS] Lucene-Solr-master-Windows (32bit/jdk1.8.0_144) - Build # 7101 - Still Unstable!

2018-01-08 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/7101/
Java: 32bit/jdk1.8.0_144 -server -XX:+UseG1GC

6 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.lucene.index.TestBackwardsCompatibility

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_6525B580EE98F3F9-001\3.2.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_6525B580EE98F3F9-001\3.2.0-nocfs-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_6525B580EE98F3F9-001\3.2.0-nocfs-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\lucene\build\backward-codecs\test\J0\temp\lucene.index.TestBackwardsCompatibility_6525B580EE98F3F9-001\3.2.0-nocfs-001

at __randomizedtesting.SeedInfo.seed([6525B580EE98F3F9]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  junit.framework.TestSuite.org.apache.solr.SampleTest

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_BDEDC65708B666B9-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_BDEDC65708B666B9-001\init-core-data-001
 

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of 
attempts):
   
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_BDEDC65708B666B9-001\init-core-data-001:
 java.nio.file.DirectoryNotEmptyException: 
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.SampleTest_BDEDC65708B666B9-001\init-core-data-001

at __randomizedtesting.SeedInfo.seed([BDEDC65708B666B9]:0)
at org.apache.lucene.util.IOUtils.rm(IOUtils.java:329)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:216)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
at java.lang.Thread.run(Thread.java:748)


FAILED:  
junit.framework.TestSuite.org.apache.solr.search.function.TestSortByMinMaxFunction

Error Message:
Could not remove the following files (in the order of attempts):
C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\build\solr-core\test\J0\temp\solr.search.function.TestSortByMinMaxFunction_BDEDC65708B666B9-001\init-core-data-001\spellchec

[jira] [Updated] (LUCENE-8115) fail precommit on unnecessary {@inheritDoc} use

2018-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-8115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated LUCENE-8115:

Attachment: LUCENE-8115-step1.patch

Attaching patch for step 1.

> fail precommit on unnecessary {@inheritDoc} use
> ---
>
> Key: LUCENE-8115
> URL: https://issues.apache.org/jira/browse/LUCENE-8115
> Project: Lucene - Core
>  Issue Type: Task
>Reporter: Christine Poerschke
>Assignee: Christine Poerschke
>Priority: Minor
> Attachments: LUCENE-8115-step1.patch, LUCENE-8115-step2.patch, 
> LUCENE-8115-step2.patch
>
>
> * Step 1: identify and remove existing unnecessary {{\{@inheritDoc\}}} use 
> e.g. via IDE tooling or {{git grep -C 1 inheritDoc}}.
> * Step 2: change {{ant validate}} so that precommit fails if/when any new 
> unnecessary {{\{@inheritDoc\}}} are introduced.






[jira] [Updated] (SOLR-11809) QueryComponent.prepare rq parameter parsing fails under SOLR 7.2

2018-01-08 Thread Christine Poerschke (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-11809?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Christine Poerschke updated SOLR-11809:
---
Attachment: SOLR-11809.patch

bq. ... for "rq" to be parsed without care for whatever defType is. ... RQ must 
produce a query of a certain type ...

Makes sense to me. Attaching revised patch.
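To make the intent concrete, here is a self-contained sketch (stub types, not the actual org.apache.solr classes or the patch itself) of the kind of guard being discussed: however the "rq" parameter is parsed, the resulting query must be a RankQuery, and anything else is rejected with the error users are seeing.

```java
// Stub types for illustration; the real Query/RankQuery live in Solr/Lucene.
class Query {}
class RankQuery extends Query {}

public class RqTypeCheckSketch {
    // Sketch of the guard: the parsed "rq" result must be a RankQuery,
    // regardless of what defType would do for the main query.
    static RankQuery requireRankQuery(Query parsed) {
        if (!(parsed instanceof RankQuery)) {
            throw new IllegalArgumentException("rq parameter must be a RankQuery");
        }
        return (RankQuery) parsed;
    }

    public static void main(String[] args) {
        // A RankQuery passes the guard unchanged.
        System.out.println(requireRankQuery(new RankQuery()).getClass().getSimpleName());
        try {
            // A plain query (e.g. one produced by a defType-style parser) is rejected.
            requireRankQuery(new Query());
        } catch (IllegalArgumentException expected) {
            System.out.println(expected.getMessage());
        }
    }
}
```

The point of the revised patch, as I understand it, is that this check should not depend on defType: "rq" gets parsed on its own terms and only the type of the result matters.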

> QueryComponent.prepare rq parameter parsing fails under SOLR 7.2
> 
>
> Key: SOLR-11809
> URL: https://issues.apache.org/jira/browse/SOLR-11809
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>Affects Versions: 7.2
> Environment: Windows 10, java version "1.8.0_151"
>Reporter: Dariusz Wojtas
> Attachments: SOLR-11809.patch, SOLR-11809.patch, ltr-sample.zip
>
>
> The LTR functionality that works under SOLR 7.0 and 7.1 stopped working in 
> 7.2.
> From the solr-user mailing list it appears it might be related to SOLR-11501 .
> I am attaching the minimal working collection definition (attached 
> [^ltr-sample.zip]) that shows the problem.
> Please deploy the collection (unpack under "server/solr"), run solr and 
> invoke the URL below.
>   http://localhost:8983/solr/ltr-sample/select?q=*:*
> Behaviour:
> * under 7.0 and 7.1 - empty resultset is returned (there is no data in the 
> collection)
> * under 7.2 - error: "rq parameter must be a RankQuery". The stacktrace
> {code}
> 2018-01-02 20:51:06.807 INFO  (qtp205125520-20) [   x:ltr-sample] o.a.s.c.S.Request [ltr-sample]  webapp=/solr path=/select params={q=*:*&_=1514909140928} status=400 QTime=23
> 2018-01-02 21:04:27.293 ERROR (qtp205125520-17) [   x:ltr-sample] o.a.s.h.RequestHandlerBase org.apache.solr.common.SolrException: rq parameter must be a RankQuery
>   at org.apache.solr.handler.component.QueryComponent.prepare(QueryComponent.java:183)
>   at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:269)
>   at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:177)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:2503)
>   at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:710)
>   at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:516)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:382)
>   at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:326)
>   at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1751)
>   at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
>   [..]
> {code}
> I have checked - the same issue exists when I try to invoke the _rerank_ 
> query parser.





