[jira] [Commented] (SOLR-9708) Expose UnifiedHighlighter in Solr

2017-01-16 Thread David Smiley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825503#comment-15825503 ]

David Smiley commented on SOLR-9708:


As I was working on highlighter documentation... I think the default we use for 
hl.maxAnalyzedChars should be equal to that of the Original Highlighter, which 
is 51200 -- certainly not less.  After all, it does a faster job, and this 
parameter is a performance-oriented threshold.  I see we're currently using 
the default in the UnifiedHighlighter which is 1. This fits within the 
overarching goal of making the transition to this highlighter 
straightforward, minimizing gotchas.
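For illustration only (the handler name is a placeholder, not part of this issue; the hl.* parameter names are real Solr request parameters): the 51200-character cap could be made explicit as handler defaults in solrconfig.xml, e.g.:

```xml
<!-- Sketch: pin the unified highlighter and the 51200-char analysis cap
     as request-handler defaults; "/select" is just an example handler. -->
<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="hl">true</str>
    <str name="hl.method">unified</str>
    <int name="hl.maxAnalyzedChars">51200</int>
  </lst>
</requestHandler>
```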

[~jim.ferenczi] can I simply commit a 1-liner change to set this default in 6.4 
or shall I file a new issue?

> Expose UnifiedHighlighter in Solr
> ---------------------------------
>
> Key: SOLR-9708
> URL: https://issues.apache.org/jira/browse/SOLR-9708
> Project: Solr
>  Issue Type: New Feature
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: Timothy M. Rodriguez
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: SOLR-9708.patch
>
>
> This ticket is for creating a Solr plugin that can utilize the new 
> UnifiedHighlighter which was initially committed in 
> https://issues.apache.org/jira/browse/LUCENE-7438



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9935) When hl.method=unified add support for hl.fragsize param

2017-01-16 Thread David Smiley (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15825492#comment-15825492 ]

David Smiley commented on SOLR-9935:


While documenting the highlighters in the Solr Ref Guide, I overlooked that 
{{hl.fragsize}} of 0 is a special value meaning don't do any fragmenting.  I 
should add this as a special case to use the WholeBreakIterator.  
[~jim.ferenczi] is it too late for 6.4?
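A sketch of how that would look to a user, assuming the special case lands as proposed (hl.* names are real Solr parameters; mapping 0 to WholeBreakIterator is this proposal, not shipped behavior):

```xml
<!-- Sketch: hl.fragsize=0 would mean "no fragmenting" -- return the whole
     field value as a single highlighted snippet (WholeBreakIterator). -->
<lst name="defaults">
  <str name="hl">true</str>
  <str name="hl.method">unified</str>
  <int name="hl.fragsize">0</int>
</lst>
```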

> When hl.method=unified add support for hl.fragsize param
> --------------------------------------------------------
>
> Key: SOLR-9935
> URL: https://issues.apache.org/jira/browse/SOLR-9935
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public (Default Security Level. Issues are Public) 
>  Components: highlighter
>Reporter: David Smiley
>Assignee: David Smiley
> Fix For: 6.4
>
> Attachments: SOLR_9935_UH_fragsize.patch, SOLR_9935_UH_fragsize.patch
>
>
> In LUCENE-7620 the UnifiedHighlighter is getting a BreakIterator that allows 
> it to support the equivalent of Solr's {{hl.fragsize}}.  So lets support this 
> on the Solr side.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2681 - Unstable!

2017-01-16 Thread Ishan Chattopadhyaya
I'll take a look at these failures related to Secure Impersonation and
Delegation Tokens tests tomorrow onwards.

On Tue, Jan 17, 2017 at 9:31 AM, Policeman Jenkins Server <
jenk...@thetaphi.de> wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2681/
> Java: 32bit/jdk1.8.0_112 -server -XX:+UseG1GC
>
> 1 tests failed.
> FAILED:  org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew
>
> Error Message:
> expected:<200> but was:<403>
>
> Stack Trace:
> java.lang.AssertionError: expected:<200> but was:<403>
> at __randomizedtesting.SeedInfo.seed([A46EDC52AFCF9E27:93F5284C97034383]:0)
> at org.junit.Assert.fail(Assert.java:93)
> at org.junit.Assert.failNotEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:128)
> at org.junit.Assert.assertEquals(Assert.java:472)
> at org.junit.Assert.assertEquals(Assert.java:456)
> at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.renewDelegationToken(TestDelegationWithHadoopAuth.java:118)
> at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.verifyDelegationTokenRenew(TestDelegationWithHadoopAuth.java:301)
> at org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew(TestDelegationWithHadoopAuth.java:318)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
> at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
> at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
> at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> 

[JENKINS] Lucene-Solr-Tests-6.4 - Build # 4 - Unstable

2017-01-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.4/4/

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor142.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor142.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([3B1A7346C7B11289]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269)
at sun.reflect.GeneratedMethodAccessor56.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2681 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2681/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseG1GC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew

Error Message:
expected:<200> but was:<403>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<403>
at 
__randomizedtesting.SeedInfo.seed([A46EDC52AFCF9E27:93F5284C97034383]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.renewDelegationToken(TestDelegationWithHadoopAuth.java:118)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.verifyDelegationTokenRenew(TestDelegationWithHadoopAuth.java:301)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenRenew(TestDelegationWithHadoopAuth.java:318)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 

[JENKINS] Lucene-Solr-6.x-Windows (32bit/jdk1.8.0_112) - Build # 694 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Windows/694/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseG1GC

2 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryZkTest.test

Error Message:
Mismatch in counts between replicas

Stack Trace:
java.lang.AssertionError: Mismatch in counts between replicas
at 
__randomizedtesting.SeedInfo.seed([F8EB40306B03D8AC:70BF7FEAC5FFB554]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.RecoveryZkTest.assertShardConsistency(RecoveryZkTest.java:143)
at org.apache.solr.cloud.RecoveryZkTest.test(RecoveryZkTest.java:126)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.handler.TestReplicationHandler.doTestStressReplication

Error Message:
[index.20170117065457557, index.20170117065458068, index.properties, 
replication.properties, snapshot_metadata] expected:<1> but was:<2>

Stack Trace:
java.lang.AssertionError: [index.20170117065457557, 

[JENKINS] Lucene-Solr-master-Linux (64bit/jdk1.8.0_112) - Build # 18789 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18789/
Java: 64bit/jdk1.8.0_112 -XX:+UseCompressedOops -XX:+UseG1GC

1 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at 
org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 1) 
Thread[id=13291, name=jetty-launcher-2390-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)   
 2) Thread[id=13287, name=jetty-launcher-2390-thread-1-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation] 
at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
 at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
 at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)  
   at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
 at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)  
   at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
 at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
 at 
org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
 at 
org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
 at 
org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
 at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)   
  at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE 
scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation: 
   1) Thread[id=13291, name=jetty-launcher-2390-thread-2-EventThread, 
state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
at sun.misc.Unsafe.park(Native Method)
at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
at 
org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
at 
org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
at 

[jira] [Commented] (SOLR-9114) NPE using TermVectorComponent in combinition with ExactStatsCache - Solr6

2017-01-16 Thread Cao Manh Dat (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-9114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15824853#comment-15824853 ]

Cao Manh Dat commented on SOLR-9114:


[~varunthacker] Hi Varun, I would like to commit the patch soon, unless you 
have a down vote for the patch.

> NPE using TermVectorComponent in combinition with ExactStatsCache - Solr6
> -------------------------------------------------------------------------
>
> Key: SOLR-9114
> URL: https://issues.apache.org/jira/browse/SOLR-9114
> Project: Solr
>  Issue Type: Bug
>Affects Versions: 6.0
>Reporter: Andreas Daffner
>Assignee: Varun Thacker
> Attachments: SOLR-9114.patch
>
>
> Hello,
> I am getting an NPE when using the TermVectorComponent in combination with 
> ExactStatsCache.
> I am using SOLR 6.0.0 with 4 shards in total.
> This bug is a duplicate of SOLR-8459.
> It was already fixed in SOLR-8459 for SOLR 5.x but it is still open in the 
> new SOLR 6.0.0.
> Can you please fix it for the new SOLR 6.0.0 as well? I already tried the 
> patch of the 5.x bugfix on SOLR 6.0.0 but the bug is still present.
> I set up my solrconfig.xml as described in these 2 links:
> TermVectorComponent:
> https://cwiki.apache.org/confluence/display/solr/The+Term+Vector+Component
> ExactStatsCache:
> https://cwiki.apache.org/confluence/display/solr/Distributed+Requests#Configuring+statsCache+implementation
> My snippets from solrconfig.xml:
> {code}
> ...
>   <statsCache class="org.apache.solr.search.stats.ExactStatsCache"/>
>   <searchComponent name="tvComponent"
>       class="org.apache.solr.handler.component.TermVectorComponent"/>
>   <requestHandler name="/tvrh"
>       class="org.apache.solr.handler.component.SearchHandler">
>     <lst name="defaults">
>       <bool name="tv">true</bool>
>     </lst>
>     <arr name="last-components">
>       <str>tvComponent</str>
>     </arr>
>   </requestHandler>
> ...
> {code}
> Unfortunately a request to SOLR like 
> "http://host/solr/corename/tvrh?q=site_url_id:74" ends up with this NPE:
> {code}
> 69730 ERROR (qtp110456297-14) [c:SingleDomainSite_28 s:shard1 r:core_node1 
> x:SingleDomainSite_28_shard1_replica1] o.a.s.s.HttpSolrCall 
> null:java.lang.NullPointerException
> at 
> org.apache.solr.handler.component.TermVectorComponent.finishStage(TermVectorComponent.java:451)
> at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:426)
> at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:155)
> at org.apache.solr.core.SolrCore.execute(SolrCore.java:2033)
> at org.apache.solr.servlet.HttpSolrCall.execute(HttpSolrCall.java:652)
> at org.apache.solr.servlet.HttpSolrCall.call(HttpSolrCall.java:460)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:229)
> at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:184)
> at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1668)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:581)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
> at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1160)
> at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:511)
> at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
> at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1092)
> at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
> at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:213)
> at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
> at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
> at org.eclipse.jetty.server.Server.handle(Server.java:518)
> at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:308)
> at 
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:244)
> at 
> org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> at 
> org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> at 
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> at 
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> at 
> 

[JENKINS] Lucene-Solr-SmokeRelease-6.x - Build # 239 - Still Failing

2017-01-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-6.x/239/

No tests ran.

Build Log:
[...truncated 41964 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 476 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 260 files to 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.8 JAVA_HOME=/home/jenkins/tools/java/latest1.8
   [smoker] NOTE: output encoding is UTF-8
   [smoker] 
   [smoker] Load release URL 
"file:/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/dist/"...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.01 sec (24.2 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-6.5.0-src.tgz...
   [smoker] 30.6 MB in 0.03 sec (1174.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.5.0.tgz...
   [smoker] 65.1 MB in 0.06 sec (1164.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-6.5.0.zip...
   [smoker] 76.1 MB in 0.07 sec (1168.3 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-6.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.5.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.8...
   [smoker]   got 6212 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-6.5.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run "ant validate"
   [smoker] run tests w/ Java 8 and testArgs='-Dtests.slow=false'...
   [smoker] test demo with 1.8...
   [smoker]   got 229 hits for query "lucene"
   [smoker] checkindex with 1.8...
   [smoker] generate javadocs w/ Java 8...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.2 MB in 0.00 sec (49.7 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-6.5.0-src.tgz...
   [smoker] 40.1 MB in 0.69 sec (58.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.5.0.tgz...
   [smoker] 140.5 MB in 1.02 sec (137.4 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-6.5.0.zip...
   [smoker] 150.0 MB in 0.14 sec (1099.8 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-6.5.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-6.5.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker] copying unpacked distribution for Java 8 ...
   [smoker] test solr example w/ Java 8...
   [smoker]   start Solr instance 
(log=/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   Running techproducts example on port 8983 from 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8
   [smoker] Creating Solr home directory 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-6.x/lucene/build/smokeTestRelease/tmp/unpack/solr-6.5.0-java8/example/techproducts/solr
   [smoker] 
   [smoker] Starting up Solr on port 8983 using command:
   [smoker] bin/solr start -p 8983 -s "example/techproducts/solr"
   [smoker] 
   [smoker] Waiting up to 180 seconds to see Solr running on port 8983 [|]  
 [/]   [-]   [\]  
   [smoker] Started Solr server on port 8983 (pid=29414). Happy searching!
   [smoker] 
   [smoker] 
 

[jira] [Commented] (SOLR-7268) Add a script to pipe data from other programs or files to Solr using SolrJ

2017-01-16 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824775#comment-15824775
 ] 

Noble Paul commented on SOLR-7268:
--

You are right, the {{-=}} can lead to conflicts. We can 
just have a generic param like {{-params key1=val1=val2}} etc. Anyway, 
nobody has yet picked up the implementation.

> Add a script to pipe data from other programs or files to Solr using SolrJ
> --
>
> Key: SOLR-7268
> URL: https://issues.apache.org/jira/browse/SOLR-7268
> Project: Solr
>  Issue Type: Improvement
>  Components: scripts and tools, SolrJ
>Reporter: Noble Paul
>Assignee: Noble Paul
>
> I should be able to pipe JSON/XML/CSV or whatever is possible at the 
> {{/update/*}} to a  command which in turn uses SolrJ to send the docs to the 
> correct leader in native format. 
> In the following examples, all connection details of the cluster are put into 
> a file called solrj.properties
> example :
> {noformat}
> #post a file
> cat myjson.json | bin/post -c gettingstarted -s http://localhost:8983/solr 
> #or a producer program
> myprogram | bin/post  -c gettingstarted -s http://localhost:8983/solr
> {noformat}
> The behavior of the script would be exactly the same as if I were 
> to post the request directly to Solr at the specified {{qt}}. Every 
> parameter the request handler accepts would be accepted in 
> {{-=}} format. The same things could be put into a 
> properties file called {{indexer.properties}} and be passed as a -p 
> parameter. The script would expect the following extra properties: {{zk.url}} 
> for cloud or {{solr.url}} for standalone. 
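Since the proposal above is not yet implemented, here is a rough, hypothetical sketch of how such a post script might merge {{indexer.properties}}-style defaults with command-line parameters and build the update URL to which piped stdin would be posted. The helper names ({{mergeParams}}, {{updateUrl}}) and the merging rules are assumptions for illustration, not an actual implementation:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class PostToolSketch {
    // Merge defaults loaded from an indexer.properties-style file with
    // "key=val" pairs from the command line; command-line values win.
    static Map<String, String> mergeParams(Map<String, String> fromFile, String... cliPairs) {
        Map<String, String> merged = new LinkedHashMap<>(fromFile);
        for (String pair : cliPairs) {
            int eq = pair.indexOf('=');
            merged.put(pair.substring(0, eq), pair.substring(eq + 1));
        }
        return merged;
    }

    // Build the update URL that the piped documents would be posted to.
    static String updateUrl(String solrUrl, String collection) {
        return solrUrl.replaceAll("/+$", "") + "/" + collection + "/update";
    }

    public static void main(String[] args) {
        Map<String, String> fileProps = new LinkedHashMap<>();
        fileProps.put("commit", "true");
        System.out.println(updateUrl("http://localhost:8983/solr", "gettingstarted"));
        // -> http://localhost:8983/solr/gettingstarted/update
        System.out.println(mergeParams(fileProps, "commit=false", "overwrite=true"));
    }
}
```

In a real tool the merged parameters would be appended to the URL as a query string and stdin streamed in the request body.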



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-16 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-9941:
---
Fix Version/s: (was: 6x)
   6.5

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6.5
>
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.
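The 91-executions arithmetic in the description can be sketched as a toy counting model (assuming, as in the 90-adds-then-5-DBQs example, that every DBQ in the tlog has a higher version than every uncommitted add):

```java
class ReplayCount {
    // Each DBQ is pre-applied once per uncommitted add replayed before it
    // (getDBQNewer finds it for every add), plus once for its own tlog entry.
    static int executionsPerDbq(int uncommittedAdds) {
        return uncommittedAdds + 1;
    }

    static int totalDbqExecutions(int uncommittedAdds, int dbqs) {
        return dbqs * executionsPerDbq(uncommittedAdds);
    }

    public static void main(String[] args) {
        // The 90-adds / 5-DBQs scenario from the description:
        System.out.println(executionsPerDbq(90));      // -> 91
        System.out.println(totalDbqExecutions(90, 5)); // -> 455
    }
}
```

So the 5 deletes account for 455 delete executions during one recovery, which is why the proposal is to skip the pre-emptive application during tlog replay.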






[jira] [Updated] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated SOLR-9584:
--
Fix Version/s: 6.5

> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0), 6.5
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});
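Plain JDK URI resolution illustrates why the relative form matters: against a customized context, a relative path stays under that context while the absolute path always escapes to /solr/... (the context name /mysolr/ is an invented example):

```java
import java.net.URI;

class ContextPathDemo {
    public static void main(String[] args) {
        // The admin UI page, served under a customized context /mysolr/
        URI page = URI.create("http://host:8983/mysolr/index.html");

        // Relative path: resolved under whatever context serves the UI.
        System.out.println(page.resolve("admin/info/system"));
        // -> http://host:8983/mysolr/admin/info/system

        // Absolute path: hardwired to /solr/..., breaking the custom context.
        System.out.println(page.resolve("/solr/admin/info/system"));
        // -> http://host:8983/solr/admin/info/system
    }
}
```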






[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824749#comment-15824749
 ] 

ASF subversion and git services commented on SOLR-9584:
---

Commit 5d0f90a833ed06decc2b57b307c1d4bff3c70cd0 in lucene-solr's branch 
refs/heads/branch_6x from [~yjzhou]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=5d0f90a ]

SOLR-9584: use relative URL path instead of absolute path starting from /solr

(cherry picked from commit e0b4cac)


> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[jira] [Commented] (SOLR-9584) The absolute URL path in server/solr-webapp/webapp/js/angular/services.js would make context customization not work

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9584?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824748#comment-15824748
 ] 

ASF subversion and git services commented on SOLR-9584:
---

Commit 3c5393d0787db628629ef3ced088231fc2cc26af in lucene-solr's branch 
refs/heads/branch_6x from [~janhoy]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3c5393d ]

SOLR-9584: Support Solr being proxied with another endpoint than default /solr
This closes #86 - see original commit e0b4caccd3312b011cdfbb3951ea43812486ca98

(cherry picked from commit f99c967)


> The absolute URL path in server/solr-webapp/webapp/js/angular/services.js 
> would make context customization not work
> ---
>
> Key: SOLR-9584
> URL: https://issues.apache.org/jira/browse/SOLR-9584
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: UI
>Affects Versions: 6.2
>Reporter: Yun Jie Zhou
>Assignee: Jan Høydahl
>Priority: Minor
>  Labels: patch
> Fix For: master (7.0)
>
>
> The absolute path starting from /solr in 
> server/solr-webapp/webapp/js/angular/services.js would make the context 
> customization not work.
> For example, we should use $resource('admin/info/system', {"wt":"json", 
> "_":Date.now()}); instead of $resource('/solr/admin/info/system', 
> {"wt":"json", "_":Date.now()});






[jira] [Resolved] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved LUCENE-7636.
-
Resolution: Fixed

Ok, I fixed some broken Jenkins links and old links to SVN here 
http://lucene.apache.org/core/developer.html as well, so I think we're in good 
shape now. Thanks!

> Fix broken links in lucene.apache.org site
> --
>
> Key: LUCENE-7636
> URL: https://issues.apache.org/jira/browse/LUCENE-7636
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>Priority: Minor
>
> I ran a broken links tool on lucene.apache.org site, found some broken links. 
> The scan excluded link checking of Javadoc, JIRA, localhost and 401 links 
> that need login to Apache:
> Getting links from: http://lucene.apache.org/pylucene/index.html
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/pylucene/
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/core/discussion.html
> -├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)- *FIXED*
> Finished! 93 links found. 1 broken.
> Getting links from: http://lucene.apache.org/core/developer.html
> ├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
> (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
>  (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
>  (HTTP_404)
> Finished! 73 links found. 3 broken.
> Getting links from: http://lucene.apache.org/solr/resources.html
> -└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)- *FIXED*
> Finished! 188 links found. 8 broken.
> Getting links from: http://lucene.apache.org/pylucene/features.html
> ├─BROKEN─ 
> http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction
>  (HTTP_404)
> Finished! 60 links found. 1 broken.
> Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
> ├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
> ├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
> Finished! 66 links found. 2 broken.






[JENKINS] Lucene-Solr-NightlyTests-6.x - Build # 259 - Still Unstable

2017-01-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-6.x/259/

13 tests failed.
FAILED:  org.apache.lucene.search.TestFuzzyQuery.testRandom

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([41EB7146FD5F61A6]:0)


FAILED:  junit.framework.TestSuite.org.apache.lucene.search.TestFuzzyQuery

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([41EB7146FD5F61A6]:0)


FAILED:  
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange

Error Message:
Timeout waiting for CDCR replication to complete @source_collection:shard1

Stack Trace:
java.lang.RuntimeException: Timeout waiting for CDCR replication to complete 
@source_collection:shard1
at 
__randomizedtesting.SeedInfo.seed([1470C32B33CA1FCD:C6808FC86D65B9FF]:0)
at 
org.apache.solr.cloud.BaseCdcrDistributedZkTest.waitForReplicationToComplete(BaseCdcrDistributedZkTest.java:795)
at 
org.apache.solr.cloud.CdcrReplicationDistributedZkTest.testReplicationAfterLeaderChange(CdcrReplicationDistributedZkTest.java:305)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (SOLR-9893) EasyMock/Mockito no longer works with Java 9 b148+

2017-01-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler updated SOLR-9893:

Fix Version/s: 6.4
   master (7.0)
   6.x

> EasyMock/Mockito no longer works with Java 9 b148+
> --
>
> Key: SOLR-9893
> URL: https://issues.apache.org/jira/browse/SOLR-9893
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 6.x, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Fix For: 6.x, master (7.0), 6.4
>
> Attachments: SOLR-9893.patch, SOLR-9893.patch
>
>
> EasyMock no longer works with the latest Java 9, because it uses cglib 
> behind the scenes, which tries to access a protected method inside the runtime 
> using setAccessible. This is no longer allowed by Java 9.
> Actually this is really stupid. Instead of forcefully making the protected 
> defineClass method available to the outside, it is much more correct to just 
> subclass ClassLoader (like the Lucene expressions module does).
> I tried updating to easymock/mockito, but all that does not work, approx 25 
> tests fail. The only way is to disable all Mocking tests in Java 9. The 
> underlying issue in cglib is still not solved, master's code is here: 
> https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
> As we use an old stone-aged version of mockito (1.x), a fix is not expected 
> to happen, although cglib might fix this!
> What should we do? This stupid issue prevents us from testing Java 9 with 
> Solr completely! 
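A minimal sketch of the subclass-ClassLoader approach the description recommends (this is an illustration, not the Lucene expressions module's actual code; define() is not exercised with real bytecode here):

```java
// Sketch: rather than forcing ClassLoader.defineClass open with setAccessible
// (which Java 9 rejects), a subclass may call the protected method directly.
class ByteArrayClassLoader extends ClassLoader {
    ByteArrayClassLoader(ClassLoader parent) {
        super(parent);
    }

    // defineClass is protected, so a subclass needs no reflection hack.
    Class<?> define(String name, byte[] classBytes) {
        return defineClass(name, classBytes, 0, classBytes.length);
    }

    public static void main(String[] args) throws Exception {
        ByteArrayClassLoader loader =
                new ByteArrayClassLoader(ClassLoader.getSystemClassLoader());
        // define() would need real bytecode, so it is not exercised here;
        // normal parent delegation still works as usual.
        System.out.println(loader.loadClass("java.lang.String") == String.class);
        // -> true
    }
}
```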






[JENKINS] Lucene-Solr-master-Solaris (64bit/jdk1.8.0) - Build # 1084 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Solaris/1084/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseSerialGC

3 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CollectionReloadTest

Error Message:
java.util.concurrent.TimeoutException: Could not connect to ZooKeeper 
127.0.0.1:35725 within 3 ms

Stack Trace:
org.apache.solr.common.SolrException: java.util.concurrent.TimeoutException: 
Could not connect to ZooKeeper 127.0.0.1:35725 within 3 ms
at __randomizedtesting.SeedInfo.seed([2D7AA9AC7142190A]:0)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:182)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:116)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:111)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:98)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.waitForAllNodes(MiniSolrCloudCluster.java:260)
at 
org.apache.solr.cloud.MiniSolrCloudCluster.(MiniSolrCloudCluster.java:254)
at 
org.apache.solr.cloud.SolrCloudTestCase$Builder.configure(SolrCloudTestCase.java:188)
at 
org.apache.solr.cloud.CollectionReloadTest.setupCluster(CollectionReloadTest.java:43)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:847)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.util.concurrent.TimeoutException: Could not connect to 
ZooKeeper 127.0.0.1:35725 within 3 ms
at 
org.apache.solr.common.cloud.ConnectionManager.waitForConnected(ConnectionManager.java:233)
at 
org.apache.solr.common.cloud.SolrZkClient.(SolrZkClient.java:174)
... 31 more


FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.CollectionReloadTest

Error Message:
15 threads leaked from SUITE scope at 
org.apache.solr.cloud.CollectionReloadTest: 1) Thread[id=18994, 
name=qtp999584210-18994, state=TIMED_WAITING, group=TGRP-CollectionReloadTest]  
   at sun.misc.Unsafe.park(Native Method) at 
java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215) 
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
 at 
org.eclipse.jetty.util.BlockingArrayQueue.poll(BlockingArrayQueue.java:392) 
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.idleJobPoll(QueuedThreadPool.java:563)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool.access$800(QueuedThreadPool.java:48)
 at 
org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) 
at 

[jira] [Resolved] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-16 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya resolved SOLR-9941.

   Resolution: Fixed
Fix Version/s: 6x
   master (7.0)

Will change the Fix version to 6.5 as soon as the version is available on JIRA.

> log replay redundantly (pre-)applies DBQs as if they were out of order
> --
>
> Key: SOLR-9941
> URL: https://issues.apache.org/jira/browse/SOLR-9941
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Hoss Man
>Assignee: Ishan Chattopadhyaya
> Fix For: master (7.0), 6x
>
> Attachments: SOLR-9941.hoss-test-experiment.patch, SOLR-9941.patch, 
> SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch, SOLR-9941.patch
>
>
> There's kind of an odd situation that arises when a Solr node starts up 
> (after a crash) and tries to recover from its tlog that causes deletes to be 
> redundantly & excessively applied -- at a minimum it causes really confusing 
> log messages
> * {{UpdateLog.init(...)}} creates {{TransactionLog}} instances for the most 
> recent log files found (based on numRecordsToKeep) and then builds a 
> {{RecentUpdates}} instance from them
> * Delete entries from the {{RecentUpdates}} are used to populate 2 lists:
> ** {{deleteByQueries}}
> ** {{oldDeletes}} (for deleteById).
> * Then when {{UpdateLog.recoverFromLog}} is called a {{LogReplayer}} is used 
> to replay any (uncommitted) {{TransactionLog}} entries
> ** during replay {{UpdateLog}} delegates to the UpdateRequestProcessorChain 
> for the various adds/deletes, etc...
> ** when an add makes it to {{RunUpdateProcessor}} it delegates to 
> {{DirectUpdateHandler2}}, which (independent of the fact that we're in log 
> replay) calls {{UpdateLog.getDBQNewer}} for every add, looking for any 
> "Reordered" deletes that have a version greater than the add
> *** if it finds _any_ DBQs "newer" than the document being added, it does a 
> low level {{IndexWriter.updateDocument}} and then immediately executes _all_ 
> the newer DBQs ... _once per add_
> ** these deletes are *also* still executed as part of the normal tlog replay, 
> because they are in the tlog.
> Which means if you are recovering from a tlog with 90 addDocs, followed by 5 
> DBQs, then *each* of those 5 DBQs will be executed 91 times -- and for 
> 90 of those executions, a DUH2 INFO log message will say {{"Reordered DBQs 
> detected. ..."}} even though the only reason they are out of order is because 
> Solr is deliberately applying them out of order.
> * At a minimum we should improve the log messages
> * Ideally we should stop (pre-emptively) applying these deletes during tlog 
> replay.






[jira] [Commented] (SOLR-9941) log replay redundantly (pre-)applies DBQs as if they were out of order

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824621#comment-15824621
 ] 

ASF subversion and git services commented on SOLR-9941:
---

Commit 38af094d175daebe4093782cc06e964cfc2dd14b in lucene-solr's branch 
refs/heads/master from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=38af094 ]

SOLR-9941: Moving changelog entry from 7.0.0 to 6.5.0








[jira] [Commented] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824614#comment-15824614
 ] 

ASF subversion and git services commented on SOLR-9941:
---

Commit 302ce326d5a2eee043445918fa3e3885dc003b2f in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=302ce32 ]

SOLR-9941: Clear deletes lists before log replay








[jira] [Commented] (SOLR-9941) log replay redundently (pre-)applies DBQs as if they were out of order

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824615#comment-15824615
 ] 

ASF subversion and git services commented on SOLR-9941:
---

Commit 7ef8cf7d6aad25888de4cffc4c20239694a67a45 in lucene-solr's branch 
refs/heads/branch_6x from [~ichattopadhyaya]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=7ef8cf7 ]

SOLR-9941: Adding the Optimizations section to the CHANGES.txt








[jira] [Updated] (SOLR-5944) Support updates of numeric DocValues

2017-01-16 Thread Ishan Chattopadhyaya (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-5944?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ishan Chattopadhyaya updated SOLR-5944:
---
Attachment: SOLR-5944.patch

Here's the latest patch from the jira/solr-5944 branch. Steve's Jenkins has 
been running all tests on the branch and they seem to pass fine [0].

This is very close now, and barring new issues / review comments / suggestions, 
this patch can be committed.

[0] - http://jenkins.sarowe.net/job/Solr-tests-SOLR-5944/

> Support updates of numeric DocValues
> 
>
> Key: SOLR-5944
> URL: https://issues.apache.org/jira/browse/SOLR-5944
> Project: Solr
>  Issue Type: New Feature
>Reporter: Ishan Chattopadhyaya
>Assignee: Shalin Shekhar Mangar
> Attachments: defensive-checks.log.gz, 
> demo-why-dynamic-fields-cannot-be-inplace-updated-first-time.patch, 
> DUP.patch, hoss.62D328FA1DEA57FD.fail2.txt, hoss.62D328FA1DEA57FD.fail3.txt, 
> hoss.62D328FA1DEA57FD.fail.txt, hoss.D768DD9443A98DC.fail.txt, 
> hoss.D768DD9443A98DC.pass.txt, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, SOLR-5944.patch, 
> TestStressInPlaceUpdates.eb044ac71.beast-167-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.beast-587-failure.stdout.txt, 
> TestStressInPlaceUpdates.eb044ac71.failures.tar.gz
>
>
> LUCENE-5189 introduced support for updates to numeric docvalues. It would be 
> really nice to have Solr support this.






[JENKINS-EA] Lucene-Solr-6.4-Linux (64bit/jdk-9-ea+152) - Build # 13 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.4-Linux/13/
Java: 64bit/jdk-9-ea+152 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail

Error Message:
expected:<200> but was:<404>

Stack Trace:
java.lang.AssertionError: expected:<200> but was:<404>
at 
__randomizedtesting.SeedInfo.seed([ADA40B3A0168A405:C51B3E10D1F2B6E9]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.cancelDelegationToken(TestDelegationWithHadoopAuth.java:128)
at 
org.apache.solr.security.hadoop.TestDelegationWithHadoopAuth.testDelegationTokenCancelFail(TestDelegationWithHadoopAuth.java:280)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:543)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread Andi Vajda (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824559#comment-15824559
 ] 

Andi Vajda commented on LUCENE-7636:


Fixed the broken PyLucene and JCC links in rev 1779102.

> Fix broken links in lucene.apache.org site
> --
>
> Key: LUCENE-7636
> URL: https://issues.apache.org/jira/browse/LUCENE-7636
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>Priority: Minor
>
> I ran a broken-link tool on the lucene.apache.org site and found some broken 
> links. The scan excluded link checking of Javadoc, JIRA, localhost, and 401 
> links that need a login to Apache:
> Getting links from: http://lucene.apache.org/pylucene/index.html
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/pylucene/
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/core/discussion.html
> -├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)- *FIXED*
> Finished! 93 links found. 1 broken.
> Getting links from: http://lucene.apache.org/core/developer.html
> ├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
> (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
>  (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
>  (HTTP_404)
> Finished! 73 links found. 3 broken.
> Getting links from: http://lucene.apache.org/solr/resources.html
> -└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)- *FIXED*
> Finished! 188 links found. 8 broken.
> Getting links from: http://lucene.apache.org/pylucene/features.html
> ├─BROKEN─ 
> http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction
>  (HTTP_404)
> Finished! 60 links found. 1 broken.
> Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
> ├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
> ├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
> Finished! 66 links found. 2 broken.






[jira] [Commented] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824472#comment-15824472
 ] 

Jan Høydahl commented on LUCENE-7636:
-

[~vajda] I see that some of the 404's are for pylucene, care to take a look?







[jira] [Commented] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824469#comment-15824469
 ] 

Jan Høydahl commented on LUCENE-7636:
-

For this particular scan I used 
https://github.com/stevenvachon/broken-link-checker with this command line 
{{blc --exclude "issues.apache.org" --exclude ".org/core/6_" --exclude 
".org/core/5_" --exclude ".org/core/4_" --exclude ".org/core/3_" --exclude 
".org/core/1_" --exclude ".org/solr/6_" --exclude ".org/solr/5_" --exclude 
".org/solr/4_" --exclude ".org/solr/3_" --exclude ".org/solr/1_" -r -v 
http://lucene.apache.org/}}

It works well for sites where we want to scan the whole site, but for e.g. 
Confluence or Wiki where we only want to check a sub-site, there is no 
{{--include}} argument.







[jira] [Commented] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824462#comment-15824462
 ] 

Matt Weber commented on LUCENE-7638:


[~jim.ferenczi] I have mixed feelings about that, as I can see pluses and minuses 
of both.  When I was originally working on this I essentially decided that 
everything should be passed to each path as if it was the original query.  What 
do you think [~mikemccand]?  Also, there are additional use cases that we 
handle in elasticsearch that have not made their way into Lucene yet and might 
be affected by this: boolean with cutoff frequency, prefix queries, etc.  

> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>
> The QueryBuilder creates a graph query when the underlying TokenStream 
> contains tokens with a PositionLengthAttribute greater than 1.
> These TokenStreams are in fact graphs (lattices, to be more precise) where 
> synonyms can span multiple terms. 
> Currently the graph query is built by visiting all the paths of the graph 
> TokenStream. For instance, if you have a synonym like "ny, new york" and you 
> search for "new york city", the query builder would produce two paths:
> "new york city", "ny city"
> This can quickly explode when the number of multi-term synonyms increases. 
> The query "ny ny" for instance would produce 4 paths, and so on.
> For boolean queries with should or must clauses it should be more efficient 
> to build a boolean query that merges all the intersections in the graph. So 
> instead of "new york city", "ny city" we could produce:
> "+((+new +york) ny) +city"
> The attached patch is a proposal to do that instead of the all-paths solution.
> The patch transforms multi-term synonyms into a graph query for each 
> intersection in the graph. This is not done in this patch, but we could also 
> create a specialized query that gives equivalent scores to multi-term 
> synonyms, like the SynonymQuery does for single-term synonyms.
> For phrase queries this patch does not change the current behavior, but we 
> could also use the new method to create an optimized graph SpanQuery.
> [~mattweber] I think this patch could optimize a lot of cases where multiple 
> multi-term synonyms are present in a single request. Could you take a look?
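The path explosion the issue describes can be sketched as a Cartesian product over the token graph's slots -- a minimal plain-Python model, not Lucene's API (`enumerate_paths` and the slot representation are assumptions for the sketch): enumerating every path multiplies the alternatives at each slot, so "ny ny" yields 2 x 2 = 4 paths, while the merged boolean form the patch proposes grows linearly with the number of slots.

```python
from itertools import product

# Toy model of the all-paths enumeration described above -- illustrative
# only, not Lucene's API. Each slot holds its alternatives; a multi-term
# synonym is an alternative spanning several terms.
def enumerate_paths(slots):
    return [" ".join(" ".join(alt) for alt in path)
            for path in product(*slots)]

ny = [("new", "york"), ("ny",)]            # synonym: "ny" <-> "new york"

print(enumerate_paths([ny, [("city",)]]))  # ['new york city', 'ny city']
print(len(enumerate_paths([ny, ny])))      # 4 -- "ny ny" expands to 2 x 2 paths
```

Each additional occurrence of a 2-way synonym doubles the path count, which is the combinatorial growth that motivates merging the intersections instead.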



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3783 - Still Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3783/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at 
__randomizedtesting.SeedInfo.seed([7150ECE2BD11873D:F904D33813EDEAC5]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7637) TermInSetQuery should require that all terms come from the same field

2017-01-16 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824439#comment-15824439
 ] 

Michael McCandless commented on LUCENE-7637:


+1

I just found a small typo: {{// ne need to check}} --> {{// no need to check}}.

> TermInSetQuery should require that all terms come from the same field
> -
>
> Key: LUCENE-7637
> URL: https://issues.apache.org/jira/browse/LUCENE-7637
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7637.patch
>
>
> Spin-off from LUCENE-7624. Requiring that all terms are in the same field 
> would make things simpler and more consistent with other queries. It might 
> also make it easier to improve this query in the future since other similar 
> queries like AutomatonQuery also work on the per-field basis. The only 
> downside is that querying terms across multiple fields would be less 
> efficient, but this does not seem to be a common use-case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-Linux (32bit/jdk1.8.0_112) - Build # 18787 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Linux/18787/
Java: 32bit/jdk1.8.0_112 -server -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
at __randomizedtesting.SeedInfo.seed([2A532E009DD49072:A20711DA3328FD8A]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 

[jira] [Commented] (LUCENE-7631) Enforce javac warnings

2017-01-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824417#comment-15824417
 ] 

Uwe Schindler commented on LUCENE-7631:
---

Yes on Solr your change is not enabled: 
https://github.com/apache/lucene-solr/blob/master/solr/common-build.xml#L30

We should also review Solr (maybe in a separate issue).

> Enforce javac warnings
> --
>
> Key: LUCENE-7631
> URL: https://issues.apache.org/jira/browse/LUCENE-7631
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Mike Drob
> Attachments: LUCENE-7631.patch
>
>
> Robert's comment on LUCENE-3973 suggested to take an incremental approach to 
> static analysis and leverage the java compiler warnings. I think this is easy 
> to do and is a reasonable change to make to protect the code base for the 
> future.
> We currently have many fewer warnings than we did a year or two years ago and 
> should ensure that we do not slide backwards.






[jira] [Commented] (LUCENE-7631) Enforce javac warnings

2017-01-16 Thread Uwe Schindler (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824415#comment-15824415
 ] 

Uwe Schindler commented on LUCENE-7631:
---

Thanks Mike!
I am for using this patch. Robert's suggestion was to enable "all" warnings, 
but IMHO this is a bad idea, because if somebody compiles with a later Java 
version, the build may suddenly fail (because a later version of the compiler 
added a new warning type).

I am not sure if the warning exclusions are really needed, because we no longer 
have the general {{-Xlint}}. But it's good to have them listed!

The only downside of this patch is that we no longer get any warnings displayed 
that are currently disabled (rawtypes, unchecked). So we should fix them asap 
(in a separate issue).

BTW: Maybe we can enable rawtypes and unchecked errors earlier in Lucene and 
leave them disabled in Solr. As far as I remember we already have a separate 
warning setting for Solr. This may be the reason why Solr does not show any 
problems!?

> Enforce javac warnings
> --
>
> Key: LUCENE-7631
> URL: https://issues.apache.org/jira/browse/LUCENE-7631
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Mike Drob
> Attachments: LUCENE-7631.patch
>
>






[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824411#comment-15824411
 ] 

Erick Erickson commented on SOLR-9906:
--

Beasting after this latest push succeeded 100 times out of 100. Previously it 
failed for me 21/100 times.
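For reference, the beasting described here amounts to running the same test class repeatedly and counting failures. A minimal sketch of that loop (a simplified stand-in for the beasting script used in this thread, not the actual script; the `ant` invocation in the comment is the kind of command you would pass):

```shell
# beast: run a command N times and report how often it failed.
# A simplified stand-in for the beasting script referenced in this thread;
# a real run would look like: beast 100 ant test -Dtestcase=PeerSyncReplicationTest
beast() {
  runs="$1"; shift
  failures=0
  i=1
  while [ "$i" -le "$runs" ]; do
    # Discard test output; count non-zero exits as failures.
    "$@" > /dev/null 2>&1 || failures=$((failures + 1))
    i=$((i + 1))
  done
  echo "$failures/$runs runs failed"
}

# Demo with a command that always succeeds:
beast 5 true
# prints "0/5 runs failed"
```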

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync from recovery via replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node does 
> not go into replication recovery after the leader failure, the contents of 
> {{replication.properties}} should not change 






Re: PeerSyncReplicationTest failure

2017-01-16 Thread Erick Erickson
100 beast runs and no failures so this looks fixed by Alan's latest
SOLR-9906 push.

On Mon, Jan 16, 2017 at 9:17 AM, Erick Erickson  wrote:
> This is probably SOLR-9906 right? I'll go start my beasting from
> yesterday on a new pull.
>
> On Sun, Jan 15, 2017 at 8:11 PM, Erick Erickson  
> wrote:
>> Pushkar:
>>
>> Yes, PeerSynchReplicationTest. I'm getting 21/100  failures when
>> beasting on 6x so it's not a trunk-only issue. The script I was using
>> is Mark Miller's "The best Lucene / Solr beasting script in the world.
>> TM." here: https://gist.github.com/markrmiller/dbdb792216dc98b018ad
>>
>> Here's the link to the build:
>> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3781/
>>
>> The "Console output" link won't display much, but the "Skipping 10,574
>> KB.. Full Log" link should lead you to the full output.
>>
>> My only real concern here is to determine whether this is an
>> underlying problem or a test issue for the 6.4 release.
>>
>> Thanks!
>> Erick
>>
>> On Sun, Jan 15, 2017 at 6:26 PM, Pushkar Raste  
>> wrote:
>>> Erick,
>>> Is this PeerSyncTest or PeerSyncReplicationTest?
>>>
>>> Can you send me link to Jenkins build logs(if this is happening on
>>> Jenkins).
>>>
>>> I recently sent a patch to improve the validation check (in
>>> PeerSyncReplicationTest) used to figure out whether a node successfully
>>> recovered via PeerSync. Not sure if the change was made only to trunk or if it
>>> was applied to the 6.x branch as well.
>>>
>>>
>>> On Jan 15, 2017 4:12 PM, "Erick Erickson"  wrote:

 I was wondering about the failures and tried to beast it on my Pro on
 trunk which fails first time, every time with a NoSuchMethodError in
 Lucene, see below.

 I was wondering whether it would be a bad idea to release 6.4 with the
 PeerSyncReplicationTest failure that shows up, but I didn't really check whether
 it was on trunk or 6.x. Of course, if this is only on trunk then it's
 irrelevant for 6.x.

 Beasting 6x now.

 2> 206460 ERROR (coreCloseExecutor-64-thread-1) [n:127.0.0.1:51661_mx_ni c:collection1 s:shard1 r:core_node1 x:collection1] o.a.s.u.DirectUpdateHandler2 Error in final commit
 [junit4]   2> java.lang.NoSuchMethodError: org.apache.lucene.util.packed.DirectWriter.getInstance(Lorg/apache/lucene/store/IndexOutput;JI)Lorg/apache/lucene/util/packed/DirectWriter;
 [junit4]   2> at org.apache.lucene.util.packed.DirectMonotonicWriter.flush(DirectMonotonicWriter.java:91)
 [junit4]   2> at org.apache.lucene.util.packed.DirectMonotonicWriter.finish(DirectMonotonicWriter.java:127)
 [junit4]   2> at org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.addTermsDict(Lucene70DocValuesConsumer.java:478)
 [junit4]   2> at org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.doAddSortedField(Lucene70DocValuesConsumer.java:437)
 [junit4]   2> at org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.addSortedSetField(Lucene70DocValuesConsumer.java:571)
 [junit4]   2> at org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedSetField(PerFieldDocValuesFormat.java:129)
 [junit4]   2> at org.apache.lucene.index.SortedSetDocValuesWriter.flush(SortedSetDocValuesWriter.java:221)
 [junit4]   2> at org.apache.lucene.index.DefaultIndexingChain.writeDocValues(DefaultIndexingChain.java:248)
 [junit4]   2> at org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:132)
 [junit4]   2> at org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:444)
 [junit4]   2> at org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:539)
 [junit4]   2> at org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:653)
 [junit4]   2> at org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3001)
 [junit4]   2> at org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3211)
 [junit4]   2> at org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3174)
 [junit4]   2> at org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:792)
 [junit4]   2> at org.apache.solr.update.DefaultSolrCoreState.closeIndexWriter(DefaultSolrCoreState.java:88)
 [junit4]   2> at org.apache.solr.update.DefaultSolrCoreState.close(DefaultSolrCoreState.java:379)


[jira] [Created] (SOLR-9971) Parameterise where solr creates its console and gc log files

2017-01-16 Thread Jinesh Choksi (JIRA)
Jinesh Choksi created SOLR-9971:
---

 Summary: Parameterise where solr creates its console and gc log 
files
 Key: SOLR-9971
 URL: https://issues.apache.org/jira/browse/SOLR-9971
 Project: Solr
  Issue Type: Improvement
  Security Level: Public (Default Security Level. Issues are Public)
  Components: scripts and tools
Affects Versions: 6.3
Reporter: Jinesh Choksi
Priority: Minor


In the $SOLR_INSTALL_DIR/bin/solr script, the locations where the solr_gc.log and 
solr-$SOLR_PORT-console.log files are created are hard-wired to be inside the 
$SOLR_LOGS_DIR folder due to the following lines of code:

* {code}
GC_LOG_OPTS+=("$gc_log_flag:$SOLR_LOGS_DIR/solr_gc.log" -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M)
{code}

* {code}
nohup "$JAVA" "${SOLR_START_OPTS[@]}" $SOLR_ADDL_ARGS -Dsolr.log.muteconsole \
"-XX:OnOutOfMemoryError=$SOLR_TIP/bin/oom_solr.sh $SOLR_PORT $SOLR_LOGS_DIR" \
-jar start.jar "${SOLR_JETTY_CONFIG[@]}" \
1>"$SOLR_LOGS_DIR/solr-$SOLR_PORT-console.log" 2>&1 & echo $! > "$SOLR_PID_DIR/solr-$SOLR_PORT.pid"
{code}

Would it be possible to arrange for another two ENVIRONMENT variables to be 
made available which allow us to control where these two files are created?

e.g. SOLR_GC_LOG + SOLR_CONSOLE_LOG

The use case behind this request is that it is useful to keep gc and console 
logs separate from the application logs because there are different archival / 
ingestion / processing requirements for each.
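A minimal sketch of how the requested parameterisation could look in bin/solr, assuming the SOLR_GC_LOG and SOLR_CONSOLE_LOG names proposed above (these variables do not exist in the script today; this is the requested behavior, not current behavior), defaulting to the current hard-wired locations:

```shell
# Hypothetical sketch: honor SOLR_GC_LOG / SOLR_CONSOLE_LOG if the caller
# sets them, otherwise fall back to the current hard-wired locations.
# SOLR_GC_LOG and SOLR_CONSOLE_LOG are the names proposed in this issue.
SOLR_LOGS_DIR="${SOLR_LOGS_DIR:-/var/solr/logs}"
SOLR_PORT="${SOLR_PORT:-8983}"

SOLR_GC_LOG="${SOLR_GC_LOG:-$SOLR_LOGS_DIR/solr_gc.log}"
SOLR_CONSOLE_LOG="${SOLR_CONSOLE_LOG:-$SOLR_LOGS_DIR/solr-$SOLR_PORT-console.log}"

# The existing lines would then reference the variables instead of the
# hard-wired paths, e.g.:
#   GC_LOG_OPTS+=("$gc_log_flag:$SOLR_GC_LOG" ...)
#   ... 1>"$SOLR_CONSOLE_LOG" 2>&1 ...
echo "$SOLR_GC_LOG"
echo "$SOLR_CONSOLE_LOG"
```

With this shape, `SOLR_CONSOLE_LOG=/data/console/solr.log bin/solr start` would redirect only the console log while everything else stays where it is today.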






Re: Nightly Builds wiki page sadly out of date

2017-01-16 Thread Erick Erickson
Excellent, thanks!

Erick

On Mon, Jan 16, 2017 at 9:36 AM, Steve Rowe  wrote:
> I don’t think it’s obsolete, just out of date.
>
> I replaced 4.x with 6.x in most of the links and they all worked.
>
> Similarly s/trunk/master/ in links worked.
>
> I’ll go update now.
>
> --
> Steve
> www.lucidworks.com
>
>> On Jan 15, 2017, at 11:27 PM, Erick Erickson  wrote:
>>
>> https://wiki.apache.org/solr/NightlyBuilds
>>
>> The "trunk" version is 5x! Where should we be pointing this now? Or
>> should we delete it? I've marked it as Obsolete while we decide.
>>
>> Erick
>>



[jira] [Comment Edited] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824337#comment-15824337
 ] 

Jim Ferenczi edited comment on LUCENE-7638 at 1/16/17 5:44 PM:
---

{quote}
Maybe TermAutomatonQuery would be a good fit for that problem?
{quote}


For a pure phrase query it's a good fit because it's a proximity query, but for 
boolean queries the problem is different. We cannot build the 
TermAutomatonQuery directly; first we need to find the start and end state of 
each multi-term synonym in the graph. That's what the attached patch does 
lazily: for each intersection point it creates a multi-term synonym query. 
Currently the multi-term synonym query is a boolean query, but we could change 
the logic and use the TermAutomatonQuery instead, or even create a PhraseQuery 
for each path in the multi-term synonym. This patch also handles nested 
multi-term synonyms, which makes the detection of intersection points harder. 
The bottom line is that if we are able to extract the multi-term synonyms from 
the graph then we can choose more easily how we want to search and score these 
inner graphs. Does this make sense?



> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>
> The QueryBuilder creates a graph query when the underlying TokenStream 
> contains tokens with a PositionLengthAttribute greater than 1.
> These TokenStreams are in fact graphs (lattices, to be more precise) where 
> synonyms can span multiple terms. 
> Currently the graph query is built by visiting all the paths of the graph 
> TokenStream. For instance, if you have a synonym like "ny, new york" and you 
> search for "new york city", the query builder would produce two paths:
> "new york city", "ny city"
> This can quickly explode when the number of multi-term synonyms increases. 
> The query "ny ny" for instance would produce 4 paths, and so on.
> For boolean queries with should or must clauses it should be more efficient 
> to build a boolean query that merges all the intersections in the graph. So 
> instead of "new york city", "ny city" we could produce:
> "+((+new +york) ny) +city"
> The attached patch is a proposal to do that instead of the all-paths solution.
> The patch transforms multi-term synonyms into a graph query for each 
> intersection in the graph. This is not done in this patch, but we could also 
> create a specialized query that gives equivalent scores to multi-term 
> synonyms, like the SynonymQuery does for single-term synonyms.
> For phrase queries this patch does not change the current behavior, but we 
> could also use the new method to create an optimized graph SpanQuery.
> [~mattweber] I think this patch could optimize a lot of cases where multiple 
> multi-term synonyms are present in a single request. Could you take a look?






[jira] [Commented] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824358#comment-15824358
 ] 

Jim Ferenczi commented on LUCENE-7638:
--

[~mattweber] I don't think we lose minimum should match support. It will be 
different, but interestingly it would also solve some problems. For instance, 
with the all-paths solution and a synonym like "ny, new york" with a minimum 
should match of 1, searching for "ny" would not return documents matching 
"new york". With the proposed solution each multi-term synonym is considered a 
single clause, so "ny" and "new york" each count as 1.
I like the finite-strings solution because expressing the minimum should match 
as a percentage gives you correct hits. That is great, though it requires 
duplicating a lot of terms, so I wonder whether it is something we should really 
target. By considering each multi-term synonym as one clause we could simplify 
the problem and produce a more optimized query.

> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>






[jira] [Commented] (LUCENE-7631) Enforce javac warnings

2017-01-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824347#comment-15824347
 ] 

Mike Drob commented on LUCENE-7631:
---

Yes, the build passes for me with only the two additional changes in 
WordDictionary and SimpleServer.

Warnings for {{-Xlint:-auxiliaryclass -Xlint:-deprecation -Xlint:-rawtypes 
-Xlint:-serial -Xlint:-unchecked}} are all disabled. Each of those causes a 
_lot_ of errors that I'd like to see eventually followed up on. The auxiliary 
class warnings are the easiest of those, but still enough work that I felt like 
it should be a separate task.

I also have a sneaking suspicion that this only affects Lucene and that Solr is 
somehow ignoring it, but I couldn't find anything to confirm that.
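For illustration, the warning configuration described above boils down to a javac invocation of roughly the following shape. This is a command-line sketch only: the real settings live in the ant build files, the `-Werror` flag is an assumption about how the patch makes warnings fatal (the patch itself is not shown here), and the source path is made up.

```shell
# Sketch of the javac flag set implied by the patch (illustrative only):
# all warnings reported and fatal, except the five categories listed above,
# which remain disabled for now pending follow-up work.
javac -Werror \
      -Xlint:-auxiliaryclass -Xlint:-deprecation -Xlint:-rawtypes \
      -Xlint:-serial -Xlint:-unchecked \
      -d build/classes SomeClass.java   # hypothetical paths
```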

> Enforce javac warnings
> --
>
> Key: LUCENE-7631
> URL: https://issues.apache.org/jira/browse/LUCENE-7631
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Mike Drob
> Attachments: LUCENE-7631.patch
>
>






Re: Nightly Builds wiki page sadly out of date

2017-01-16 Thread Steve Rowe
I don’t think it’s obsolete, just out of date.

I replaced 4.x with 6.x in most of the links and they all worked.

Similarly s/trunk/master/ in links worked.

I’ll go update now.

--
Steve
www.lucidworks.com

> On Jan 15, 2017, at 11:27 PM, Erick Erickson  wrote:
> 
> https://wiki.apache.org/solr/NightlyBuilds
> 
> The "trunk" version is 5x! Where should we be pointing this now? Or
> should we delete it? I've marked it as Obsolete while we decide.
> 
> Erick
> 



[jira] [Commented] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Jim Ferenczi (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824337#comment-15824337
 ] 

Jim Ferenczi commented on LUCENE-7638:
--

For pure phrase query it's a good fit because it's a proximity query but for 
boolean queries the problem is different. We cannot build the 
TermAutomatonQuery directly, first we need to find the start and end state of 
each multi-term synonyms in the graph. That's what the attached patch is doing 
lazily, for each intersection point it creates a multi-term synonym query. 
Currently the multi-term synonym query is a boolean query but we could change 
the logic and use the TermAutomatonQuery instead or even create a PhaseQuery 
for each path in the multi-term synonym. This patch also handles nested 
multi-term synonyms which makes the detection of intersection points harder. 
Bottom point is that if we are able to extract the multi-term synonyms of the 
graph then we can choose more easily how we want to search and score these 
inner graph. Does this makes sense ?

> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>
> The QueryBuilder creates a graph query when the underlying TokenStream 
> contains tokens with a PositionLengthAttribute greater than 1.
> These TokenStreams are in fact graphs (lattices, to be more precise) where 
> synonyms can span multiple terms. 
> Currently the graph query is built by visiting all the paths of the graph 
> TokenStream. For instance, if you have a synonym like "ny, new york" and you 
> search for "new york city", the query builder would produce two paths:
> "new york city", "ny city"
> This can quickly explode when the number of multi-term synonyms increases. 
> The query "ny ny" for instance would produce 4 paths, and so on.
> For boolean queries with should or must clauses it should be more efficient 
> to build a boolean query that merges all the intersections in the graph. So 
> instead of "new york city", "ny city" we could produce:
> "+((+new +york) ny) +city"
> The attached patch is a proposal to do that instead of the all-paths solution.
> The patch transforms multi-term synonyms into a graph query for each 
> intersection in the graph. This is not done in this patch, but we could also 
> create a specialized query that gives equivalent scores to multi-term 
> synonyms, like the SynonymQuery does for single-term synonyms.
> For phrase queries this patch does not change the current behavior, but we could 
> also use the new method to create an optimized graph SpanQuery.
> [~mattweber] I think this patch could optimize a lot of cases where multiple 
> multi-term synonyms are present in a single request. Could you take a look?
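The combinatorial explosion described in the issue is easy to see with a small stdlib-Java sketch (not Lucene code; the {{expand()}} synonym table is a stand-in for the real TokenStream graph): each token with a multi-term synonym doubles the number of paths.

```java
import java.util.ArrayList;
import java.util.List;

public class SynonymPathExplosion {
  // Each input token expands to its synonym alternatives.
  static List<String> expand(String token) {
    if (token.equals("ny")) {
      return List.of("ny", "new york");
    }
    return List.of(token);
  }

  // Enumerate every path through the synonym graph, as the current
  // all-paths query building effectively does.
  static List<String> paths(String[] tokens, int i) {
    if (i == tokens.length) {
      return List.of("");
    }
    List<String> rest = paths(tokens, i + 1);
    List<String> out = new ArrayList<>();
    for (String alt : expand(tokens[i])) {
      for (String r : rest) {
        out.add(r.isEmpty() ? alt : alt + " " + r);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<String> p = paths(new String[] {"ny", "ny"}, 0);
    System.out.println(p.size()); // 4
    System.out.println(p); // [ny ny, ny new york, new york ny, new york new york]
  }
}
```

With n such tokens the all-paths approach produces 2^n queries, while the proposed intersection merging keeps the query size linear in the number of synonyms.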



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Matt Weber (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824319#comment-15824319
 ] 

Matt Weber commented on LUCENE-7638:


I think the problem here is that we lose minimum-should-match support, as that 
is applied AFTER query generation by building a new boolean query. The same goes 
for phrase slop, even though that would not be affected by this patch. If we 
can move this logic into the rewrite method of GraphQuery, then we could take all 
that information into consideration to build a more efficient query.




Re: PeerSyncReplicationTest failure

2017-01-16 Thread Erick Erickson
This is probably SOLR-9906, right? I'll go start my beasting from
yesterday on a new pull.

On Sun, Jan 15, 2017 at 8:11 PM, Erick Erickson  wrote:
> Pushkar:
>
> Yes, PeerSyncReplicationTest. I'm getting 21/100 failures when
> beasting on 6x so it's not a trunk-only issue. The script I was using
> is Mark Miller's "The best Lucene / Solr beasting script in the world.
> TM." here: https://gist.github.com/markrmiller/dbdb792216dc98b018ad
>
> Here's the link to the build:
> https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3781/
>
> The "Console output" link won't display much, but the "Skipping 10,574
> KB.. Full Log" link should lead you to the full output.
>
> My only real concern here is to determine whether this is an
> underlying problem or a test issue for the 6.4 release.
>
> Thanks!
> Erick
>
> On Sun, Jan 15, 2017 at 6:26 PM, Pushkar Raste  
> wrote:
>> Erick,
>> Is this PeerSyncTest or PeerSyncReplicationTest?
>>
>> Can you send me link to Jenkins build logs(if this is happening on
>> Jenkins).
>>
>> I recently sent a patch to improve the validation check (in
>> PeerSyncReplicationTest) made in order to figure out whether a node
>> successfully recovered via PeerSync. Not sure if the change was made only to
>> trunk or if it was applied to the 6.x branch as well.
>>
>>
>> On Jan 15, 2017 4:12 PM, "Erick Erickson"  wrote:
>>>
>>> I was wondering about the failures and tried to beast it on my Pro on
>>> trunk which fails first time, every time with a NoSuchMethodError in
>>> Lucene, see below.
>>>
>>> I was wondering whether it would be a bad idea to release 6.4 with the
>>> PeerSync test failure that shows up, but didn't really look at whether it
>>> was trunk or 6x. Of course if this is only on trunk then it's
>>> irrelevant for 6x.
>>>
>>> Beasting 6x now.
>>>
>>> 2> 206460 ERROR (coreCloseExecutor-64-thread-1)
>>> [n:127.0.0.1:51661_mx_ni c:collection1 s:shard1 r:core_node1
>>> x:collection1] o.a.s.u.DirectUpdateHandler2 Error in final commit
>>>[junit4]   2> java.lang.NoSuchMethodError:
>>>
>>> org.apache.lucene.util.packed.DirectWriter.getInstance(Lorg/apache/lucene/store/IndexOutput;JI)Lorg/apache/lucene/util/packed/DirectWriter;
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.util.packed.DirectMonotonicWriter.flush(DirectMonotonicWriter.java:91)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.util.packed.DirectMonotonicWriter.finish(DirectMonotonicWriter.java:127)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.addTermsDict(Lucene70DocValuesConsumer.java:478)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.doAddSortedField(Lucene70DocValuesConsumer.java:437)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.codecs.lucene70.Lucene70DocValuesConsumer.addSortedSetField(Lucene70DocValuesConsumer.java:571)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsWriter.addSortedSetField(PerFieldDocValuesFormat.java:129)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.index.SortedSetDocValuesWriter.flush(SortedSetDocValuesWriter.java:221)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.index.DefaultIndexingChain.writeDocValues(DefaultIndexingChain.java:248)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.index.DefaultIndexingChain.flush(DefaultIndexingChain.java:132)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.index.DocumentsWriterPerThread.flush(DocumentsWriterPerThread.java:444)
>>>[junit4]   2> at
>>> org.apache.lucene.index.DocumentsWriter.doFlush(DocumentsWriter.java:539)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.index.DocumentsWriter.flushAllThreads(DocumentsWriter.java:653)
>>>[junit4]   2> at
>>>
>>> org.apache.lucene.index.IndexWriter.prepareCommitInternal(IndexWriter.java:3001)
>>>[junit4]   2> at
>>> org.apache.lucene.index.IndexWriter.commitInternal(IndexWriter.java:3211)
>>>[junit4]   2> at
>>> org.apache.lucene.index.IndexWriter.commit(IndexWriter.java:3174)
>>>[junit4]   2> at
>>>
>>> org.apache.solr.update.DirectUpdateHandler2.closeWriter(DirectUpdateHandler2.java:792)
>>>[junit4]   2> at
>>>
>>> org.apache.solr.update.DefaultSolrCoreState.closeIndexWriter(DefaultSolrCoreState.java:88)
>>>[junit4]   2> at
>>>
>>> org.apache.solr.update.DefaultSolrCoreState.close(DefaultSolrCoreState.java:379)
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-master - Build # 1213 - Still Unstable

2017-01-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-master/1213/

8 tests failed.
FAILED:  
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig

Error Message:
The max direct memory is likely too low.  Either increase it (by adding 
-XX:MaxDirectMemorySize=g -XX:+UseLargePages to your containers startup 
args) or disable direct allocation using 
solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
are putting the block cache on the heap, your java heap size might not be large 
enough. Failed allocating ~134.217728 MB.

Stack Trace:
java.lang.RuntimeException: The max direct memory is likely too low.  Either 
increase it (by adding -XX:MaxDirectMemorySize=g -XX:+UseLargePages to 
your containers startup args) or disable direct allocation using 
solr.hdfs.blockcache.direct.memory.allocation=false in solrconfig.xml. If you 
are putting the block cache on the heap, your java heap size might not be large 
enough. Failed allocating ~134.217728 MB.
at 
__randomizedtesting.SeedInfo.seed([8F4A05739BAAD9D9:78E5CC58662333F2]:0)
at 
org.apache.solr.core.HdfsDirectoryFactory.createBlockCache(HdfsDirectoryFactory.java:307)
at 
org.apache.solr.core.HdfsDirectoryFactory.getBlockDirectoryCache(HdfsDirectoryFactory.java:283)
at 
org.apache.solr.core.HdfsDirectoryFactory.create(HdfsDirectoryFactory.java:223)
at 
org.apache.solr.core.HdfsDirectoryFactoryTest.testInitArgsOrSysPropConfig(HdfsDirectoryFactoryTest.java:108)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2017-01-16 Thread Artem Lukanin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824303#comment-15824303
 ] 

Artem Lukanin commented on LUCENE-7398:
---

The patch has a bug. The following sentence is not found, because the 
look-ahead is too greedy: "the system of claim 16 further comprising a user 
location unit adapted to determine user location based on location information 
received from the user's device"

{code:java}
  @Test
  public void testNestedOrQueryLookAhead() throws IOException {
    SpanNearQuery snq = new SpanNearQuery.Builder(FIELD, SpanNearQuery.MatchNear.ORDERED_LOOKAHEAD)
        .addClause(new SpanOrQuery(
            new SpanTermQuery(new Term(FIELD, "user")),
            new SpanTermQuery(new Term(FIELD, "ue"))))
        .addClause(new SpanNearQuery.Builder(FIELD, SpanNearQuery.MatchNear.ORDERED_LOOKAHEAD)
            .setSlop(3)
            .addClause(new SpanTermQuery(new Term(FIELD, "location")))
            .addClause(new SpanTermQuery(new Term(FIELD, "information")))
            .build())
        .build();

    Spans spans = snq.createWeight(searcher, false)
        .getSpans(searcher.getIndexReader().leaves().get(0), SpanWeight.Postings.POSITIONS);
    assertEquals(6, spans.advance(0));
    assertEquals(Spans.NO_MORE_DOCS, spans.nextDoc());
  }
{code}

The fix is simple: add an additional check inside 
shrinkToDecreaseSlop():
{code:java}
  /** The subSpans are ordered in the same doc and matchSlop is too big.
   * Try and decrease the slop by calling nextStartPosition() on all subSpans
   * except the last one, in reverse order.
   * Return true iff an ordered match was found with small enough slop.
   */
  private boolean shrinkToDecreaseSlop() throws IOException {
    int lastStart = subSpans[subSpans.length - 1].startPosition();

    for (int i = subSpans.length - 2; i >= 1; i--) { // intermediate spans for subSpans.length >= 3
      Spans prevSpans = subSpans[i];
      int prevStart = prevSpans.startPosition();
      int prevEnd = prevSpans.endPosition();
      while (true) { // Advance prevSpans until it is after (lastStart, lastEnd) or the slop increases.
        if (prevSpans.nextStartPosition() == NO_MORE_POSITIONS) {
          oneExhaustedInCurrentDoc = true;
          break; // Check remaining subSpans for final match in current doc
        } else {
          int ppEnd = prevSpans.endPosition();
          if (ppEnd > lastStart) { // no more ordered
            break; // Check remaining subSpans.
          } else { // prevSpans still before lastStart
            int ppStart = prevSpans.startPosition();
            int slopIncrease = (prevEnd - prevStart) - (ppEnd - ppStart); // span length decrease is slop increase
            if (slopIncrease > 0) {
              break; // Check remaining subSpans.
            } else { // slop did not increase
              prevStart = ppStart;
              prevEnd = ppEnd;
              matchSlop += slopIncrease;
            }
          }
        }
      }
      lastStart = prevStart;
    }

    while (true) { // for subSpans[0] only the end position influences the match slop.
      int prevEnd = subSpans[0].endPosition();
      if (subSpans[0].nextStartPosition() == NO_MORE_POSITIONS) {
        oneExhaustedInCurrentDoc = true;
        break;
      }
      int ppEnd = subSpans[0].endPosition();
      if (ppEnd > lastStart) { // no more ordered
        break;
      }
      int slopIncrease = prevEnd - ppEnd;
      if (slopIncrease > 0) {
        break;
      }
      // slop did not increase:
      matchStart = subSpans[0].startPosition();
      matchSlop += slopIncrease;

      // FIX STARTS
      if (matchSlop <= allowedSlop) {
        break;
      }
      // FIX ENDS
    }

    firstSubSpansAfterMatch = true;
    boolean match = matchSlop <= allowedSlop;
    return match; // ordered and allowed slop
  }
{code}

Sorry for not providing a new patch. I'm on a previous version of Lucene.

> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398-20160924.patch, 
> LUCENE-7398-20160925.patch, LUCENE-7398.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate 

[jira] [Commented] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824285#comment-15824285
 ] 

Erik Hatcher commented on LUCENE-7636:
--

[~janhoy] - out of curiosity, what tool did you use for this link checking?

> Fix broken links in lucene.apache.org site
> --
>
> Key: LUCENE-7636
> URL: https://issues.apache.org/jira/browse/LUCENE-7636
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>Priority: Minor
>
> I ran a broken-links tool on the lucene.apache.org site and found some broken 
> links. The scan excluded link checking of Javadoc, JIRA, localhost, and 401 
> links that need a login to Apache:
> Getting links from: http://lucene.apache.org/pylucene/index.html
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/pylucene/
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/core/discussion.html
> -├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)- *FIXED*
> Finished! 93 links found. 1 broken.
> Getting links from: http://lucene.apache.org/core/developer.html
> ├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
> (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
>  (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
>  (HTTP_404)
> Finished! 73 links found. 3 broken.
> Getting links from: http://lucene.apache.org/solr/resources.html
> -└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)- *FIXED*
> Finished! 188 links found. 8 broken.
> Getting links from: http://lucene.apache.org/pylucene/features.html
> ├─BROKEN─ 
> http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction
>  (HTTP_404)
> Finished! 60 links found. 1 broken.
> Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
> ├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
> ├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
> Finished! 66 links found. 2 broken.






[jira] [Commented] (SOLR-9893) EasyMock/Mockito no longer works with Java 9 b148+

2017-01-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824282#comment-15824282
 ] 

David Smiley commented on SOLR-9893:


Awesome; thanks Uwe!

> EasyMock/Mockito no longer works with Java 9 b148+
> --
>
> Key: SOLR-9893
> URL: https://issues.apache.org/jira/browse/SOLR-9893
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: Tests
>Affects Versions: 6.x, master (7.0)
>Reporter: Uwe Schindler
>Assignee: Uwe Schindler
>Priority: Blocker
> Attachments: SOLR-9893.patch, SOLR-9893.patch
>
>
> EasyMock does not work anymore with the latest Java 9, because behind the 
> scenes it uses cglib, which tries to access a protected method inside the 
> runtime using setAccessible. This is no longer allowed by Java 9.
> Actually this is really stupid. Instead of forcefully making the protected 
> defineClass method available to the outside, it is much more correct to just 
> subclass ClassLoader (like the Lucene expressions module does).
> I tried updating easymock/mockito, but that does not work; approx. 25 
> tests fail. The only way is to disable all mocking tests on Java 9. The 
> underlying issue in cglib is still not solved; master's code is here: 
> https://github.com/cglib/cglib/blob/master/cglib/src/main/java/net/sf/cglib/core/ReflectUtils.java#L44-L62
> As we use a stone-aged version of mockito (1.x), a fix is not expected 
> to happen, although cglib might fix this!
> What should we do? This stupid issue prevents us from testing Solr 
> completely with Java 9!
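For reference, the ClassLoader-subclassing approach mentioned in the description needs no setAccessible at all, since defineClass is protected and therefore directly available to subclasses. A minimal sketch (InMemoryClassLoader is a hypothetical name, not the expressions module's actual class):

```java
public class InMemoryClassLoader extends ClassLoader {
  public InMemoryClassLoader(ClassLoader parent) {
    super(parent);
  }

  // The protected ClassLoader.defineClass is directly callable from a
  // subclass, so no setAccessible hack on JDK internals is needed.
  public Class<?> define(String name, byte[] bytecode) {
    return defineClass(name, bytecode, 0, bytecode.length);
  }

  public static void main(String[] args) {
    InMemoryClassLoader loader =
        new InMemoryClassLoader(InMemoryClassLoader.class.getClassLoader());
    try {
      loader.define("Bogus", new byte[] {1, 2, 3}); // not valid bytecode
    } catch (ClassFormatError e) {
      // The JVM verifier still rejects invalid class bytes as usual.
      System.out.println("rejected invalid bytecode");
    }
  }
}
```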






Re: Do we run findbugs or any other static code analysis tool as part of the build process?

2017-01-16 Thread David Smiley
See https://issues.apache.org/jira/browse/LUCENE-3973  Perhaps you care to
revive the issue.  I really look forward to static analysis checks.

On Sat, Jan 14, 2017 at 10:19 PM Pushkar Raste 
wrote:

> Hi,
> I saw a couple of concerning bugs in the code, like
>
>- comparing Strings, Integers, Floats, and objects of other wrapper types
>using == instead of the .equals() method.
>- accessing methods on a potentially null reference.
>
> I ran findbugs on the code base and found a lot of other errors as well. I
> am working on fixing some of them (most of the errors are in test
> cases).
>
> I was curious: do we run any static code analysis tool as part of the build
> process? More importantly, can we force builds to fail if findbugs errors
> cross a certain threshold?
>
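The first bug class quoted above is easy to reproduce; a minimal stdlib illustration of why == on boxed wrapper types is a trap:

```java
public class WrapperComparisonBug {
  public static void main(String[] args) {
    Integer a = 127, b = 127;   // boxed from the mandatory Integer cache (-128..127)
    Integer c = 1000, d = 1000; // typically boxed to distinct objects
    System.out.println(a == b);      // true, but only because of the cache
    System.out.println(c == d);      // false on a default JVM: reference comparison
    System.out.println(c.equals(d)); // true: value comparison, what was intended
  }
}
```

The == version appears to work in tests that use small values and then fails for larger ones, which is exactly the kind of bug findbugs flags.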
-- 
Lucene/Solr Search Committer, Consultant, Developer, Author, Speaker
LinkedIn: http://linkedin.com/in/davidwsmiley | Book:
http://www.solrenterprisesearchserver.com


[jira] [Commented] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824260#comment-15824260
 ] 

David Smiley commented on LUCENE-7636:
--

Thanks for doing this housekeeping!




[jira] [Commented] (LUCENE-7631) Enforce javac warnings

2017-01-16 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824253#comment-15824253
 ] 

David Smiley commented on LUCENE-7631:
--

Thanks for filing this issue!  Are all the changes in this patch necessary to 
get the build to pass?  So to clarify... no code (outside what the patch 
touches) needs adjustments?

> Enforce javac warnings
> --
>
> Key: LUCENE-7631
> URL: https://issues.apache.org/jira/browse/LUCENE-7631
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: general/build
>Reporter: Mike Drob
> Attachments: LUCENE-7631.patch
>
>
> Robert's comment on LUCENE-3973 suggested to take an incremental approach to 
> static analysis and leverage the java compiler warnings. I think this is easy 
> to do and is a reasonable change to make to protect the code base for the 
> future.
> We currently have many fewer warnings than we did a year or two years ago and 
> should ensure that we do not slide backwards.






[jira] [Commented] (SOLR-9386) Upgrade Zookeeper to 3.4.10

2017-01-16 Thread Kevin Risden (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824252#comment-15824252
 ] 

Kevin Risden commented on SOLR-9386:


Updated the JIRA title and description to mention 3.4.10 instead of 3.4.9

> Upgrade Zookeeper to 3.4.10
> ---
>
> Key: SOLR-9386
> URL: https://issues.apache.org/jira/browse/SOLR-9386
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9386.patch, 
> zookeeper-3.4.8-upgrade-tests-pass.patch, 
> zookeeper-3.4.9-upgrade-tests-fail.patch
>
>
> Zookeeper 3.4.10 release should be happening fairly soon, and the ZK issue 
> blocking incorporation into Solr (ZOOKEEPER-2383) has a 3.4.10-targetted 
> patch that fixes the test failures problem noted on SOLR-8724.






[jira] [Updated] (SOLR-9386) Upgrade Zookeeper to 3.4.10

2017-01-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9386:
---
Summary: Upgrade Zookeeper to 3.4.10  (was: Upgrade Zookeeper to 3.4.9)

> Upgrade Zookeeper to 3.4.10
> ---
>
> Key: SOLR-9386
> URL: https://issues.apache.org/jira/browse/SOLR-9386
> Project: Solr
>  Issue Type: Task
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Steve Rowe
> Attachments: SOLR-9386.patch, 
> zookeeper-3.4.8-upgrade-tests-pass.patch, 
> zookeeper-3.4.9-upgrade-tests-fail.patch
>
>
> Zookeeper 3.4.9 release should be happening fairly soon, and the ZK issue 
> blocking incorporation into Solr (ZOOKEEPER-2383) has a 3.4.9-targetted patch 
> that fixes the test failures problem noted on SOLR-8724.






[jira] [Updated] (SOLR-9386) Upgrade Zookeeper to 3.4.10

2017-01-16 Thread Kevin Risden (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kevin Risden updated SOLR-9386:
---
Description: Zookeeper 3.4.10 release should be happening fairly soon, and 
the ZK issue blocking incorporation into Solr (ZOOKEEPER-2383) has a 
3.4.10-targetted patch that fixes the test failures problem noted on SOLR-8724. 
 (was: Zookeeper 3.4.9 release should be happening fairly soon, and the ZK 
issue blocking incorporation into Solr (ZOOKEEPER-2383) has a 3.4.9-targetted 
patch that fixes the test failures problem noted on SOLR-8724.)




[jira] [Commented] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824229#comment-15824229
 ] 

Adrien Grand commented on LUCENE-7638:
--

Maybe {{TermAutomatonQuery}} would be a good fit for that problem?

> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>
> The QueryBuilder creates a graph query when the underlying TokenStream 
> contains tokens with a PositionLengthAttribute greater than 1.
> These TokenStreams are in fact graphs (lattices, to be more precise) where 
> synonyms can span multiple terms. 
> Currently the graph query is built by visiting all the paths of the graph 
> TokenStream. For instance if you have a synonym like "ny, new york" and you 
> search for "new york city", the query builder would produce two paths:
> "new york city", "ny city"
> This can quickly explode when the number of multi-term synonyms increases. 
> The query "ny ny" for instance would produce 4 paths and so on.
> For boolean queries with should or must clauses it should be more efficient 
> to build a boolean query that merges all the intersections in the graph. So 
> instead of "new york city", "ny city" we could produce:
> "+((+new +york) ny) +city"
> The attached patch is a proposal to do that instead of the all-paths solution.
> The patch transforms multi-term synonyms into a graph query for each 
> intersection in the graph. This is not done in this patch but we could also 
> create a specialized query that gives equivalent scores to multi-term 
> synonyms like the SynonymQuery does for single-term synonyms.
> For phrase queries this patch does not change the current behavior but we 
> could also use the new method to create an optimized graph SpanQuery.
> [~mattweber] I think this patch could optimize a lot of cases where multiple 
> multi-term synonyms are present in a single request. Could you take a look?
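
The path explosion described above can be illustrated with a toy model of the 
token graph (plain Python; a hypothetical structure for illustration, not 
Lucene's TokenStream API). Each position holds a list of alternatives, and a 
multi-term synonym such as "ny" for "new york" is one alternative spanning the 
slot; visiting every path is then a Cartesian product over the alternatives, 
which is exactly what blows up:

```python
from itertools import product

# Toy token graph: at each position, a list of alternatives; each alternative
# is a tuple of terms it expands to. Not Lucene's actual API.
def all_paths(positions):
    # Enumerating every path is a Cartesian product over the alternatives at
    # each graph node -- the source of the combinatorial explosion.
    return [" ".join(term for alt in combo for term in alt)
            for combo in product(*positions)]

ny = [("new", "york"), ("ny",)]          # synonym: ny <-> new york
print(all_paths([ny, [("city",)]]))      # ['new york city', 'ny city']
print(len(all_paths([ny, ny])))          # 4 -- "ny ny" yields 2 x 2 paths
```

With k multi-term synonyms in a query, the number of paths grows as the product 
of the alternative counts, which motivates merging intersections instead.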






[jira] [Updated] (SOLR-9970) Wrong groups order with sort and group.sort

2017-01-16 Thread Oleg Demkovych (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Demkovych updated SOLR-9970:
-
Description: 
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index": 1,
"key": "key1",
"rank": 1
  },
  {
"index": 1,
"key": "key1",
"rank": 3
  },
  {
"index": 1,
"key": "key2",
"rank": 2
  },
  {
"index": 1,
"key": "key3",
"rank": 1
  },
  {
"index": 2,
"key": "key3",
"rank": 3
  },
  {
"index": 3,
"key": "key3",
"rank": 1
  }
]
{code}

*Steps to reproduce:*

Execute query: *q=\*:\*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups": [
  {
"groupValue": "key2",
"doclist": {
  "numFound": 1,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key2",
  "rank": 2
}
  ]
}
  },
  {
"groupValue": "key3",
"doclist": {
  "numFound": 3,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key3",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key1",
"doclist": {
  "numFound": 2,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key1",
  "rank": 1
}
  ]
}
  }
]
{code}
*Actual result:*
{code}
"groups": [
  {
"groupValue": "key1",
"doclist": {
  "numFound": 2,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key1",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key3",
"doclist": {
  "numFound": 3,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key3",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key2",
"doclist": {
  "numFound": 1,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key2",
  "rank": 2
}
  ]
}
  }
]
{code}

Groups should be ordered based on the first document in each group after 
group.sort was applied.
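
A toy model of how group.sort orders documents *within* each group (plain 
Python, not Solr internals; the contested part in this report is only the 
ordering *between* groups):

```python
from collections import defaultdict

docs = [
    {"index": 1, "key": "key1", "rank": 1},
    {"index": 1, "key": "key1", "rank": 3},
    {"index": 1, "key": "key2", "rank": 2},
    {"index": 1, "key": "key3", "rank": 1},
    {"index": 2, "key": "key3", "rank": 3},
    {"index": 3, "key": "key3", "rank": 1},
]

# group.field=key: bucket documents by their group value.
groups = defaultdict(list)
for doc in docs:
    groups[doc["key"]].append(doc)

# group.sort=rank desc: order documents *inside* each group.
for bucket in groups.values():
    bucket.sort(key=lambda d: d["rank"], reverse=True)

# The head document of each group after group.sort:
heads = {key: bucket[0] for key, bucket in groups.items()}
print(heads["key1"]["rank"], heads["key2"]["rank"], heads["key3"]["rank"])  # 3 2 3
```

The open question in the issue is which document the top-level `sort` should 
inspect when ranking the groups themselves.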

  was:
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index": 1,
"key": "key1",
"rank": 1
  },
  {
"index": 1,
"key": "key1",
"rank": 3
  },
  {
"index": 1,
"key": "key2",
"rank": 2
  },
  {
"index": 1,
"key": "key3",
"rank": 1
  },
  {
"index": 2,
"key": "key3",
"rank": 3
  },
  {
"index": 3,
"key": "key3",
"rank": 1
  }
]
{code}

*Steps to reproduce:*

Execute query: *q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups": [
  {
"groupValue": "key2",
"doclist": {
  "numFound": 1,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key2",
  "rank": 2
}
  ]
}
  },
  {
"groupValue": "key3",
"doclist": {
  "numFound": 3,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key3",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key1",
"doclist": {
  "numFound": 2,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key1",
  "rank": 1
}
  ]
}
  }
]
{code}
*Actual result:*
{code}
"groups": [
  {
"groupValue": "key1",
"doclist": {
  "numFound": 2,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key1",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key3",
"doclist": {
  "numFound": 3,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key3",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key2",
"doclist": {
  "numFound": 1,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key2",
  "rank": 2
}
  ]
}
  }
]
{code}

Groups should be ordered based on first document in group after group.sort was 
applied.


> Wrong groups order with sort and group.sort
> ---
>
> Key: SOLR-9970
> URL: https://issues.apache.org/jira/browse/SOLR-9970
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2.1, 6.3
>Reporter: Oleg Demkovych
>
> Solr sorting with group.sort and sort returns inappropriate order.
> Documents example:
> {code}
> [
>   {
> "index": 1,
> "key": "key1",
> "rank": 1
>   },
>   {
> "index": 1,
> "key": "key1",
> "rank": 3
>   },
>   {
> "index": 1,
> "key": "key2",
> "rank": 2
>   },
>   {
> "index": 1,
> "key": "key3",
> "rank": 1
>   },
>   {
> "index": 2,
> "key": "key3",
> 

[jira] [Updated] (SOLR-9970) Wrong groups order with sort and group.sort

2017-01-16 Thread Oleg Demkovych (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Demkovych updated SOLR-9970:
-
Description: 
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index": 1,
"key": "key1",
"rank": 1
  },
  {
"index": 1,
"key": "key1",
"rank": 3
  },
  {
"index": 1,
"key": "key2",
"rank": 2
  },
  {
"index": 1,
"key": "key3",
"rank": 1
  },
  {
"index": 2,
"key": "key3",
"rank": 3
  },
  {
"index": 3,
"key": "key3",
"rank": 1
  }
]
{code}

*Steps to reproduce:*

Execute query: *q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups": [
  {
"groupValue": "key2",
"doclist": {
  "numFound": 1,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key2",
  "rank": 2
}
  ]
}
  },
  {
"groupValue": "key3",
"doclist": {
  "numFound": 3,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key3",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key1",
"doclist": {
  "numFound": 2,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key1",
  "rank": 1
}
  ]
}
  }
]
{code}
*Actual result:*
{code}
"groups": [
  {
"groupValue": "key1",
"doclist": {
  "numFound": 2,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key1",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key3",
"doclist": {
  "numFound": 3,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key3",
  "rank": 1
}
  ]
}
  },
  {
"groupValue": "key2",
"doclist": {
  "numFound": 1,
  "start": 0,
  "docs": [
{
  "index": 1,
  "key": "key2",
  "rank": 2
}
  ]
}
  }
]
{code}

Groups should be ordered based on the first document in each group after 
group.sort was applied.

  was:
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index":1,
"key":"key1",
"rank":1},
  {
"index":1,
"key":"key1",
"rank":3},
  {
"index":1,
"key":"key2",
"rank":2},
  {
"index":1,
"key":"key3",
"rank":1},
  {
"index":2,
"key":"key3",
"rank":3},
  {
"index":3,
"key":"key3",
"rank":1}
]
{code}

*Steps to reproduce:*

Execute query: *q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups":[{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }}]
{code}
*Actual result:*
{code}
"groups":[{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }}]
{code}

Groups should be ordered based on first document in group after group.sort was 
applied.


> Wrong groups order with sort and group.sort
> ---
>
> Key: SOLR-9970
> URL: https://issues.apache.org/jira/browse/SOLR-9970
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2.1, 6.3
>Reporter: Oleg Demkovych
>
> Solr sorting with group.sort and sort returns inappropriate order.
> Documents example:
> {code}
> [
>   {
> "index": 1,
> "key": "key1",
> "rank": 1
>   },
>   {
> "index": 1,
> "key": "key1",
> "rank": 3
>   },
>   {
> "index": 1,
> "key": "key2",
> "rank": 2
>   },
>   {
> "index": 1,
> "key": "key3",
> "rank": 1
>   },
>   {
> 

[jira] [Updated] (SOLR-9970) Wrong groups order with sort and group.sort

2017-01-16 Thread Oleg Demkovych (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Demkovych updated SOLR-9970:
-
Description: 
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index":1,
"key":"key1",
"rank":1},
  {
"index":1,
"key":"key1",
"rank":3},
  {
"index":1,
"key":"key2",
"rank":2},
  {
"index":1,
"key":"key3",
"rank":1},
  {
"index":2,
"key":"key3",
"rank":3},
  {
"index":3,
"key":"key3",
"rank":1}
]
{code}

*Steps to reproduce:*

Execute query: *q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups":[{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }}]
{code}
*Actual result:*
{code}
"groups":[{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }}]
{code}

Groups should be ordered based on first document in group after group.sort was 
applied.

  was:
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index":1,
"key":"key1",
"rank":1},
  {
"index":1,
"key":"key1",
"rank":3},
  {
"index":1,
"key":"key2",
"rank":2},
  {
"index":1,
"key":"key3",
"rank":1},
  {
"index":2,
"key":"key3",
"rank":3},
  {
"index":3,
"key":"key3",
"rank":1}
]
{code}

*Steps to reproduce:*

Execute query: *q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups":[{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }}]
{code}
*Actual result:*
{code}
"groups":[{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }}]
{code}

Groups should be ordered after 


> Wrong groups order with sort and group.sort
> ---
>
> Key: SOLR-9970
> URL: https://issues.apache.org/jira/browse/SOLR-9970
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2.1, 6.3
>Reporter: Oleg Demkovych
>
> Solr sorting with group.sort and sort returns inappropriate order.
> Documents example:
> {code}
> [
>   {
> "index":1,
> "key":"key1",
> "rank":1},
>   {
> "index":1,
> "key":"key1",
> "rank":3},
>   {
> "index":1,
> "key":"key2",
> "rank":2},
>   {
> "index":1,
> "key":"key3",
> "rank":1},
>  

[jira] [Updated] (SOLR-9970) Wrong groups order with sort and group.sort

2017-01-16 Thread Oleg Demkovych (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-9970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Oleg Demkovych updated SOLR-9970:
-
Description: 
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
{code}
[
  {
"index":1,
"key":"key1",
"rank":1},
  {
"index":1,
"key":"key1",
"rank":3},
  {
"index":1,
"key":"key2",
"rank":2},
  {
"index":1,
"key":"key3",
"rank":1},
  {
"index":2,
"key":"key3",
"rank":3},
  {
"index":3,
"key":"key3",
"rank":1}
]
{code}

*Steps to reproduce:*

Execute query: *q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc*

*Expected result:*
{code}
"groups":[{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }}]
{code}
*Actual result:*
{code}
"groups":[{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }}]
{code}

Groups should be ordered after 

  was:
Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
[
  {
"index":1,
"key":"key1",
"rank":1},
  {
"index":1,
"key":"key1",
"rank":3},
  {
"index":1,
"key":"key2",
"rank":2},
  {
"index":1,
"key":"key3",
"rank":1},
  {
"index":2,
"key":"key3",
"rank":3},
  {
"index":3,
"key":"key3",
"rank":1}
]

Steps to reproduce:
Execute query: q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc

Expected result:

"groups":[{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }}]

Actual result:

"groups":[{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }}]


> Wrong groups order with sort and group.sort
> ---
>
> Key: SOLR-9970
> URL: https://issues.apache.org/jira/browse/SOLR-9970
> Project: Solr
>  Issue Type: Bug
>  Security Level: Public(Default Security Level. Issues are Public) 
>  Components: search
>Affects Versions: 6.2.1, 6.3
>Reporter: Oleg Demkovych
>
> Solr sorting with group.sort and sort returns inappropriate order.
> Documents example:
> {code}
> [
>   {
> "index":1,
> "key":"key1",
> "rank":1},
>   {
> "index":1,
> "key":"key1",
> "rank":3},
>   {
> "index":1,
> "key":"key2",
> "rank":2},
>   {
> "index":1,
> "key":"key3",
> "rank":1},
>   {
> "index":2,
> "key":"key3",
> "rank":3},
>   {
> "index":3,
> "key":"key3",
> 

[jira] [Created] (SOLR-9970) Wrong groups order with sort and group.sort

2017-01-16 Thread Oleg Demkovych (JIRA)
Oleg Demkovych created SOLR-9970:


 Summary: Wrong groups order with sort and group.sort
 Key: SOLR-9970
 URL: https://issues.apache.org/jira/browse/SOLR-9970
 Project: Solr
  Issue Type: Bug
  Security Level: Public (Default Security Level. Issues are Public)
  Components: search
Affects Versions: 6.3, 6.2.1
Reporter: Oleg Demkovych


Solr sorting with group.sort and sort returns inappropriate order.

Documents example:
[
  {
"index":1,
"key":"key1",
"rank":1},
  {
"index":1,
"key":"key1",
"rank":3},
  {
"index":1,
"key":"key2",
"rank":2},
  {
"index":1,
"key":"key3",
"rank":1},
  {
"index":2,
"key":"key3",
"rank":3},
  {
"index":3,
"key":"key3",
"rank":1}
]

Steps to reproduce:
Execute query: q=*:*&group=true&group.field=key&sort=index asc&group.sort=rank desc

Expected result:

"groups":[{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }}]

Actual result:

"groups":[{
  "groupValue":"key1",
  "doclist":{"numFound":2,"start":0,"docs":[
  {
"index":1,
"key":"key1",
"rank":1}]
  }},
{
  "groupValue":"key3",
  "doclist":{"numFound":3,"start":0,"docs":[
  {
"index":1,
"key":"key3",
"rank":1}]
  }},
{
  "groupValue":"key2",
  "doclist":{"numFound":1,"start":0,"docs":[
  {
"index":1,
"key":"key2",
"rank":2}]
  }}]






[JENKINS] Lucene-Solr-6.x-Solaris (64bit/jdk1.8.0) - Build # 622 - Still Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Solaris/622/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.hdfs.HdfsRecoveryZkTest

Error Message:
ObjectTracker found 1 object(s) that were not released!!! [HdfsTransactionLog] 
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException  at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
  at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)  
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)  at 
org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)  at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
  at sun.reflect.GeneratedConstructorAccessor105.newInstance(Unknown Source)  
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
  at java.lang.reflect.Constructor.newInstance(Constructor.java:423)  at 
org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)  at 
org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)  at 
org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)  at 
org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)  at 
org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)  at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)  at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
  at java.util.concurrent.FutureTask.run(FutureTask.java:266)  at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
  at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
 at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
 at java.lang.Thread.run(Thread.java:745)  

Stack Trace:
java.lang.AssertionError: ObjectTracker found 1 object(s) that were not 
released!!! [HdfsTransactionLog]
org.apache.solr.common.util.ObjectReleaseTracker$ObjectTrackerException
at 
org.apache.solr.common.util.ObjectReleaseTracker.track(ObjectReleaseTracker.java:43)
at 
org.apache.solr.update.HdfsTransactionLog.<init>(HdfsTransactionLog.java:130)
at org.apache.solr.update.HdfsUpdateLog.init(HdfsUpdateLog.java:202)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:137)
at org.apache.solr.update.UpdateHandler.<init>(UpdateHandler.java:94)
at 
org.apache.solr.update.DirectUpdateHandler2.<init>(DirectUpdateHandler2.java:102)
at sun.reflect.GeneratedConstructorAccessor105.newInstance(Unknown 
Source)
at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.solr.core.SolrCore.createInstance(SolrCore.java:753)
at org.apache.solr.core.SolrCore.createUpdateHandler(SolrCore.java:815)
at org.apache.solr.core.SolrCore.initUpdateHandler(SolrCore.java:1065)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:930)
at org.apache.solr.core.SolrCore.<init>(SolrCore.java:823)
at org.apache.solr.core.CoreContainer.create(CoreContainer.java:889)
at 
org.apache.solr.core.CoreContainer.lambda$load$3(CoreContainer.java:541)
at 
com.codahale.metrics.InstrumentedExecutorService$InstrumentedCallable.call(InstrumentedExecutorService.java:197)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:229)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)


at __randomizedtesting.SeedInfo.seed([AA1668514CF04FD7]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertNull(Assert.java:551)
at 
org.apache.solr.SolrTestCaseJ4.teardownTestCases(SolrTestCaseJ4.java:269)
at sun.reflect.GeneratedMethodAccessor73.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:870)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824147#comment-15824147
 ] 

ASF subversion and git services commented on SOLR-9906:
---

Commit efc7ee0f0c9154fe58671601fdc053540c97ff62 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=efc7ee0 ]

SOLR-9906: Fix dodgy test check


> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs Replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 
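
The proposed checks boil down to two file-system predicates, sketched here in 
plain Python (the flat directory layout is hypothetical, not Solr's actual core 
data directory structure):

```python
from pathlib import Path
import tempfile

def recovered_via_peersync(core_data_dir):
    # A successful PeerSync never triggers full replication, so the
    # replication handler should not have written replication.properties.
    return not (Path(core_data_dir) / "replication.properties").exists()

def replication_state_unchanged(core_data_dir, before_bytes):
    # Leader-failure case: the file may pre-exist from the initial full
    # replication, but its contents should not change afterwards.
    # before_bytes is the snapshot taken earlier (None if it didn't exist).
    props = Path(core_data_dir) / "replication.properties"
    return props.read_bytes() == before_bytes if props.exists() else before_bytes is None

with tempfile.TemporaryDirectory() as d:
    print(recovered_via_peersync(d))   # True -- nothing was replicated
    (Path(d) / "replication.properties").write_bytes(b"ts=1")
    print(recovered_via_peersync(d))   # False -- full replication happened
```

Compared with counting replication-handler requests, these predicates do not 
depend on timing or retry behavior, which is what made the old check flaky.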






[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824146#comment-15824146
 ] 

ASF subversion and git services commented on SOLR-9906:
---

Commit e13a6fa078890c3f3e0d9cebb1bf3329d94e46a6 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=e13a6fa ]

SOLR-9906: Fix dodgy test check


> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs Replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 






[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824145#comment-15824145
 ] 

ASF subversion and git services commented on SOLR-9906:
---

Commit 3795c997257868b66306a2c105f095f8a82326c7 in lucene-solr's branch 
refs/heads/branch_6_4 from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=3795c99 ]

SOLR-9906: Fix dodgy test check


> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs Replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 






RE: 6.4 release

2017-01-16 Thread Uwe Schindler
Oh yeah,

please fix! The Jenkins failures are annoying. Is there an issue about that?

Uwe

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de

From: Alan Woodward [mailto:a...@flax.co.uk] 
Sent: Monday, January 16, 2017 3:58 PM
To: dev@lucene.apache.org
Subject: Re: 6.4 release


There’s a small bug fix on SOLR-9906 I’d like to push, which should address the 
frequent failures in PeerSyncReplicationTest we’re seeing.

Alan Woodward
www.flax.co.uk

On 14 Jan 2017, at 14:57, jim ferenczi wrote:

Hi,

The release branch for 6.4 is pushed so the feature freeze phase has officially 
started.
I don't have an admin account on Jenkins so any help would be appreciated. We 
need to copy the job for the new branch. 

No new features may be committed to the branch.
Documentation patches, build patches and serious bug fixes may be committed to 
the branch. However, you should submit all patches you want to commit to Jira 
first to give others the chance to review and possibly vote against the patch. 
Keep in mind that it is our main intention to keep the branch as stable as 
possible.
All patches that are intended for the branch should first be committed to the 
unstable branch, merged into the stable branch, and then into the current 
release branch.
Normal unstable and stable branch development may continue as usual. However, 
if you plan to commit a big change to the unstable branch while the branch 
feature freeze is in effect, think twice: can't the addition wait a couple more 
days? Merges of bug fixes into the branch may become more difficult.
Only Jira issues with Fix version "6.4" and priority "Blocker" will delay a 
release candidate build.

Thanks, 
Jim


Re: 6.4 release

2017-01-16 Thread jim ferenczi
You can still push bug fixes Alan, I'll create the first RC tomorrow if
all the builds are green. This will leave some time for your patch to be
tested.

2017-01-16 15:57 GMT+01:00 Alan Woodward :

> There’s a small bug fix on SOLR-9906 I’d like to push, which should
> address the frequent failures in PeerSyncReplicationTest we’re seeing.
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 14 Jan 2017, at 14:57, jim ferenczi  wrote:
>
> Hi,
>
> The release branch for 6.4 is pushed so the feature freeze phase has
> officially started.
> I don't have an admin account on Jenkins so any help would be appreciated.
> We need to copy the job for the new branch.
>
> No new features may be committed to the branch.
> Documentation patches, build patches and serious bug fixes may be
> committed to the branch. However, you should submit all patches you want to
> commit to Jira first to give others the chance to review and possibly vote
> against the patch. Keep in mind that it is our main intention to keep the
> branch as stable as possible.
> All patches that are intended for the branch should first be committed to
> the unstable branch, merged into the stable branch, and then into the
> current release branch.
> Normal unstable and stable branch development may continue as usual.
> However, if you plan to commit a big change to the unstable branch while
> the branch feature freeze is in effect, think twice: can't the addition
> wait a couple more days? Merges of bug fixes into the branch may become
> more difficult.
> Only Jira issues with Fix version "6.4" and priority "Blocker" will delay
> a release candidate build.
>
> Thanks,
> Jim
>
>
>


Re: 6.4 release

2017-01-16 Thread Alan Woodward
There’s a small bug fix on SOLR-9906 I’d like to push, which should address the 
frequent failures in PeerSyncReplicationTest we’re seeing.

Alan Woodward
www.flax.co.uk


> On 14 Jan 2017, at 14:57, jim ferenczi  wrote:
> 
> Hi,
> 
> The release branch for 6.4 is pushed so the feature freeze phase has 
> officially started.
> I don't have an admin account on Jenkins so any help would be appreciated. We 
> need to copy the job for the new branch. 
> 
> No new features may be committed to the branch.
> Documentation patches, build patches and serious bug fixes may be committed 
> to the branch. However, you should submit all patches you want to commit to 
> Jira first to give others the chance to review and possibly vote against the 
> patch. Keep in mind that it is our main intention to keep the branch as 
> stable as possible.
> All patches that are intended for the branch should first be committed to the 
> unstable branch, merged into the stable branch, and then into the current 
> release branch.
> Normal unstable and stable branch development may continue as usual. However, 
> if you plan to commit a big change to the unstable branch while the branch 
> feature freeze is in effect, think twice: can't the addition wait a couple 
> more days? Merges of bug fixes into the branch may become more difficult.
> Only Jira issues with Fix version "6.4" and priority "Blocker" will delay a 
> release candidate build.
> 
> Thanks, 
> Jim



[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824113#comment-15824113
 ] 

Alan Woodward commented on SOLR-9906:
-

Yes to both - don't worry about a patch, I'll make the change and push it.

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs Replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 
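The file-based checks proposed above can be sketched in a few lines. This is a minimal illustration in plain Python with hypothetical helper names, not Solr's actual test code; the real tests would look inside each core's data directory:

```python
import os

def recovered_via_peersync(core_data_dir: str) -> bool:
    """A node that recovered via PeerSync never triggers full index
    replication, so the replication handler never writes
    replication.properties into the core's data directory."""
    return not os.path.exists(
        os.path.join(core_data_dir, "replication.properties"))

def replication_properties_unchanged(before: bytes, after: bytes) -> bool:
    """For the fresh-start test: a node that did not re-enter replication
    recovery should leave the file's contents untouched."""
    return before == after
```

Compared with counting replication-handler requests, both predicates are deterministic: they depend only on local on-disk state, not on request timing.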



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-7638:
-
Description: 
The QueryBuilder creates a graph query when the underlying TokenStream contains 
tokens with a PositionLengthAttribute greater than 1.
These TokenStreams are in fact graphs (lattices, to be more precise) where 
synonyms can span multiple terms. 
Currently the graph query is built by visiting all the paths of the graph 
TokenStream. For instance, if you have a synonym like "ny, new york" and you 
search for "new york city", the query builder produces two paths:
"new york city", "ny city"
This can quickly explode as the number of multi-term synonyms increases. 
The query "ny ny", for instance, would produce 4 paths, and so on.
For boolean queries with should or must clauses it should be more efficient to 
build a boolean query that merges all the intersections in the graph. So 
instead of "new york city", "ny city" we could produce:
"+((+new +york) ny) +city"

The attached patch is a proposal to do that instead of the all-paths solution.
The patch transforms multi-term synonyms into a graph query for each 
intersection in the graph. This is not done in this patch, but we could also 
create a specialized query that gives equivalent scores to multi-term synonyms, 
like the SynonymQuery does for single-term synonyms.
For phrase queries this patch does not change the current behavior, but we 
could also use the new method to create an optimized graph SpanQuery.

[~mattweber] I think this patch could optimize a lot of cases where multiple 
multi-term synonyms are present in a single request. Could you take a look?

  was:
The QueryBuilder now creates a graph query when the underlying TokenStream 
contains tokens with a PositionLengthAttribute greater than 1.
These TokenStreams are in fact graphs (lattices, to be more precise) where 
synonyms can span multiple terms. 
Currently the graph query is built by visiting all the paths of the graph 
TokenStream. For instance, if you have a synonym like "ny, new york" and you 
search for "new york city", the query builder produces two paths:
"new york city", "ny city"
This can quickly explode as the number of multi-term synonyms increases. 
The query "ny ny", for instance, would produce 4 paths, and so on.
For boolean queries with should or must clauses it should be more efficient to 
build a boolean query that merges all the intersections in the graph. So 
instead of "new york city", "ny city" we could produce:
"+((+new +york) ny) +city"

The attached patch is a proposal to do that instead of the all-paths solution.
The patch transforms multi-term synonyms into a graph query for each 
intersection in the graph. This is not done in this patch, but we could also 
create a specialized query that gives equivalent scores to multi-term synonyms, 
like the SynonymQuery does for single-term synonyms.
For phrase queries this patch does not change the current behavior, but we 
could also use the new method to create an optimized graph SpanQuery.

[~mattweber] I think this patch could optimize a lot of cases where multiple 
multi-term synonyms are present in a single request. Could you take a look?


> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>
> The QueryBuilder creates a graph query when the underlying TokenStream 
> contains tokens with a PositionLengthAttribute greater than 1.
> These TokenStreams are in fact graphs (lattices, to be more precise) where 
> synonyms can span multiple terms. 
> Currently the graph query is built by visiting all the paths of the graph 
> TokenStream. For instance, if you have a synonym like "ny, new york" and you 
> search for "new york city", the query builder produces two paths:
> "new york city", "ny city"
> This can quickly explode as the number of multi-term synonyms increases. 
> The query "ny ny", for instance, would produce 4 paths, and so on.
> For boolean queries with should or must clauses it should be more efficient 
> to build a boolean query that merges all the intersections in the graph. So 
> instead of "new york city", "ny city" we could produce:
> "+((+new +york) ny) +city"
> The attached patch is a proposal to do that instead of the all-paths solution.
> The patch transforms multi-term synonyms into a graph query for each 
> intersection in the graph. This is not done in this patch, but we could also 
> create a specialized query that gives equivalent scores to multi-term 
> synonyms, like the SynonymQuery does for single-term synonyms.
> For phrase queries this patch does not change the current behavior, but we could 
> also use 

[jira] [Comment Edited] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824089#comment-15824089
 ] 

Pushkar Raste edited comment on SOLR-9906 at 1/16/17 2:48 PM:
--

[~romseygeek] - Thank you for catching the bug. I think the check can be fixed 
by changing {{slice.getState() == State.ACTIVE}} to 
{{slice.getLeader().getState() == Replica.State.ACTIVE}}.

Let me know if that is correct and I will attach a patch to fix it (not sure 
if I have to attach a patch for this issue in its entirety, or just the patch 
to fix the slice vs replica state).

By "the log message is badly set up", do you mean the line {{log.debug("Old 
leader {}, new leader. New leader got elected in {} ms", oldLeader, 
slice.getLeader(),timeOut.timeElapsed(MILLISECONDS) );}} is missing a {} 
placeholder for the new leader?


was (Author: praste):
[~romseygeek] - Thank you for catching the bug. I think the check can be fixed 
by changing {{slice.getState() == State.ACTIVE}} to 
{{slice.getLeader().getState() == Replica.State.ACTIVE}}.

Let me know if that is correct and I will attach a patch to fix it (not sure 
if I have to attach a patch for this issue in its entirety, or just the patch 
to fix the slice vs replica state).

What do you mean by "the log message is badly set up"?

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on number of requests made to the leader's replication handler 
> to check if node recovered via PeerSync or replication. This check is not 
> very reliable and we have seen failures in the past. 
> While tinkering with different way to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is idea for better 
> way to distinguish recovery via PeerSync vs Replication. 
> * For {{PeerSyncReplicationTest}}, if node successfully recovers via 
> PeerSync, then file {{replication.properties}} should not exist
> For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node does 
> not go into replication recovery after the leader failure, contents 
> {{replication.properties}} should not change 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-9906) Use better check to validate if node recovered via PeerSync or Replication

2017-01-16 Thread Pushkar Raste (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-9906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824089#comment-15824089
 ] 

Pushkar Raste commented on SOLR-9906:
-

[~romseygeek] - Thank you for catching the bug. I think the check can be fixed 
by changing {{slice.getState() == State.ACTIVE}} to 
{{slice.getLeader().getState() == Replica.State.ACTIVE}}.

Let me know if that is correct and I will attach a patch to fix it (not sure 
if I have to attach a patch for this issue in its entirety, or just the patch 
to fix the slice vs replica state).

What do you mean by "the log message is badly set up"?
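The slice-vs-replica distinction behind the proposed fix can be sketched with a toy model. These are hypothetical stand-in classes, not Solr's actual {{Slice}}/{{Replica}} API; the point is only that a shard's own state and its leader replica's state are different things:

```python
from enum import Enum

class State(Enum):
    ACTIVE = "active"
    DOWN = "down"
    RECOVERING = "recovering"

class Replica:
    def __init__(self, state: State):
        self.state = state

class Slice:
    """A shard: it has its own state plus a leader replica."""
    def __init__(self, state: State, leader: Replica):
        self.state = state
        self.leader = leader

def new_leader_is_active(slice_: Slice) -> bool:
    # Buggy check: the slice can report ACTIVE while its freshly
    # elected leader replica is still RECOVERING:
    #   return slice_.state == State.ACTIVE
    # Proposed check: inspect the leader replica's own state instead.
    return slice_.leader.state == State.ACTIVE
```

With this model, a slice in state ACTIVE whose leader is still RECOVERING is correctly reported as not yet having an active leader.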

> Use better check to validate if node recovered via PeerSync or Replication
> --
>
> Key: SOLR-9906
> URL: https://issues.apache.org/jira/browse/SOLR-9906
> Project: Solr
>  Issue Type: Improvement
>  Security Level: Public(Default Security Level. Issues are Public) 
>Reporter: Pushkar Raste
>Assignee: Noble Paul
>Priority: Minor
> Fix For: 6.4
>
> Attachments: SOLR-9906.patch, SOLR-9906.patch, 
> SOLR-PeerSyncVsReplicationTest.diff
>
>
> Tests {{LeaderFailureAfterFreshStartTest}} and {{PeerSyncReplicationTest}} 
> currently rely on the number of requests made to the leader's replication 
> handler to check whether a node recovered via PeerSync or replication. This 
> check is not very reliable and we have seen failures in the past. 
> While tinkering with different ways to write a better test I found 
> [SOLR-9859|SOLR-9859]. Now that SOLR-9859 is fixed, here is an idea for a 
> better way to distinguish recovery via PeerSync vs Replication: 
> * For {{PeerSyncReplicationTest}}, if the node successfully recovers via 
> PeerSync, then the file {{replication.properties}} should not exist
> * For {{LeaderFailureAfterFreshStartTest}}, if the freshly replicated node 
> does not go into replication recovery after the leader failure, the contents 
> of {{replication.properties}} should not change 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Jim Ferenczi (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7638?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jim Ferenczi updated LUCENE-7638:
-
Attachment: LUCENE-7638.patch

> Optimize graph query produced by QueryBuilder
> -
>
> Key: LUCENE-7638
> URL: https://issues.apache.org/jira/browse/LUCENE-7638
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Jim Ferenczi
> Attachments: LUCENE-7638.patch
>
>
> The QueryBuilder now creates a graph query when the underlying TokenStream 
> contains tokens with a PositionLengthAttribute greater than 1.
> These TokenStreams are in fact graphs (lattices, to be more precise) where 
> synonyms can span multiple terms. 
> Currently the graph query is built by visiting all the paths of the graph 
> TokenStream. For instance, if you have a synonym like "ny, new york" and you 
> search for "new york city", the query builder produces two paths:
> "new york city", "ny city"
> This can quickly explode as the number of multi-term synonyms increases. 
> The query "ny ny", for instance, would produce 4 paths, and so on.
> For boolean queries with should or must clauses it should be more efficient 
> to build a boolean query that merges all the intersections in the graph. So 
> instead of "new york city", "ny city" we could produce:
> "+((+new +york) ny) +city"
> The attached patch is a proposal to do that instead of the all-paths solution.
> The patch transforms multi-term synonyms into a graph query for each 
> intersection in the graph. This is not done in this patch, but we could also 
> create a specialized query that gives equivalent scores to multi-term 
> synonyms, like the SynonymQuery does for single-term synonyms.
> For phrase queries this patch does not change the current behavior, but we 
> could also use the new method to create an optimized graph SpanQuery.
> [~mattweber] I think this patch could optimize a lot of cases where multiple 
> multi-term synonyms are present in a single request. Could you take a look?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7638) Optimize graph query produced by QueryBuilder

2017-01-16 Thread Jim Ferenczi (JIRA)
Jim Ferenczi created LUCENE-7638:


 Summary: Optimize graph query produced by QueryBuilder
 Key: LUCENE-7638
 URL: https://issues.apache.org/jira/browse/LUCENE-7638
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Jim Ferenczi


The QueryBuilder now creates a graph query when the underlying TokenStream 
contains tokens with a PositionLengthAttribute greater than 1.
These TokenStreams are in fact graphs (lattices, to be more precise) where 
synonyms can span multiple terms. 
Currently the graph query is built by visiting all the paths of the graph 
TokenStream. For instance, if you have a synonym like "ny, new york" and you 
search for "new york city", the query builder produces two paths:
"new york city", "ny city"
This can quickly explode as the number of multi-term synonyms increases. 
The query "ny ny", for instance, would produce 4 paths, and so on.
For boolean queries with should or must clauses it should be more efficient to 
build a boolean query that merges all the intersections in the graph. So 
instead of "new york city", "ny city" we could produce:
"+((+new +york) ny) +city"

The attached patch is a proposal to do that instead of the all-paths solution.
The patch transforms multi-term synonyms into a graph query for each 
intersection in the graph. This is not done in this patch, but we could also 
create a specialized query that gives equivalent scores to multi-term synonyms, 
like the SynonymQuery does for single-term synonyms.
For phrase queries this patch does not change the current behavior, but we 
could also use the new method to create an optimized graph SpanQuery.

[~mattweber] I think this patch could optimize a lot of cases where multiple 
multi-term synonyms are present in a single request. Could you take a look?
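The path explosion is easy to see by enumerating paths through a token lattice. The following is a language-agnostic sketch using plain data structures, not Lucene's TokenStream or QueryBuilder API:

```python
from itertools import product

def all_paths(lattice):
    """Visit every path through the lattice, as the current query
    builder does. Each position holds the alternatives starting there;
    a multi-term synonym such as "new york" for "ny" is one alternative."""
    return [" ".join(p) for p in product(*lattice)]

# Query "new york city" with the synonym "ny, new york":
paths = all_paths([["new york", "ny"], ["city"]])
# -> ["new york city", "ny city"]

# "ny ny" expands both positions independently: 2 * 2 = 4 paths,
# so the path count grows multiplicatively with each synonym position.
four = all_paths([["new york", "ny"], ["new york", "ny"]])

# The proposed alternative merges each intersection in the graph into a
# single boolean clause instead of multiplying whole paths out:
merged = "+((+new +york) ny) +city"
```

The merged boolean form keeps the query size linear in the number of graph intersections rather than exponential in the number of multi-term synonyms.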



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-01-16 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7628:
--
Attachment: LUCENE-7628.patch

Here's a patch changing getChildren() to only return matching subscorers.

> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.4
>
> Attachments: LUCENE-7628.patch, LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.
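The gist of getMatchingChildren() is to filter the child scorers by whether they truly match the current document, not merely its two-phase approximation. A toy sketch with hypothetical class names that mirror, but do not use, Lucene's Scorer API:

```python
class ChildScorer:
    """Stand-in for a subscorer: positioned on doc_id; for two-phase
    iterators, two_phase_confirmed records whether the refinement
    (not just the approximation) matched."""
    def __init__(self, doc_id, two_phase_confirmed=True):
        self.doc_id = doc_id
        self.two_phase_confirmed = two_phase_confirmed

class DisjunctionScorer:
    def __init__(self, children):
        self.children = children

    def get_children(self):
        # Existing behavior: every subscorer, matching or not.
        return list(self.children)

    def get_matching_children(self, doc_id):
        # Proposed behavior: only subscorers positioned on doc_id whose
        # two-phase refinement actually matched. A SpanScorer whose
        # approximation matched but whose Spans are in an undefined
        # state is skipped instead of being collected by the highlighter.
        return [c for c in self.children
                if c.doc_id == doc_id and c.two_phase_confirmed]
```

Filtering inside the scorer keeps the hack contained: callers that only want confirmed matches ask for them explicitly, and the existing get_children() contract is untouched.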



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-01-16 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824036#comment-15824036
 ] 

Alan Woodward commented on LUCENE-7628:
---

Ah, I see what you mean.  That would still work, I think, although it would 
probably slow down highlighting batches of documents, as we'd have to create a 
new Scorer tree for every matching doc in the batch whereas now we can reuse 
them.

> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.4
>
> Attachments: LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7637) TermInSetQuery should require that all terms come from the same field

2017-01-16 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7637?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-7637:
-
Attachment: LUCENE-7637.patch

Here is a patch.

> TermInSetQuery should require that all terms come from the same field
> -
>
> Key: LUCENE-7637
> URL: https://issues.apache.org/jira/browse/LUCENE-7637
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Priority: Minor
> Attachments: LUCENE-7637.patch
>
>
> Spin-off from LUCENE-7624. Requiring that all terms are in the same field 
> would make things simpler and more consistent with other queries. It might 
> also make it easier to improve this query in the future, since other similar 
> queries like AutomatonQuery also work on a per-field basis. The only 
> downside is that querying terms across multiple fields would be less 
> efficient, but this does not seem to be a common use case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7637) TermInSetQuery should require that all terms come from the same field

2017-01-16 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-7637:


 Summary: TermInSetQuery should require that all terms come from 
the same field
 Key: LUCENE-7637
 URL: https://issues.apache.org/jira/browse/LUCENE-7637
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Priority: Minor


Spin-off from LUCENE-7624. Requiring that all terms are in the same field would 
make things simpler and more consistent with other queries. It might also make 
it easier to improve this query in the future, since other similar queries like 
AutomatonQuery also work on a per-field basis. The only downside is that 
querying terms across multiple fields would be less efficient, but this does 
not seem to be a common use case.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-01-16 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15824014#comment-15824014
 ] 

Adrien Grand commented on LUCENE-7628:
--

I meant something similar to explain, which takes a docID, e.g. 
{{Collection<ChildScorer> Weight.getMatchingChildren(int docID)}}.

> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.4
>
> Attachments: LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (LUCENE-7623) Add FunctionScoreQuery and FunctionMatchQuery

2017-01-16 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-7623.
---
   Resolution: Fixed
Fix Version/s: 6.5

Thanks for the reviews!

> Add FunctionScoreQuery and FunctionMatchQuery
> -
>
> Key: LUCENE-7623
> URL: https://issues.apache.org/jira/browse/LUCENE-7623
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.5
>
> Attachments: LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch, 
> LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch
>
>
> We should update the various function scoring queries to use the new 
> DoubleValues API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl updated LUCENE-7636:

Description: 
I ran a broken links tool on lucene.apache.org site, found some broken links. 
The scan excluded link checking of Javadoc, JIRA, localhost and 401 links that 
need login to Apache:

Getting links from: http://lucene.apache.org/pylucene/index.html
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES 
(HTTP_404)
Finished! 174 links found. 3 broken.

Getting links from: http://lucene.apache.org/pylucene/
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES 
(HTTP_404)
Finished! 174 links found. 3 broken.

Getting links from: http://lucene.apache.org/core/discussion.html
-├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)- *FIXED*
Finished! 93 links found. 1 broken.

Getting links from: http://lucene.apache.org/core/developer.html
├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
(HTTP_404)
├─BROKEN─ 
https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
 (HTTP_404)
├─BROKEN─ 
https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
 (HTTP_404)
Finished! 73 links found. 3 broken.

Getting links from: http://lucene.apache.org/solr/resources.html
-└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)- *FIXED*
Finished! 188 links found. 8 broken.

Getting links from: http://lucene.apache.org/pylucene/features.html
├─BROKEN─ 
http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction 
(HTTP_404)
Finished! 60 links found. 1 broken.

Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
Finished! 66 links found. 2 broken.



  was:
I ran a broken-links tool on the lucene.apache.org site and found some broken links. 
The scan excluded link checking of Javadoc, JIRA, localhost, and 401 links that 
require an Apache login:

Getting links from: http://lucene.apache.org/pylucene/index.html
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES 
(HTTP_404)
Finished! 174 links found. 3 broken.

Getting links from: http://lucene.apache.org/pylucene/
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES 
(HTTP_404)
Finished! 174 links found. 3 broken.

Getting links from: http://lucene.apache.org/core/discussion.html
├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)
Finished! 93 links found. 1 broken.

Getting links from: http://lucene.apache.org/core/developer.html
├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
(HTTP_404)
├─BROKEN─ 
https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
 (HTTP_404)
├─BROKEN─ 
https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
 (HTTP_404)
Finished! 73 links found. 3 broken.

Getting links from: http://lucene.apache.org/solr/resources.html
└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)
Finished! 188 links found. 8 broken.

Getting links from: http://lucene.apache.org/pylucene/features.html
├─BROKEN─ 
http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction 
(HTTP_404)
Finished! 60 links found. 1 broken.

Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
Finished! 66 links found. 2 broken.




> Fix broken links in lucene.apache.org site
> --
>
> Key: LUCENE-7636
> URL: https://issues.apache.org/jira/browse/LUCENE-7636
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>Priority: Minor
>
> I ran a broken links tool on lucene.apache.org 

[jira] [Commented] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread JIRA

[ 
https://issues.apache.org/jira/browse/LUCENE-7636?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823963#comment-15823963
 ] 

Jan Høydahl commented on LUCENE-7636:
-

Fixed two links

> Fix broken links in lucene.apache.org site
> --
>
> Key: LUCENE-7636
> URL: https://issues.apache.org/jira/browse/LUCENE-7636
> Project: Lucene - Core
>  Issue Type: Task
>  Components: general/website
>Reporter: Jan Høydahl
>Priority: Minor
>
> I ran a broken-links tool on the lucene.apache.org site and found some broken links. 
> The scan excluded link checking of Javadoc, JIRA, localhost, and 401 links 
> that require an Apache login:
> Getting links from: http://lucene.apache.org/pylucene/index.html
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/pylucene/
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
> (HTTP_404)
> ├─BROKEN─ 
> http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES
>  (HTTP_404)
> Finished! 174 links found. 3 broken.
> Getting links from: http://lucene.apache.org/core/discussion.html
> -├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)- *FIXED*
> Finished! 93 links found. 1 broken.
> Getting links from: http://lucene.apache.org/core/developer.html
> ├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
> (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
>  (HTTP_404)
> ├─BROKEN─ 
> https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
>  (HTTP_404)
> Finished! 73 links found. 3 broken.
> Getting links from: http://lucene.apache.org/solr/resources.html
> -└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)- *FIXED*
> Finished! 188 links found. 8 broken.
> Getting links from: http://lucene.apache.org/pylucene/features.html
> ├─BROKEN─ 
> http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction
>  (HTTP_404)
> Finished! 60 links found. 1 broken.
> Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
> ├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
> ├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
> Finished! 66 links found. 2 broken.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-7636) Fix broken links in lucene.apache.org site

2017-01-16 Thread JIRA
Jan Høydahl created LUCENE-7636:
---

 Summary: Fix broken links in lucene.apache.org site
 Key: LUCENE-7636
 URL: https://issues.apache.org/jira/browse/LUCENE-7636
 Project: Lucene - Core
  Issue Type: Task
  Components: general/website
Reporter: Jan Høydahl
Priority: Minor


I ran a broken-links tool on the lucene.apache.org site and found some broken links. 
The scan excluded link checking of Javadoc, JIRA, localhost, and 401 links that 
require an Apache login:

Getting links from: http://lucene.apache.org/pylucene/index.html
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES 
(HTTP_404)
Finished! 174 links found. 3 broken.

Getting links from: http://lucene.apache.org/pylucene/
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_5/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_4/CHANGES 
(HTTP_404)
├─BROKEN─ 
http://svn.apache.org/repos/asf/lucene/pylucene/tags/pylucene_3_0_2/jcc/CHANGES 
(HTTP_404)
Finished! 174 links found. 3 broken.

Getting links from: http://lucene.apache.org/core/discussion.html
├─BROKEN─ http://freenode.net/irc_servers.shtml (HTTP_404)
Finished! 93 links found. 1 broken.

Getting links from: http://lucene.apache.org/core/developer.html
├─BROKEN─ https://builds.apache.org/job/Lucene-Artifacts-trunk/javadoc/ 
(HTTP_404)
├─BROKEN─ 
https://builds.apache.org/job/Lucene-Solr-Clover-trunk/lastSuccessfulBuild/clover-report/
 (HTTP_404)
├─BROKEN─ 
https://builds.apache.org/job/Lucene-Artifacts-trunk/lastSuccessfulBuild/artifact/lucene/dist/
 (HTTP_404)
Finished! 73 links found. 3 broken.

Getting links from: http://lucene.apache.org/solr/resources.html
└─BROKEN─ http://mathieu-nayrolles.com/ (BLC_UNKNOWN)
Finished! 188 links found. 8 broken.

Getting links from: http://lucene.apache.org/pylucene/features.html
├─BROKEN─ 
http://svn.apache.org/viewcvs.cgi/lucene/pylucene/trunk/samples/LuceneInAction 
(HTTP_404)
Finished! 60 links found. 1 broken.

Getting links from: http://lucene.apache.org/pylucene/jcc/features.html
├─BROKEN─ http://docs.python.org/ext/defining-new-types.html (HTTP_404)
├─BROKEN─ http://gcc.gnu.org/onlinedocs/gcj/About-CNI.html (HTTP_404)
Finished! 66 links found. 2 broken.








[JENKINS] Lucene-Solr-Tests-6.x - Build # 683 - Failure

2017-01-16 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-6.x/683/

All tests passed

Build Log:
[...truncated 54427 lines...]
-ecj-javadoc-lint-src:
[mkdir] Created dir: /tmp/ecj153256266
 [ecj-lint] Compiling 453 source files to /tmp/ecj153256266
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
 (at line 26)
 [ecj-lint] import org.apache.lucene.util.AttributeSource.State;
 [ecj-lint]
 [ecj-lint] The import org.apache.lucene.util.AttributeSource.State is never 
used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 2. ERROR in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/NGramTokenFilter.java
 (at line 27)
 [ecj-lint] import org.apache.lucene.util.AttributeSource.State;
 [ecj-lint]
 [ecj-lint] The import org.apache.lucene.util.AttributeSource.State is never 
used
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 3. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymFilterFactory.java
 (at line 137)
 [ecj-lint] TokenStream stream = ignoreCase ? new 
LowerCaseFilter(tokenizer) : tokenizer;
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'stream' is never closed
 [ecj-lint] --
 [ecj-lint] --
 [ecj-lint] 4. WARNING in 
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/analysis/common/src/java/org/apache/lucene/analysis/synonym/SynonymGraphFilterFactory.java
 (at line 134)
 [ecj-lint] TokenStream stream = ignoreCase ? new 
LowerCaseFilter(tokenizer) : tokenizer;
 [ecj-lint] ^^
 [ecj-lint] Resource leak: 'stream' is never closed
 [ecj-lint] --
 [ecj-lint] 4 problems (2 errors, 2 warnings)

BUILD FAILED
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:775: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/build.xml:101: The 
following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/build.xml:204: 
The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:2177:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/analysis/build.xml:142:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/analysis/build.xml:38:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:1992:
 The following error occurred while executing this line:
/x1/jenkins/jenkins-slave/workspace/Lucene-Solr-Tests-6.x/lucene/common-build.xml:2031:
 Compile failed; see the compiler error output for details.

Total time: 71 minutes 35 seconds
Build step 'Invoke Ant' marked build as failure
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any





[JENKINS] Lucene-Solr-6.x-MacOSX (64bit/jdk1.8.0) - Build # 648 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-MacOSX/648/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
PeerSynced node did not become leader expected:http://127.0.0.1:55250/wy/qh/collection1]> but was:http://127.0.0.1:55238/wy/qh/collection1]>

Stack Trace:
java.lang.AssertionError: PeerSynced node did not become leader 
expected:http://127.0.0.1:55250/wy/qh/collection1]> but 
was:http://127.0.0.1:55238/wy/qh/collection1]>
at 
__randomizedtesting.SeedInfo.seed([A66B5E984B97B6B:82328A332A451693]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at 
org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:162)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:992)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:967)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
  

[jira] [Commented] (LUCENE-7398) Nested Span Queries are buggy

2017-01-16 Thread Artem Lukanin (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823865#comment-15823865
 ] 

Artem Lukanin commented on LUCENE-7398:
---

Actually, testNestedOrQuery4 works if I setSlop(3). I forgot to take into 
account one more Span when flattening the binary-nested clauses into a 
3-clause SpanNearQuery.

> Nested Span Queries are buggy
> -
>
> Key: LUCENE-7398
> URL: https://issues.apache.org/jira/browse/LUCENE-7398
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/search
>Affects Versions: 5.5, 6.x
>Reporter: Christoph Goller
>Assignee: Alan Woodward
>Priority: Critical
> Attachments: LUCENE-7398-20160814.patch, LUCENE-7398-20160924.patch, 
> LUCENE-7398-20160925.patch, LUCENE-7398.patch, LUCENE-7398.patch, 
> LUCENE-7398.patch, TestSpanCollection.java
>
>
> Example for a nested SpanQuery that is not working:
> Document: Human Genome Organization , HUGO , is trying to coordinate gene 
> mapping research worldwide.
> Query: spanNear([body:coordinate, spanOr([spanNear([body:gene, body:mapping], 
> 0, true), body:gene]), body:research], 0, true)
> The query should match "coordinate gene mapping research" as well as 
> "coordinate gene research". It does not match  "coordinate gene mapping 
> research" with Lucene 5.5 or 6.1, it did however match with Lucene 4.10.4. It 
> probably stopped working with the changes on SpanQueries in 5.3. I will 
> attach a unit test that shows the problem.
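The intended semantics of that nested query can be illustrated with a toy position-based span matcher (everything below — ToySpans, Span, term/or/near0 — is a made-up sketch, not Lucene code): the inner spanNear(gene, mapping) produces a two-token span, the spanOr unions it with the single-token gene span, and the outer ordered spanNear with slop 0 chains spans end-to-start, so "coordinate gene mapping research" should match.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy span matcher illustrating the intended semantics (not Lucene code).
public class ToySpans {
  // A span is the half-open interval [start, end) over token positions.
  static final class Span {
    final int start, end;
    Span(int s, int e) { start = s; end = e; }
    @Override public String toString() { return "(" + start + "," + end + ")"; }
  }

  // All spans for a single term.
  static List<Span> term(List<String> tokens, String t) {
    List<Span> out = new ArrayList<>();
    for (int i = 0; i < tokens.size(); i++)
      if (tokens.get(i).equals(t)) out.add(new Span(i, i + 1));
    return out;
  }

  // spanOr: the union of the two clauses' spans.
  static List<Span> or(List<Span> a, List<Span> b) {
    List<Span> out = new ArrayList<>(a);
    out.addAll(b);
    return out;
  }

  // Ordered spanNear with slop 0: each clause must start exactly where
  // the previous one ended.
  static List<Span> near0(List<List<Span>> clauses) {
    List<Span> acc = clauses.get(0);
    for (int c = 1; c < clauses.size(); c++) {
      List<Span> next = new ArrayList<>();
      for (Span s : acc)
        for (Span t : clauses.get(c))
          if (t.start == s.end) next.add(new Span(s.start, t.end));
      acc = next;
    }
    return acc;
  }

  public static void main(String[] args) {
    List<String> doc = Arrays.asList("human", "genome", "organization", "hugo",
        "is", "trying", "to", "coordinate", "gene", "mapping", "research", "worldwide");
    List<Span> inner = near0(Arrays.asList(term(doc, "gene"), term(doc, "mapping")));
    List<Span> orSpans = or(inner, term(doc, "gene"));
    List<Span> outer = near0(Arrays.asList(term(doc, "coordinate"), orSpans, term(doc, "research")));
    System.out.println(outer); // prints [(7,11)]: "coordinate gene mapping research"
  }
}
```

Under these semantics the document yields exactly one match, covering positions 7 through 10 — which is what the bug report says Lucene 4.10.4 returned and 5.5/6.1 fail to return.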






[jira] [Commented] (LUCENE-7623) Add FunctionScoreQuery and FunctionMatchQuery

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823828#comment-15823828
 ] 

ASF subversion and git services commented on LUCENE-7623:
-

Commit 85ae5de7032ca4511d598a68961864bcfc75caa2 in lucene-solr's branch 
refs/heads/branch_6x from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=85ae5de ]

LUCENE-7623: Add FunctionMatchQuery and FunctionScoreQuery


> Add FunctionScoreQuery and FunctionMatchQuery
> -
>
> Key: LUCENE-7623
> URL: https://issues.apache.org/jira/browse/LUCENE-7623
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch, 
> LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch
>
>
> We should update the various function scoring queries to use the new 
> DoubleValues API






[jira] [Commented] (LUCENE-7623) Add FunctionScoreQuery and FunctionMatchQuery

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823829#comment-15823829
 ] 

ASF subversion and git services commented on LUCENE-7623:
-

Commit fc2e0fd13324699fe1ddb15bb09960a8501f52f5 in lucene-solr's branch 
refs/heads/master from [~romseygeek]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=fc2e0fd ]

LUCENE-7623: Add FunctionMatchQuery and FunctionScoreQuery


> Add FunctionScoreQuery and FunctionMatchQuery
> -
>
> Key: LUCENE-7623
> URL: https://issues.apache.org/jira/browse/LUCENE-7623
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch, 
> LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch
>
>
> We should update the various function scoring queries to use the new 
> DoubleValues API






[jira] [Commented] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-01-16 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823821#comment-15823821
 ] 

Alan Woodward commented on LUCENE-7628:
---

I'm not sure how moving the introspection API to Weight would work, though, as 
we need to check subscorers when the parent scorer is positioned on a specific 
document.

> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.4
>
> Attachments: LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.
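The recursion the description walks through — call .getChildren(), keep SpanScorers positioned on the current document, recurse into everything else — can be sketched with stand-in classes (FakeScorer and collect() are hypothetical names for illustration, not Lucene API):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal stand-in for a scorer tree (hypothetical, not Lucene's Scorer API).
class FakeScorer {
  final int docID;                 // document this scorer is positioned on
  final boolean isSpanScorer;      // whether this node exposes Spans
  final List<FakeScorer> children;
  FakeScorer(int docID, boolean isSpanScorer, FakeScorer... children) {
    this.docID = docID;
    this.isSpanScorer = isSpanScorer;
    this.children = Arrays.asList(children);
  }
}

public class CollectSpansDemo {
  // Recurse down the scorer tree, keeping only SpanScorers positioned
  // on the current document; non-matching subscorers are skipped.
  static void collect(FakeScorer s, int currentDoc, List<FakeScorer> out) {
    if (s.docID != currentDoc) return;          // subscorer not on this doc
    if (s.isSpanScorer) { out.add(s); return; } // leaf: collect its spans
    for (FakeScorer child : s.children) collect(child, currentDoc, out);
  }

  public static void main(String[] args) {
    FakeScorer span1 = new FakeScorer(5, true);
    FakeScorer span2 = new FakeScorer(7, true); // positioned on another doc
    FakeScorer bool = new FakeScorer(5, false, span1, span2);
    List<FakeScorer> hits = new ArrayList<>();
    collect(bool, 5, hits);
    System.out.println(hits.size()); // prints 1: only span1 matches doc 5
  }
}
```

The failure mode in the issue is exactly the gap in this sketch: a two-phase subscorer under a disjunction can report the right docID while its Spans are undefined, which is why a getMatchingChildren() method that filters to truly matching children is needed.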






Re: lucene-solr:branch_6x: LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads and preserve all attributes [merge branch 'edgepayloads' from Nathan Gass https://github.com/xabbu42/lucen

2017-01-16 Thread Alan Woodward
Oh, I see, it’s because TokenFilter extends AttributeSource, so the import is 
unnecessary.  Will push a fix as part of LUCENE-7623.
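The quirk boils down to Java's scoping rules: a nested type declared in a superclass is inherited into the subclass's scope, so no import is needed to name it there, and ecj flags the explicit import as unused. A toy class pair (hypothetical names, not the real Lucene classes) reproduces it:

```java
// Stand-ins for AttributeSource and TokenFilter (hypothetical names).
class OuterBase {
  static class State { int marker = 42; }
}

class SubFilter extends OuterBase {
  // "State" resolves through inheritance; no import of OuterBase.State
  // is required, which is why ecj reports such an import as never used.
  State state = new State();
}

public class InheritedNestedTypeDemo {
  public static void main(String[] args) {
    System.out.println(new SubFilter().state.marker); // prints 42
  }
}
```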

Alan Woodward
www.flax.co.uk


> On 16 Jan 2017, at 11:26, Alan Woodward wrote:
> 
> This is making precommit fail for me locally:
> 
> -ecj-javadoc-lint-src:
> [mkdir] Created dir: 
> /var/folders/16/hgq2wtys7nv1_x9st6mdpwzhgp/T/ecj662445789
>  [ecj-lint] Compiling 453 source files to 
> /var/folders/16/hgq2wtys7nv1_x9st6mdpwzhgp/T/ecj662445789
>  [ecj-lint] --
>  [ecj-lint] 1. ERROR in 
> /Users/woody/asf/lucene-solr-trunk/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>  (at line 26)
>  [ecj-lint]   import org.apache.lucene.util.AttributeSource.State;
>  [ecj-lint]  
>  [ecj-lint] The import org.apache.lucene.util.AttributeSource.State is never 
> used
>  [ecj-lint] —
> 
> Which is confusing as hell, because the import clearly *is* used.  And 
> removing the import fixes things, even though it shouldn’t then compile.
> 
> Alan Woodward
> www.flax.co.uk 
> 
> 
>> On 16 Jan 2017, at 10:27, uschind...@apache.org wrote:
>> 
>> Repository: lucene-solr
>> Updated Branches:
>>  refs/heads/branch_6x b5b17b23c -> a69c632aa
>> 
>> 
>> LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads and 
>> preserve all attributes
>> [merge branch 'edgepayloads' from Nathan Gass 
>> https://github.com/xabbu42/lucene-solr] 
>> 
>> 
>> Signed-off-by: Uwe Schindler
>> 
>> 
>> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
>> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a69c632a
>> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/a69c632a
>> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/a69c632a
>> Branch: refs/heads/branch_6x
>> Commit: a69c632aa54d064515152145bcbcbe1e869d7061
>> Parents: b5b17b2
>> Author: Uwe Schindler
>> Authored: Mon Jan 16 11:16:43 2017 +0100
>> Committer: Uwe Schindler
>> Committed: Mon Jan 16 11:24:55 2017 +0100
>> 
>> --
>> lucene/CHANGES.txt  |  7 +++
>> .../analysis/ngram/EdgeNGramTokenFilter.java| 17 ++-
>> .../lucene/analysis/ngram/NGramTokenFilter.java | 19 +++-
>> .../lucene/analysis/ngram/TestNGramFilters.java | 47 
>> 4 files changed, 63 insertions(+), 27 deletions(-)
>> --
>> 
>> 
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a69c632a/lucene/CHANGES.txt
>> --
>> diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
>> index 5de3bab..af0ff77 100644
>> --- a/lucene/CHANGES.txt
>> +++ b/lucene/CHANGES.txt
>> @@ -6,6 +6,13 @@ http://s.apache.org/luceneversions
>> === Lucene 6.5.0 ===
>> (No Changes)
>> 
>> +=== Lucene 6.5.0 ===
>> +
>> +Bug Fixes
>> +
>> +* LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads
>> +  and preserve all attributes. (Nathan Gass via Uwe Schindler)
>> +
>> === Lucene 6.4.0 ===
>> 
>> API Changes
>> 
>> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a69c632a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>> --
>> diff --git 
>> a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>>  
>> b/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>> index 827e26f..47b80ff 100644
>> --- 
>> a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>> +++ 
>> b/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>> @@ -22,9 +22,8 @@ import java.io.IOException;
>> import org.apache.lucene.analysis.TokenFilter;
>> import org.apache.lucene.analysis.TokenStream;
>> import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
>> -import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
>> import 

Re: lucene-solr:branch_6x: LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads and preserve all attributes [merge branch 'edgepayloads' from Nathan Gass https://github.com/xabbu42/lucen

2017-01-16 Thread Alan Woodward
This is making precommit fail for me locally:

-ecj-javadoc-lint-src:
[mkdir] Created dir: 
/var/folders/16/hgq2wtys7nv1_x9st6mdpwzhgp/T/ecj662445789
 [ecj-lint] Compiling 453 source files to 
/var/folders/16/hgq2wtys7nv1_x9st6mdpwzhgp/T/ecj662445789
 [ecj-lint] --
 [ecj-lint] 1. ERROR in 
/Users/woody/asf/lucene-solr-trunk/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
 (at line 26)
 [ecj-lint] import org.apache.lucene.util.AttributeSource.State;
 [ecj-lint]
 [ecj-lint] The import org.apache.lucene.util.AttributeSource.State is never 
used
 [ecj-lint] —

Which is confusing as hell, because the import clearly *is* used.  And removing 
the import fixes things, even though it shouldn’t then compile.

Alan Woodward
www.flax.co.uk


> On 16 Jan 2017, at 10:27, uschind...@apache.org wrote:
> 
> Repository: lucene-solr
> Updated Branches:
>  refs/heads/branch_6x b5b17b23c -> a69c632aa
> 
> 
> LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads and 
> preserve all attributes
> [merge branch 'edgepayloads' from Nathan Gass 
> https://github.com/xabbu42/lucene-solr]
> 
> Signed-off-by: Uwe Schindler 
> 
> 
> Project: http://git-wip-us.apache.org/repos/asf/lucene-solr/repo
> Commit: http://git-wip-us.apache.org/repos/asf/lucene-solr/commit/a69c632a
> Tree: http://git-wip-us.apache.org/repos/asf/lucene-solr/tree/a69c632a
> Diff: http://git-wip-us.apache.org/repos/asf/lucene-solr/diff/a69c632a
> 
> Branch: refs/heads/branch_6x
> Commit: a69c632aa54d064515152145bcbcbe1e869d7061
> Parents: b5b17b2
> Author: Uwe Schindler 
> Authored: Mon Jan 16 11:16:43 2017 +0100
> Committer: Uwe Schindler 
> Committed: Mon Jan 16 11:24:55 2017 +0100
> 
> --
> lucene/CHANGES.txt  |  7 +++
> .../analysis/ngram/EdgeNGramTokenFilter.java| 17 ++-
> .../lucene/analysis/ngram/NGramTokenFilter.java | 19 +++-
> .../lucene/analysis/ngram/TestNGramFilters.java | 47 
> 4 files changed, 63 insertions(+), 27 deletions(-)
> --
> 
> 
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a69c632a/lucene/CHANGES.txt
> --
> diff --git a/lucene/CHANGES.txt b/lucene/CHANGES.txt
> index 5de3bab..af0ff77 100644
> --- a/lucene/CHANGES.txt
> +++ b/lucene/CHANGES.txt
> @@ -6,6 +6,13 @@ http://s.apache.org/luceneversions
> === Lucene 6.5.0 ===
> (No Changes)
> 
> +=== Lucene 6.5.0 ===
> +
> +Bug Fixes
> +
> +* LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads
> +  and preserve all attributes. (Nathan Gass via Uwe Schindler)
> +
> === Lucene 6.4.0 ===
> 
> API Changes
> 
> http://git-wip-us.apache.org/repos/asf/lucene-solr/blob/a69c632a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
> --
> diff --git 
> a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
>  
> b/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
> index 827e26f..47b80ff 100644
> --- 
> a/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
> +++ 
> b/lucene/analysis/common/src/java/org/apache/lucene/analysis/ngram/EdgeNGramTokenFilter.java
> @@ -22,9 +22,8 @@ import java.io.IOException;
> import org.apache.lucene.analysis.TokenFilter;
> import org.apache.lucene.analysis.TokenStream;
> import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
> -import org.apache.lucene.analysis.tokenattributes.OffsetAttribute;
> import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;
> -import org.apache.lucene.analysis.tokenattributes.PositionLengthAttribute;
> +import org.apache.lucene.util.AttributeSource.State;
> 
> /**
>  * Tokenizes the given token into n-grams of given size(s).
> @@ -43,15 +42,11 @@ public final class EdgeNGramTokenFilter extends 
> TokenFilter {
>   private int curTermLength;
>   private int curCodePointCount;
>   private int curGramSize;
> -  private int tokStart;
> -  private int tokEnd; // only used if the length changed before this filter
>   private int savePosIncr;
> -  private int savePosLen;
> +  private State state;
> 
>   private final CharTermAttribute termAtt = 
> addAttribute(CharTermAttribute.class);
> -  private final OffsetAttribute offsetAtt = 
> addAttribute(OffsetAttribute.class);
>   private final PositionIncrementAttribute posIncrAtt = 
> addAttribute(PositionIncrementAttribute.class);
> -  private 
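The quoted patch is truncated, but its shape is visible: instead of saving individual attributes (offsets, position length) and copying them onto each emitted gram, the filter now keeps one captured `State` and restores it before emitting each n-gram, so payloads and any other attributes survive. A minimal self-contained sketch of that capture/restore pattern follows; it is plain Java with no Lucene dependency, and `MiniAttributeSource`, `captureState`, and `restoreState` are simplified stand-ins for Lucene's same-named `AttributeSource` APIs, not the real implementation.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for Lucene's AttributeSource: a bag of named attributes
// plus captureState()/restoreState(), mirroring the APIs the fix relies on.
class MiniAttributeSource {
    final Map<String, Object> attrs = new HashMap<>();

    Map<String, Object> captureState() {           // snapshot of ALL attributes
        return new HashMap<>(attrs);
    }

    void restoreState(Map<String, Object> state) { // bring every attribute back
        attrs.clear();
        attrs.putAll(state);
    }
}

public class EdgeNGramStateSketch {
    // Emit edge n-grams of the current token, restoring the full captured
    // state before each gram so only the term text itself changes.
    static StringBuilder emitGrams(MiniAttributeSource src, int maxGram) {
        String term = (String) src.attrs.get("term");
        Map<String, Object> state = src.captureState(); // once per input token
        StringBuilder out = new StringBuilder();
        for (int n = 1; n <= Math.min(maxGram, term.length()); n++) {
            src.restoreState(state);                    // payload & co. survive
            src.attrs.put("term", term.substring(0, n));
            out.append(src.attrs.get("term")).append(':')
               .append(src.attrs.get("payload")).append(' ');
        }
        return out;
    }

    public static void main(String[] args) {
        MiniAttributeSource src = new MiniAttributeSource();
        src.attrs.put("term", "hello");
        src.attrs.put("payload", "NOUN"); // the attribute the old code dropped
        System.out.println(emitGrams(src, 3).toString().trim());
        // prints: h:NOUN he:NOUN hel:NOUN
    }
}
```

The point of the design is that a full-state snapshot is future-proof: any attribute added to the chain later (payloads, flags, custom attributes) is preserved without the filter having to know about it.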

[JENKINS] Lucene-Solr-6.x-Linux (32bit/jdk1.8.0_112) - Build # 2677 - Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-6.x-Linux/2677/
Java: 32bit/jdk1.8.0_112 -client -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  
junit.framework.TestSuite.org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation

Error Message:
2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=25556, name=jetty-launcher-6378-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
   2) Thread[id=25557, name=jetty-launcher-6378-thread-1-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41)
        at org.apache.curator.framework.recipes.shared.SharedValue.readValue(SharedValue.java:244)
        at org.apache.curator.framework.recipes.shared.SharedValue.access$100(SharedValue.java:44)
        at org.apache.curator.framework.recipes.shared.SharedValue$1.process(SharedValue.java:61)
        at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67)
        at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
        at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 2 threads leaked from SUITE scope at org.apache.solr.cloud.TestSolrCloudWithSecureImpersonation:
   1) Thread[id=25556, name=jetty-launcher-6378-thread-2-EventThread, state=TIMED_WAITING, group=TGRP-TestSolrCloudWithSecureImpersonation]
        at sun.misc.Unsafe.park(Native Method)
        at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
        at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:277)
        at org.apache.curator.CuratorZookeeperClient.internalBlockUntilConnectedOrTimedOut(CuratorZookeeperClient.java:323)
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:105)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:288)
        at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279)
        at

[jira] [Resolved] (LUCENE-7630) EdgeNGramTokenFilter drops payloads

2017-01-16 Thread Uwe Schindler (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Uwe Schindler resolved LUCENE-7630.
---
Resolution: Fixed

Thanks Nathan!

> EdgeNGramTokenFilter drops payloads
> ---
>
> Key: LUCENE-7630
> URL: https://issues.apache.org/jira/browse/LUCENE-7630
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (7.0)
>Reporter: Nathan Gass
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: master (7.0), 6.5
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Using an EdgeNGramTokenFilter after a DelimitedPayloadTokenFilter discards 
> the payloads, whereas most other filters copy the payload to the new tokens.
> I added a test for this issue and a possible fix at 
> https://github.com/xabbu42/lucene-solr/tree/edgepayloads
> Greetings
> Nathan Gass



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-master-MacOSX (64bit/jdk1.8.0) - Build # 3782 - Still Unstable!

2017-01-16 Thread Policeman Jenkins Server
Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-MacOSX/3782/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.PeerSyncReplicationTest.test

Error Message:
timeout waiting to see all nodes active

Stack Trace:
java.lang.AssertionError: timeout waiting to see all nodes active
        at __randomizedtesting.SeedInfo.seed([3A83B26FB49563E2:B2D78DB51A690E1A]:0)
        at org.junit.Assert.fail(Assert.java:93)
        at org.apache.solr.cloud.PeerSyncReplicationTest.waitTillNodesActive(PeerSyncReplicationTest.java:326)
        at org.apache.solr.cloud.PeerSyncReplicationTest.bringUpDeadNodeAndEnsureNoReplication(PeerSyncReplicationTest.java:277)
        at org.apache.solr.cloud.PeerSyncReplicationTest.forceNodeFailureAndDoPeerSync(PeerSyncReplicationTest.java:259)
        at org.apache.solr.cloud.PeerSyncReplicationTest.test(PeerSyncReplicationTest.java:138)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1713)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:907)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:943)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:957)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:985)
        at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:960)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:367)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:811)
        at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:462)
        at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:916)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:802)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:852)
        at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
        at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
        at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
        at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
        at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
        at

[jira] [Commented] (LUCENE-7630) EdgeNGramTokenFilter drops payloads

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823735#comment-15823735
 ] 

ASF subversion and git services commented on LUCENE-7630:
-

Commit a69c632aa54d064515152145bcbcbe1e869d7061 in lucene-solr's branch 
refs/heads/branch_6x from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=a69c632 ]

LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads and preserve 
all attributes
[merge branch 'edgepayloads' from Nathan Gass 
https://github.com/xabbu42/lucene-solr]

Signed-off-by: Uwe Schindler 


> EdgeNGramTokenFilter drops payloads
> ---
>
> Key: LUCENE-7630
> URL: https://issues.apache.org/jira/browse/LUCENE-7630
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (7.0)
>Reporter: Nathan Gass
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: master (7.0), 6.5
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Using an EdgeNGramTokenFilter after a DelimitedPayloadTokenFilter discards 
> the payloads, whereas most other filters copy the payload to the new tokens.
> I added a test for this issue and a possible fix at 
> https://github.com/xabbu42/lucene-solr/tree/edgepayloads
> Greetings
> Nathan Gass



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7628) Add a getMatchingChildren() method to DisjunctionScorer

2017-01-16 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823736#comment-15823736
 ] 

Adrien Grand commented on LUCENE-7628:
--

I think that would work, but Collector is another API where I'd like to be 
careful about adding new methods. I think {{needsScores}} was very compelling 
because it enabled significant optimizations as well as merging queries and 
filters. I think I would need more compelling use-cases to be convinced about 
adding such a new API on Collector. Out of curiosity, would it work for your 
use-case if the introspection API was on Weight rather than Scorer? That would 
work better for me since Weight is not exposed in Collector like Scorer and 
does not have the same performance requirements.

> Add a getMatchingChildren() method to DisjunctionScorer
> ---
>
> Key: LUCENE-7628
> URL: https://issues.apache.org/jira/browse/LUCENE-7628
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 6.4
>
> Attachments: LUCENE-7628.patch
>
>
> This one is a bit convoluted, so bear with me...
> The luwak highlighter works by rewriting queries into their Span-equivalents, 
> and then running them with a special Collector.  At each matching doc, the 
> highlighter gathers all the Spans objects positioned on the current doc and 
> collects their positions using the SpanCollection API.
> Some queries can't be translated into Spans.  For those queries that generate 
> Scorers with ChildScorers, like BooleanQuery, we can call .getChildren() on 
> the Scorer and see if any of them are SpanScorers, and for those that aren't 
> we can call .getChildren() again and recurse down.  For each child scorer, we 
> check that it's positioned on the current document, so non-matching 
> subscorers can be skipped.
> This all works correctly *except* in the case of a DisjunctionScorer where 
> one of the children is a two-phase iterator that has matched its 
> approximation, but not its refinement query.  A SpanScorer in this situation 
> will be correctly positioned on the current document, but its Spans will be 
> in an undefined state, meaning the highlighter will either collect incorrect 
> hits, or it will throw an Exception and prevent hits being collected from 
> other subspans.
> We've tried various ways around this (including forking SpanNearQuery and 
> adding a bunch of slow position checks to it that are used only by the 
> highlighting code), but it turns out that the simplest fix is to add a new 
> method to DisjunctionScorer that only returns the currently matching child 
> Scorers.  It's a bit of a hack, and it won't be used anywhere else, but it's 
> a fairly small and contained hack.
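The recursion described in the issue, walking `getChildren()` and skipping subscorers not positioned on the current document, can be sketched without Lucene as follows. `MiniScorer`, `Leaf`, and `Disjunction` are simplified stand-ins invented here for illustration (the real `Scorer.getChildren()` returns `ChildScorer` wrappers, not scorers directly). The `docID()` check is exactly the test that misfires for a two-phase disjunction child whose approximation matched but whose refinement did not, which is what motivates the proposed `getMatchingChildren()`.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for Lucene's Scorer tree.
interface MiniScorer {
    int docID();                    // document this scorer is positioned on
    List<MiniScorer> getChildren(); // empty for leaf scorers
}

class Leaf implements MiniScorer {
    final int doc; final String name;
    Leaf(int doc, String name) { this.doc = doc; this.name = name; }
    public int docID() { return doc; }
    public List<MiniScorer> getChildren() { return new ArrayList<>(); }
    public String toString() { return name; }
}

class Disjunction implements MiniScorer {
    final int doc; final List<MiniScorer> children;
    Disjunction(int doc, MiniScorer... kids) { this.doc = doc; this.children = Arrays.asList(kids); }
    public int docID() { return doc; }
    public List<MiniScorer> getChildren() { return children; }
}

public class MatchingLeaves {
    // Recurse through the scorer tree, keeping only leaves positioned on doc.
    static void collect(MiniScorer s, int doc, List<MiniScorer> out) {
        if (s.docID() != doc) return;          // skip non-matching subscorers
        if (s.getChildren().isEmpty()) out.add(s);
        else for (MiniScorer child : s.getChildren()) collect(child, doc, out);
    }

    public static void main(String[] args) {
        MiniScorer tree = new Disjunction(5,
                new Leaf(5, "spanNear(a,b)"),  // positioned on doc 5
                new Leaf(7, "term(c)"));       // already past doc 5
        List<MiniScorer> matching = new ArrayList<>();
        collect(tree, 5, matching);
        System.out.println(matching);          // prints: [spanNear(a,b)]
    }
}
```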



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7623) Add FunctionScoreQuery and FunctionMatchQuery

2017-01-16 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823730#comment-15823730
 ] 

Adrien Grand commented on LUCENE-7623:
--

+1

> Add FunctionScoreQuery and FunctionMatchQuery
> -
>
> Key: LUCENE-7623
> URL: https://issues.apache.org/jira/browse/LUCENE-7623
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch, 
> LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch
>
>
> We should update the various function scoring queries to use the new 
> DoubleValues API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7630) EdgeNGramTokenFilter drops payloads

2017-01-16 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823722#comment-15823722
 ] 

ASF GitHub Bot commented on LUCENE-7630:


Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/138


> EdgeNGramTokenFilter drops payloads
> ---
>
> Key: LUCENE-7630
> URL: https://issues.apache.org/jira/browse/LUCENE-7630
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (7.0)
>Reporter: Nathan Gass
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: master (7.0), 6.5
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Using an EdgeNGramTokenFilter after a DelimitedPayloadTokenFilter discards 
> the payloads, whereas most other filters copy the payload to the new tokens.
> I added a test for this issue and a possible fix at 
> https://github.com/xabbu42/lucene-solr/tree/edgepayloads
> Greetings
> Nathan Gass



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[GitHub] lucene-solr pull request #138: EdgeNGramTokenFilter drops payloads

2017-01-16 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/138


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-7630) EdgeNGramTokenFilter drops payloads

2017-01-16 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-7630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15823721#comment-15823721
 ] 

ASF subversion and git services commented on LUCENE-7630:
-

Commit c64a01158e972176256e257d6c1d4629b05783a2 in lucene-solr's branch 
refs/heads/master from [~thetaphi]
[ https://git-wip-us.apache.org/repos/asf?p=lucene-solr.git;h=c64a011 ]

LUCENE-7630: Fix (Edge)NGramTokenFilter to no longer drop payloads and preserve 
all attributes
[merge branch 'edgepayloads' from Nathan Gass 
https://github.com/xabbu42/lucene-solr]

Signed-off-by: Uwe Schindler 


> EdgeNGramTokenFilter drops payloads
> ---
>
> Key: LUCENE-7630
> URL: https://issues.apache.org/jira/browse/LUCENE-7630
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: modules/analysis
>Affects Versions: master (7.0)
>Reporter: Nathan Gass
>Assignee: Uwe Schindler
>Priority: Minor
> Fix For: master (7.0), 6.5
>
>   Original Estimate: 48h
>  Remaining Estimate: 48h
>
> Using an EdgeNGramTokenFilter after a DelimitedPayloadTokenFilter discards 
> the payloads, whereas most other filters copy the payload to the new tokens.
> I added a test for this issue and a possible fix at 
> https://github.com/xabbu42/lucene-solr/tree/edgepayloads
> Greetings
> Nathan Gass



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-7623) Add FunctionScoreQuery and FunctionMatchQuery

2017-01-16 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-7623?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-7623:
--
Attachment: LUCENE-7623.patch

Final patch with Adrien's tweaks, will commit shortly (pending precommit)

> Add FunctionScoreQuery and FunctionMatchQuery
> -
>
> Key: LUCENE-7623
> URL: https://issues.apache.org/jira/browse/LUCENE-7623
> Project: Lucene - Core
>  Issue Type: New Feature
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Attachments: LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch, 
> LUCENE-7623.patch, LUCENE-7623.patch, LUCENE-7623.patch
>
>
> We should update the various function scoring queries to use the new 
> DoubleValues API



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


