[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 658 - Still Failing

2014-10-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/658/

3 tests failed.
REGRESSION:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.testDistribSearch

Error Message:
There were too many update fails - we expect it can happen, but shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails - we expect it can happen, but shouldn't easily
at __randomizedtesting.SeedInfo.seed([7623E1151B611AA4:F7C56F0D6C3E7A98]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.doTest(ChaosMonkeyNothingIsSafeTest.java:223)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$S

[jira] [Commented] (SOLR-3191) field exclusion from fl

2014-10-24 Thread Andrea Gazzarini (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183984#comment-14183984 ]

Andrea Gazzarini commented on SOLR-3191:


Hi Jan,
if you are referring to my long comment above, the last two points are 
questions about the granularity of the unit tests, so nothing important from a 
functional point of view.



> field exclusion from fl
> ---
>
> Key: SOLR-3191
> URL: https://issues.apache.org/jira/browse/SOLR-3191
> Project: Solr
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
> Attachments: SOLR-3191.patch, SOLR-3191.patch, SOLR-3191.patch
>
>
> I think it would be useful to add a way to exclude fields from the Solr 
> response. If I have, for example, 100 stored fields and I want to return all 
> of them but one, it would be handy to list just the field I want to exclude 
> instead of the 99 fields for inclusion through fl.
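A hypothetical request illustrating the idea (the `-field` exclusion prefix shown here is an assumption for illustration only; the actual syntax is whatever the attached patches define):

```
# Hypothetical: return every stored field except one, rather than
# enumerating 99 fields for inclusion through fl.
http://localhost:8983/solr/collection1/select?q=*:*&fl=*,-huge_stored_field
```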



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.8.0) - Build # 1862 - Failure!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/1862/
Java: 64bit/jdk1.8.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.client.solrj.embedded.MultiCoreExampleJettyTest.testMultiCore

Error Message:
IOException occured when talking to server at: https://127.0.0.1:54849/example/core1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when talking to server at: https://127.0.0.1:54849/example/core1
at __randomizedtesting.SeedInfo.seed([602E949A2B8719B2:E4068F69E4541487]:0)
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:584)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.solr.client.solrj.MultiCoreExampleTestBase.testMultiCore(MultiCoreExampleTestBase.java:164)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRu

[JENKINS] Lucene-Solr-trunk-MacOSX (64bit/jdk1.8.0) - Build # 1904 - Failure!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-MacOSX/1904/
Java: 64bit/jdk1.8.0 -XX:-UseCompressedOops -XX:+UseParallelGC

1 tests failed.
REGRESSION:  org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings

Error Message:
startOffset 931 expected:<7047> but was:<7048>

Stack Trace:
java.lang.AssertionError: startOffset 931 expected:<7047> but was:<7048>
at __randomizedtesting.SeedInfo.seed([4B4F06D46C9C71F1:C3C6066ACF9826C4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:182)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:295)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:299)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:815)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:614)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:512)
at org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:436)
at org.apache.lucene.analysis.icu.TestICUNormalizer2CharFilter.testRandomStrings(TestICUNormalizer2CharFilter.java:189)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at org.apache.lucene.util.TestRuleMarkFailu

[JENKINS] Lucene-Solr-4.10-Linux (64bit/jdk1.7.0_67) - Build # 18 - Still Failing!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-4.10-Linux/18/
Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 61900 lines...]
BUILD FAILED
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.10-Linux/build.xml:474: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.10-Linux/build.xml:413: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.10-Linux/extra-targets.xml:87: The following error occurred while executing this line:
/mnt/ssd/jenkins/workspace/Lucene-Solr-4.10-Linux/extra-targets.xml:192: Source checkout is dirty after running tests!!! Offending files:
* ./solr/licenses/kite-morphlines-hadoop-sequencefile-0.12.1.jar.sha1

Total time: 116 minutes 15 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_67 -XX:-UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_20) - Build # 4288 - Still Failing!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4288/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseConcMarkSweepGC

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteReplicaTest.testDistribSearch

Error Message:
No live SolrServers available to handle this request:[http://127.0.0.1:62251/_, http://127.0.0.1:62285/_, http://127.0.0.1:62275/_, http://127.0.0.1:62266/_, http://127.0.0.1:62294/_]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available to handle this request:[http://127.0.0.1:62251/_, http://127.0.0.1:62285/_, http://127.0.0.1:62275/_, http://127.0.0.1:62266/_, http://127.0.0.1:62294/_]
at __randomizedtesting.SeedInfo.seed([98E60CDD65790F71:190082C512266F4D]:0)
at org.apache.solr.client.solrj.impl.LBHttpSolrServer.request(LBHttpSolrServer.java:333)
at org.apache.solr.client.solrj.impl.CloudSolrServer.sendRequest(CloudSolrServer.java:1015)
at org.apache.solr.client.solrj.impl.CloudSolrServer.requestWithRetryOnStaleState(CloudSolrServer.java:793)
at org.apache.solr.client.solrj.impl.CloudSolrServer.request(CloudSolrServer.java:736)
at org.apache.solr.cloud.DeleteReplicaTest.removeAndWaitForReplicaGone(DeleteReplicaTest.java:172)
at org.apache.solr.cloud.DeleteReplicaTest.deleteLiveReplicaTest(DeleteReplicaTest.java:145)
at org.apache.solr.cloud.DeleteReplicaTest.doTest(DeleteReplicaTest.java:89)
at org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapte

[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Jessica Cheng Mallet (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183804#comment-14183804 ]

Jessica Cheng Mallet commented on SOLR-6650:


Updated the PR so this is disabled by default.

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.
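As a sketch of how such a threshold might be configured (the parameter name and its location in solrconfig.xml are assumptions based on this issue's description; check the committed patch for the actual setting):

```xml
<!-- solrconfig.xml: hypothetical slow-request threshold. Requests taking
     longer than this many milliseconds would be logged at WARN even when
     INFO-level request logging is turned off. -->
<query>
  <slowQueryThresholdMillis>1000</slowQueryThresholdMillis>
</query>
```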






[jira] [Updated] (SOLR-6351) Let Stats Hang off of Pivots (via 'tag')

2014-10-24 Thread Hoss Man (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hoss Man updated SOLR-6351:
---
Attachment: SOLR-6351.patch

Quick question for [~vzhovtiuk]: in an earlier comment, you mentioned creating 
a new "FacetPivotSmallTest.java" test based on my earlier outline of how to 
move forward, but in later patches that test wasn't included.  Was there a 
reason for removing it, or was that just an oversight when generating the 
later patches?  Can we resurrect that test from one of your earlier patches?



Been focusing on TestCloudPivotFacet.  One particularly large change to note...

After doing a bit of refactoring of some stuff I've mentioned in previous 
comments, I realized that buildRandomPivotStatsFields & buildStatsTagString 
didn't really make a lot of sense -- notably due to some leftover cut/paste 
comment cruft.  Digging in a bit, I realized that the way buildStatsTagString 
was being called, we were only ever using the first stats tag with the first 
facet.pivot -- and obliterating the second pivot from the params if there was 
one.

So I ripped those methods out and revamped the way the random stats.field 
params were generated, and how the tags were associated with the pivots, using 
some slightly different helper methods.

{panel:title=changes since last patch}
* TestCloudPivotFacet
** moved stats & stats.field params from pivotP to baseP
*** this simplified a lot of request logic in assertPivotStats
** buildRandomPivotStatsFields & buildStatsTagString
*** replaced with simpler logic via pickRandomStatsFields & buildPivotParamValue
**** this uncovered an NPE in StatsInfo.getStatsFieldsByTag (see below)
*** fixed pickRandomStatsFields to include strings (not sure why they were 
excluded - string stats work fine in StatsComponent)
** assertPivotStats
*** switched from one verification query per stats field to a single 
verification query that loops over each stat
*** shortened some variable names & simplified assert msgs
*** added some hack-ish sanity checks on which stats were found for each pivot
** assertPivotData
*** wrapped up both assertNumFound & assertPivotStats so that a single query is 
executed and then each of those methods validates the data in the response that 
they care about
** added "assertDoubles()" and "sanityCheckAssertDoubles()"
*** hammering on the test led to a situation where stddevs were off due to 
double rounding because of the order in which the sumOfSquares were 
accumulated from each shard (expected:<2.3005390038169265E9> but 
was:<2.300539003816927E9>)
*** so I added a helper method to compare these types of stats with a "small" 
epsilon relative to the size of the expected value, and a simple sanity 
checker to test-the-test.

* QueryResponse
** refactored & tightened up the pivot case statement a bit to assert on 
unexpected keys or value types

* StatsComponent
** fixed NPE in StatsInfo.getStatsFieldsByTag - if someone asks for 
"{{facet.pivot=\{!stats=bogus\}foo}}" (where 'bogus' is not a valid tag on a 
stats.field), that was causing an NPE; it should be ignored, just like ex=bogus
{panel}
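The relative-epsilon idea behind assertDoubles() can be sketched as follows. This is a stand-alone illustration, not the actual helper from the patch; the class name, method name, and epsilon value are made up for the example:

```java
// Stand-alone sketch of comparing floating-point stats whose accumulation
// order differs across shards (e.g. a stddev built from per-shard
// sumOfSquares values). Not the actual assertDoubles() from the patch.
public class RelativeEpsilon {

    /** True when actual is within epsilon scaled by the values' magnitude. */
    public static boolean closeEnough(double expected, double actual, double epsilon) {
        if (expected == actual) {
            return true; // also covers both values being exactly 0.0
        }
        double scale = Math.max(Math.abs(expected), Math.abs(actual));
        return Math.abs(expected - actual) <= epsilon * scale;
    }

    public static void main(String[] args) {
        // The two stddev-related values quoted in the comment above differ
        // only in the last couple of ulps, so they compare as equal enough.
        System.out.println(closeEnough(2.3005390038169265E9, 2.300539003816927E9, 1e-12));
        // A genuinely different value still fails the comparison.
        System.out.println(closeEnough(2.3005390038169265E9, 2.4E9, 1e-12));
    }
}
```

Scaling the epsilon by the magnitude of the compared values is what makes the tolerance "small relative to the size of the expected value" rather than a fixed absolute bound.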


> Let Stats Hang off of Pivots (via 'tag')
> 
>
> Key: SOLR-6351
> URL: https://issues.apache.org/jira/browse/SOLR-6351
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Hoss Man
> Attachments: SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, 
> SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch, SOLR-6351.patch
>
>
> The goal here is basically to flip the notion of "stats.facet" on its head, so 
> that instead of asking the stats component to also do some faceting 
> (something that's never worked well with the variety of field types and has 
> never worked in distributed mode) we instead ask the PivotFacet code to 
> compute some stats X for each leaf in a pivot.  We'll do this with the 
> existing {{stats.field}} params, but we'll leverage the {{tag}} local param 
> of the {{stats.field}} instances to be able to associate which stats we want 
> hanging off of which {{facet.pivot}}.
> Example...
> {noformat}
> facet.pivot={!stats=s1}category,manufacturer
> stats.field={!key=avg_price tag=s1 mean=true}price
> stats.field={!tag=s1 min=true max=true}user_rating
> {noformat}
> ...with the request above, in addition to computing the min/max user_rating 
> and mean price (labeled "avg_price") over the entire result set, the 
> PivotFacet component will also include those stats for every node of the tree 
> it builds up when generating a pivot of the fields "category,manufacturer"




[jira] [Commented] (LUCENE-6023) Remove "final" modifier from four methods of TFIDFSimilarity class to make them overridable.

2014-10-24 Thread Robert Muir (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183764#comment-14183764 ]

Robert Muir commented on LUCENE-6023:
-

TF/IDF similarity exposes its own API, which is meant for extension. That's why 
the low-level methods are final; otherwise the API would be inconsistent and 
unsafe.

If you want to change how norms are encoded and so on, that is really expert 
territory: extend Similarity directly.
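The design argument here is essentially the template-method pattern: the framework-facing method stays final so every subclass is invoked through one consistent code path, and extension happens only at designated hooks. A minimal stand-alone sketch (illustration only; the class and method names are invented for the example, this is not Lucene's actual Similarity/TFIDFSimilarity API):

```java
// Minimal template-method sketch: score() is final so callers get the same
// control flow from every subclass; weight() is the designated extension
// point. Illustration only -- not Lucene's actual Similarity classes.
abstract class ScoreTemplate {

    /** Final framework method: the contract callers rely on never changes. */
    public final double score(int freq, double norm) {
        return weight(freq) * norm;
    }

    /** The intended hook for subclasses to customize. */
    protected abstract double weight(int freq);
}

class SqrtScore extends ScoreTemplate {
    @Override
    protected double weight(int freq) {
        return Math.sqrt(freq);
    }
}
```

Making score() overridable too would let a subclass bypass the norm entirely, which is the kind of inconsistency the final modifier guards against.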

> Remove "final" modifier from four methods of TFIDFSimilarity class to make 
> them overridable.
> 
>
> Key: LUCENE-6023
> URL: https://issues.apache.org/jira/browse/LUCENE-6023
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.2.1
>Reporter: Hafiz M Hamid
>  Labels: similarity
> Fix For: 4.2.1
>
>
> TFIDFSimilarity has the following four of its public methods marked "final", 
> which keeps us from overriding them. There doesn't seem to be an obvious 
> reason for keeping these methods non-overridable.
> Here are the four methods:
> computeNorm()
> computeWeight()
> exactSimScorer()
> sloppySimScorer()






[jira] [Commented] (LUCENE-6023) Remove "final" modifier from four methods of TFIDFSimilarity class to make them overridable.

2014-10-24 Thread Hafiz M Hamid (JIRA)

[ https://issues.apache.org/jira/browse/LUCENE-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183754#comment-14183754 ]

Hafiz M Hamid commented on LUCENE-6023:
---

That would require copy/pasting code from TFIDFSimilarity into our new 
Similarity implementation, which we want to avoid because it would make it 
harder to upgrade in the future. The majority of the 
DefaultSimilarity/TFIDFSimilarity functionality is still useful to us; we only 
want to override the computation of a single component (i.e., fieldNorm) of 
the existing tf-idf scoring formula. Changing it in the original code would 
also let others benefit from it without posing any risk.

I'm curious why we even have "final" modifiers on these methods. Unless it 
hurts the design or function of the class, there shouldn't be any harm in 
letting people extend/override them.


> Remove "final" modifier from four methods of TFIDFSimilarity class to make 
> them overridable.
> 
>
> Key: LUCENE-6023
> URL: https://issues.apache.org/jira/browse/LUCENE-6023
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.2.1
>Reporter: Hafiz M Hamid
>  Labels: similarity
> Fix For: 4.2.1
>
>
> TFIDFSimilarity has the following four of its public methods marked "final", 
> which keeps us from overriding them. There doesn't seem to be an obvious 
> reason for keeping these methods non-overridable.
> Here are the four methods:
> computeNorm()
> computeWeight()
> exactSimScorer()
> sloppySimScorer()






[jira] [Assigned] (SOLR-6077) Create 5 minute tutorial

2014-10-24 Thread Erik Hatcher (JIRA)

[ https://issues.apache.org/jira/browse/SOLR-6077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Erik Hatcher reassigned SOLR-6077:
--

Assignee: Erik Hatcher

> Create 5 minute tutorial
> 
>
> Key: SOLR-6077
> URL: https://issues.apache.org/jira/browse/SOLR-6077
> Project: Solr
>  Issue Type: Sub-task
>Reporter: Grant Ingersoll
>Assignee: Erik Hatcher
> Attachments: 5minTutorial-v01.markdown
>
>
> Per the new site design for Solr, we'd like to have a "5 minutes to Solr" 
> tutorial that covers users getting their data in and querying it.






[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183669#comment-14183669
 ] 

Steve Rowe commented on SOLR-6058:
--

bq. the URL is appended with a hash mark, slash, then the fragment id

This also happens on the front page, e.g. the down arrow under "Getting 
Started" rewrites to {{solr/#/anchor-2}} instead of {{solr/#anchor-2}}.

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
> Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
> Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
> Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Comment Edited] (SOLR-6058) Solr needs a new website

2014-10-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183633#comment-14183633
 ] 

Steve Rowe edited comment on SOLR-6058 at 10/24/14 10:22 PM:
-

[~FranLukesh], a couple questions/problems:

# From the Solr front page ({{solr/index.html}}), under the book cover images, 
the "Learn More" button links to {{solr/books.html}}, which is currently empty. 
 
#- The {{solr/resources.html}} page has a books section - should the button in 
question link there instead? 
#- Should there also be a separate books page sharing the same content?
# There's a missing logo file: {{solr/assets/images/logo-eharmony.png}}, linked 
from {{templates/solr-index.html}}.
# On the {{solr/resources.html}} page, the links along the top (Tutorial, 
Release, Documentation, Books, Links, Screenshots) are supposed to go to 
sections on the same page, e.g. Tutorial->{{#tutorial}}, but instead when you 
click on one of them, the URL is appended with a hash mark, *slash*, then the 
fragment id, e.g. {{#/tutorial}}, and then the browser doesn't go where it's 
supposed to.  I don't see anything in the generated HTML that would cause this 
- the fragment URLs in the links on the generated HTML don't include the slash. 
 I've seen this on lukesh.com, as well as a locally built version of the site, 
and both via Safari on OS X and Firefox on Windows 7.


was (Author: steve_rowe):
[~FranLukesh], a couple questions/problems:

# From the Solr front page ({{solr/index.html}}), under the book cover images, 
the "Learn More" button links to {{solr/books.html}}, which is currently empty. 
 
#- The {{solr/resources.html}} page has a books section - should the button in 
question link go there instead? 
#- Should there also be a separate books page sharing the same content?
# There's a missing logo file: {{solr/assets/images/logo-eharmony.png}}, linked 
from {{templates/solr-index.html}}.
# On the {{solr/resources.html}} page, the links along the top (Tutorial, 
Release, Documentation, Books, Links, Screenshots) are supposed to go to 
sections on the same page, e.g. Tutorial->{{#tutorial}}, but instead when you 
click on one of them, the URL is appended with a hash mark, *slash*, then the 
fragment id, e.g. {{#/tutorial}}, and then the browser doesn't go where it's 
supposed to.  I don't see anything in the generated HTML that would cause this 
- the fragment URLs in the links on the generated HTML don't include the slash. 
 I've seen this on lukesh.com, as well as a locally built version of the site, 
and both via Safari on OS X and Firefox on Windows 7.

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
> Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
> Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
> Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Commented] (SOLR-6058) Solr needs a new website

2014-10-24 Thread Steve Rowe (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6058?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183633#comment-14183633
 ] 

Steve Rowe commented on SOLR-6058:
--

[~FranLukesh], a couple questions/problems:

# From the Solr front page ({{solr/index.html}}), under the book cover images, 
the "Learn More" button links to {{solr/books.html}}, which is currently empty. 
 
#- The {{solr/resources.html}} page has a books section - should the button in 
question link go there instead? 
#- Should there also be a separate books page sharing the same content?
# There's a missing logo file: {{solr/assets/images/logo-eharmony.png}}, linked 
from {{templates/solr-index.html}}.
# On the {{solr/resources.html}} page, the links along the top (Tutorial, 
Release, Documentation, Books, Links, Screenshots) are supposed to go to 
sections on the same page, e.g. Tutorial->{{#tutorial}}, but instead when you 
click on one of them, the URL is appended with a hash mark, *slash*, then the 
fragment id, e.g. {{#/tutorial}}, and then the browser doesn't go where it's 
supposed to.  I don't see anything in the generated HTML that would cause this 
- the fragment URLs in the links on the generated HTML don't include the slash. 
 I've seen this on lukesh.com, as well as a locally built version of the site, 
and both via Safari on OS X and Firefox on Windows 7.

> Solr needs a new website
> 
>
> Key: SOLR-6058
> URL: https://issues.apache.org/jira/browse/SOLR-6058
> Project: Solr
>  Issue Type: Task
>Reporter: Grant Ingersoll
>Assignee: Grant Ingersoll
> Attachments: HTML.rar, SOLR-6058, Solr_Icons.pdf, 
> Solr_Logo_on_black.pdf, Solr_Logo_on_black.png, Solr_Logo_on_orange.pdf, 
> Solr_Logo_on_orange.png, Solr_Logo_on_white.pdf, Solr_Logo_on_white.png, 
> Solr_Styleguide.pdf
>
>
> Solr needs a new website:  better organization of content, less verbose, more 
> pleasing graphics, etc.






[jira] [Updated] (LUCENE-6023) Remove "final" modifier from four methods of TFIDFSimilarity class to make them overridable.

2014-10-24 Thread Hafiz M Hamid (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hafiz M Hamid updated LUCENE-6023:
--
Labels: similarity  (was: )

> Remove "final" modifier from four methods of TFIDFSimilarity class to make 
> them overridable.
> 
>
> Key: LUCENE-6023
> URL: https://issues.apache.org/jira/browse/LUCENE-6023
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.2.1
>Reporter: Hafiz M Hamid
>  Labels: similarity
> Fix For: 4.2.1
>
>
> TFIDFSimilarity marks four of its public methods "final", which keeps us 
> from overriding them. There doesn't appear to be an obvious reason for 
> keeping these methods non-overridable.
> The four methods are:
> computeNorm()
> computeWeight()
> exactSimScorer()
> sloppySimScorer()






[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_20) - Build # 4389 - Still Failing!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4389/
Java: 64bit/jdk1.8.0_20 -XX:+UseCompressedOops -XX:+UseSerialGC

All tests passed

Build Log:
[...truncated 14557 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:525: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:473: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:61: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:39: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:209: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:440:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:496:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\dataimporthandler-extras\build.xml:50:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\contrib-build.xml:52:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 166 minutes 26 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.8.0_20 
-XX:+UseCompressedOops -XX:+UseSerialGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any




[jira] [Created] (SOLR-6653) bin/solr start script should return error code >0 when something fails

2014-10-24 Thread JIRA
Jan Høydahl created SOLR-6653:
-

 Summary: bin/solr start script should return error code >0 when 
something fails
 Key: SOLR-6653
 URL: https://issues.apache.org/jira/browse/SOLR-6653
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 4.10.1
Reporter: Jan Høydahl


In order to be able to include {{bin/solr}} in scripts, it should be possible 
to test the return value for success or failure. Examples:

{noformat}
jan:solr janhoy$ bin/solr start
Waiting to see Solr listening on port 8983 [/]  
Started Solr server on port 8983 (pid=47354). Happy searching!

jan:solr janhoy$ echo $?
0
jan:solr janhoy$ bin/solr start

Solr already running on port 8983 (pid: 47354)!
Please use the 'restart' command if you want to restart this node.

jan:solr janhoy$ echo $?
0
{noformat}

The last command should return status 1.

{noformat}
jan:solr janhoy$ bin/solr stop -p 1234
No process found for Solr node running on port 1234
jan:solr janhoy$ echo $?
0
{noformat}

Same here. Probably other places too.
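To make the point concrete, here is a small POSIX-shell sketch; the function below is a hypothetical stand-in for {{bin/solr}}, not its actual code. It shows why distinct exit codes matter to calling scripts:

```shell
#!/bin/sh
# Hypothetical stand-in for bin/solr: return 1 when the requested action
# cannot be performed, so wrapper scripts can branch on $?.
solr_start() {
  if [ "$1" = "already-running" ]; then
    echo "Solr already running on port 8983" >&2
    return 1
  fi
  echo "Started Solr server on port 8983"
}

if solr_start fresh; then
  echo "start ok"
fi
if ! solr_start already-running; then
  echo "start failed, taking corrective action"
fi
```

With the current script always returning 0, the second branch would never fire, which is exactly the problem described above.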






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183567#comment-14183567
 ] 

Timothy Potter commented on SOLR-6650:
--

FWIW, the original patch didn't have that problem: it either logged at INFO 
if INFO was enabled for SolrCore's logger, or logged at WARN when the 
threshold was breached if SolrCore's logger was set to WARN ... re-opening the 
issue to pull in these additional changes

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[jira] [Reopened] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reopened SOLR-6650:
--

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






Re: [jira] [Created] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Jessica Mallet
Will do!

On Friday, October 24, 2014, Tomás Fernández Löbbe (JIRA) 
wrote:

>
> [
> https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183450#comment-14183450
> ]
>
> Tomás Fernández Löbbe commented on SOLR-6650:
> -
>
> bq.  It looks like this defaults to enabled with a threshold of 1 second.
> If that's the case, I don't think that's a good idea. It should default to
> disabled.
> +1 Especially now that the idea is to log WARN in addition to INFO. People
> who don't have this setting in the solrconfig (upgrading?) will start
> logging twice for many queries.
>
> > Add optional slow request logging at WARN level
> > ---
> >
> > Key: SOLR-6650
> > URL: https://issues.apache.org/jira/browse/SOLR-6650
> > Project: Solr
> >  Issue Type: Improvement
> >Reporter: Jessica Cheng Mallet
> >Assignee: Timothy Potter
> >  Labels: logging
> > Fix For: 5.0
> >
> >
> > At super high request rates, logging all the requests can become a
> bottleneck and therefore INFO logging is often turned off. However, it is
> still useful to be able to set a latency threshold above which a request is
> considered "slow" and log that request at WARN level so we can easily
> identify slow queries.
>
>
>


[jira] [Commented] (SOLR-6595) Improve error response in case distributed collection cmd fails

2014-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183481#comment-14183481
 ] 

Jan Høydahl commented on SOLR-6595:
---

Appreciate feedback and discussion on how to solve this...

> Improve error response in case distributed collection cmd fails
> ---
>
> Key: SOLR-6595
> URL: https://issues.apache.org/jira/browse/SOLR-6595
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
> Environment: SolrCloud with Client SSL
>Reporter: Sindre Fiskaa
>Priority: Minor
>
> Followed the description at 
> https://cwiki.apache.org/confluence/display/solr/Enabling+SSL and generated a 
> self-signed key pair. Configured a few Solr nodes and used the collection API 
> to create a new collection. -I got an error message when specifying the nodes 
> with the createNodeSet param. When I didn't use the createNodeSet param, the 
> collection was created without error on random nodes. Could this be a bug 
> related to the createNodeSet param?- *Update: It failed due to what turned 
> out to be an invalid client certificate on the overseer, and returned the 
> following response:*
> {code:xml}
> <response>
>   <lst name="responseHeader"><int name="status">0</int><int name="QTime">185</int></lst>
>   <lst name="error">
>     <str name="msg">org.apache.solr.client.solrj.SolrServerException:IOException occured 
> when talking to server at: https://vt-searchln04:443/solr</str>
>   </lst>
> </response>
> {code}
> *Update: Three problems:*
> # Status=0 when the cmd did not succeed (only ZK was updated, but cores not 
> created due to failing to connect to shard nodes to talk to core admin API).
> # The error printed does not tell which action failed. It would be helpful 
> to either get the message from the original exception or at least some 
> message saying "Failed to create core, see log on Overseer".
> # State of collection is not clean since it exists as far as ZK is concerned 
> but cores not created. Thus retrying the CREATECOLLECTION cmd would fail. 
> Should Overseer detect error in distributed cmds and rollback changes already 
> made in ZK?






[jira] [Commented] (SOLR-3191) field exclusion from fl

2014-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-3191?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183474#comment-14183474
 ] 

Jan Høydahl commented on SOLR-3191:
---

Are there known issues with this patch? I see a few TODOs.

> field exclusion from fl
> ---
>
> Key: SOLR-3191
> URL: https://issues.apache.org/jira/browse/SOLR-3191
> Project: Solr
>  Issue Type: Improvement
>Reporter: Luca Cavanna
>Priority: Minor
> Attachments: SOLR-3191.patch, SOLR-3191.patch, SOLR-3191.patch
>
>
> I think it would be useful to add a way to exclude fields from the Solr 
> response. If I have, for example, 100 stored fields and I want to return all 
> of them but one, it would be handy to list just the field I want to exclude 
> instead of the 99 fields for inclusion through fl.






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183450#comment-14183450
 ] 

Tomás Fernández Löbbe commented on SOLR-6650:
-

bq.  It looks like this defaults to enabled with a threshold of 1 second. If 
that's the case, I don't think that's a good idea. It should default to 
disabled.
+1 Especially now that the idea is to log WARN in addition to INFO. People who 
don't have this setting in the solrconfig (upgrading?) will start logging twice 
for many queries.

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[jira] [Created] (SOLR-6652) Expand Component does not search across collections like Collapse does

2014-10-24 Thread Greg Harris (JIRA)
Greg Harris created SOLR-6652:
-

 Summary: Expand Component does not search across collections like 
Collapse does
 Key: SOLR-6652
 URL: https://issues.apache.org/jira/browse/SOLR-6652
 Project: Solr
  Issue Type: Bug
  Components: query parsers
Affects Versions: 4.10
Reporter: Greg Harris


It seems the Collapse query parser supports searching multiple collections via 
parameter: collection=xx,yy,zz. However, expand=true does not support this and 
all documents are returned from a single collection. Kind of confusing since 
expand is used with Collapse. 






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183429#comment-14183429
 ] 

Shawn Heisey commented on SOLR-6650:


I like this idea.  I can turn off INFO logging, set a threshold, and chances 
are good that whatever I need to look at is something that will be logged, but 
the file will not be clogged with every request.

My look at the patch was very quick, so I have not noticed every detail.  It 
looks like this defaults to enabled with a threshold of 1 second.  If that's 
the case, I don't think that's a good idea.  It should default to disabled.

Hopefully it will log all queries if I set the threshold to zero.  A negative 
number in the setting (or the setting not present) would be a good way to turn 
it off. Will this only log queries, or would it log all slow requests, 
including calls to /update?


> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183395#comment-14183395
 ] 

ASF GitHub Bot commented on SOLR-6650:
--

GitHub user mewmewball opened a pull request:

https://github.com/apache/lucene-solr/pull/102

SOLR-6650 - Add optional slow request logging at WARN level

Based on discussion with Chris Hostetter, make the slow warn logging an if 
condition on its own rather than an else for the info logging. Also, add "slow: 
" prefix to the log message so it's easy to spot redundancy with info level.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mewmewball/lucene-solr trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #102


commit 8096b200187b81de78862ca71026a13d6a793650
Author: Jessica Cheng 
Date:   2014-10-23T23:07:31Z

SOLR-6650 - Add optional slow request logging at WARN level

commit c25993724e48343d8cc941cc0228312e9ff4f3ab
Author: Jessica Cheng 
Date:   2014-10-23T23:08:25Z

Merge branch 'trunk' of https://github.com/mewmewball/lucene-solr into trunk

# By Jan Høydahl
# Via Jan Høydahl
* 'trunk' of https://github.com/mewmewball/lucene-solr:
  SOLR-6647: Bad error message when missing resource from ZK when parsing 
Schema

commit 5c69624555b4f3f4aa21627efd57772dfa0a477c
Author: Jessica Cheng 
Date:   2014-10-24T18:23:37Z

Merge branch 'trunk' of https://github.com/apache/lucene-solr into trunk

commit ba2a87b22dbc4355e663fb68a6dd4de16b42ff47
Author: Jessica Cheng 
Date:   2014-10-24T20:06:21Z

SOLR-6650 - Add optional slow request logging at WARN level - Based on 
discussion with Chris Hostetter, make the slow warn logging an if condition on 
its own rather than an else for the info logging. Also, add "slow: " prefix to 
the log message so it's easy to spot redundancy with info level.




> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[GitHub] lucene-solr pull request: SOLR-6650 - Add optional slow request lo...

2014-10-24 Thread mewmewball
GitHub user mewmewball opened a pull request:

https://github.com/apache/lucene-solr/pull/102

SOLR-6650 - Add optional slow request logging at WARN level

Based on discussion with Chris Hostetter, make the slow warn logging an if 
condition on its own rather than an else for the info logging. Also, add "slow: 
" prefix to the log message so it's easy to spot redundancy with info level.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mewmewball/lucene-solr trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/102.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #102


commit 8096b200187b81de78862ca71026a13d6a793650
Author: Jessica Cheng 
Date:   2014-10-23T23:07:31Z

SOLR-6650 - Add optional slow request logging at WARN level

commit c25993724e48343d8cc941cc0228312e9ff4f3ab
Author: Jessica Cheng 
Date:   2014-10-23T23:08:25Z

Merge branch 'trunk' of https://github.com/mewmewball/lucene-solr into trunk

# By Jan Høydahl
# Via Jan Høydahl
* 'trunk' of https://github.com/mewmewball/lucene-solr:
  SOLR-6647: Bad error message when missing resource from ZK when parsing 
Schema

commit 5c69624555b4f3f4aa21627efd57772dfa0a477c
Author: Jessica Cheng 
Date:   2014-10-24T18:23:37Z

Merge branch 'trunk' of https://github.com/apache/lucene-solr into trunk

commit ba2a87b22dbc4355e663fb68a6dd4de16b42ff47
Author: Jessica Cheng 
Date:   2014-10-24T20:06:21Z

SOLR-6650 - Add optional slow request logging at WARN level - Based on 
discussion with Chris Hostetter, make the slow warn logging an if condition on 
its own rather than an else for the info logging. Also, add "slow: " prefix to 
the log message so it's easy to spot redundancy with info level.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Commented] (SOLR-6479) ExtendedDismax does not recognize operators followed by a parenthesis without space

2014-10-24 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183381#comment-14183381
 ] 

Jan Høydahl commented on SOLR-6479:
---

I'm not sure I agree that this is a bug. It may be a feature request, but that 
will be up for discussion.

https://cwiki.apache.org/confluence/display/solr/The+Extended+DisMax+Query+Parser

> ExtendedDismax does not recognize operators followed by a parenthesis without 
> space
> ---
>
> Key: SOLR-6479
> URL: https://issues.apache.org/jira/browse/SOLR-6479
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.9
> Environment: Java 7
> Linux
>Reporter: Pierre Salagnac
>Priority: Minor
>  Labels: patch
> Attachments: SOLR-6479.patch
>
>
> Before going through the syntax parser, edismax does a pre-analysis of the 
> query to apply some parameters, such as whether lower-case operators are 
> recognized.
> This basic analysis in {{splitIntoClauses()}} pseudo-tokenizes the query 
> string on whitespace. An operator directly followed by a parenthesis is not 
> recognized because only one token is created.
> {code}
> foo AND (bar) -> foo ; AND ; (bar)
> foo AND(bar)  -> foo ; AND(bar)
> {code}
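The whitespace-split behavior described above can be reproduced with plain string splitting (this is a toy illustration, not edismax's actual splitIntoClauses() code):

```java
import java.util.Arrays;

// Splitting purely on whitespace leaves "AND(bar)" as a single token, so a
// pre-analysis pass that inspects tokens for operators never sees "AND".
public class SplitDemo {
    public static void main(String[] args) {
        System.out.println(Arrays.toString("foo AND (bar)".split("\\s+")));
        // -> [foo, AND, (bar)]
        System.out.println(Arrays.toString("foo AND(bar)".split("\\s+")));
        // -> [foo, AND(bar)]   operator and parenthesis fused into one token
    }
}
```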






[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 666 - Still Failing

2014-10-24 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/666/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDistribSearch

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:44928/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:44928/collection1
at 
__randomizedtesting.SeedInfo.seed([46BAFA0527C8E862:C75C741D5097885E]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:583)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.client.solrj.request.QueryRequest.process(QueryRequest.java:91)
at org.apache.solr.client.solrj.SolrServer.query(SolrServer.java:301)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:223)
at 
org.apache.solr.cloud.CloudInspectUtil.compareResults(CloudInspectUtil.java:165)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testIndexingBatchPerRequestWithHttpSolrServer(FullSolrCloudDistribCmdsTest.java:414)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.doTest(FullSolrCloudDistribCmdsTest.java:144)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedte

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b34) - Build # 11357 - Still Failing!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11357/
Java: 32bit/jdk1.9.0-ea-b34 -server -XX:+UseParallelGC

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
 1) Thread[id=7680, name=Thread-2869, state=RUNNABLE, 
group=TGRP-HttpPartitionTest] at 
java.net.SocketInputStream.socketRead0(Native Method) at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
java.net.SocketInputStream.read(SocketInputStream.java:170) at 
java.net.SocketInputStream.read(SocketInputStream.java:141) at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
 at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84) 
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
 at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
 at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
 at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
 at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
 at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
 at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:465)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1638)
 at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:430)
 at 
org.apache.solr.cloud.ZkController.access$100(ZkController.java:101) at 
org.apache.solr.cloud.ZkController$1.command(ZkController.java:269) at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.HttpPartitionTest: 
   1) Thread[id=7680, name=Thread-2869, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 
org.apache.http.impl.client.DefaultReque

Re: [VOTE] Release 4.10.2 RC0

2014-10-24 Thread Steve Rowe
+1

SUCCESS! [0:53:44.848301]

Steve

> On Oct 24, 2014, at 12:53 PM, Michael McCandless  
> wrote:
> 
> Artifacts: 
> http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC0-rev1634084/
> 
> Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py
> http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC0-rev1634084
> 1634084 4.10.2 /tmp/smoke4102 True
> 
> SUCCESS! [0:29:20.274057]
> 
> Here's my +1
> 
> Mike McCandless
> 
> http://blog.mikemccandless.com




Re: svn commit: r1634086 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/processor/ core/src/test-files/solr/collection1/conf/ core/src/test/or

2014-10-24 Thread Jessica Mallet
Sounds good. I can make that change.

Thanks,
Jessica

On Fri, Oct 24, 2014 at 11:16 AM, Chris Hostetter wrote:

>
> : I see what you're saying. My thought was that since both lines are logging
> : exactly the same message, it'd be redundant to log it twice. I can
> : definitely see logging it in both levels, but modifying the warn message to
> : have a "slow query:" prefix or something. What do you think?
>
> yeah -- it's definitely a trade off between redundancy vs. convenience --
> but i'm +1 for being redundant in this case.
>
> the biggest worry i have: some existing Ops or DW team has a tier of Solr
> servers with log crunching all set up to get stats on queries, looking
> explicitly at the existing INFO messages -- and then some dev with the
> best intentions at heart comes along and adds this new config option so
> they can focus on problematic queries, w/o realizing how it will affect
> the existing monitoring -- and now the stats coming out of the existing
> log monitoring are totally skewed.
>
>
> Anyone in a situation where the redundancy would be unwelcome is probably
> either dealing with a high enough volume that INFO logging is disabled
> anyway, or a low enough volume that they don't need this feature; they
> can check for "slow" queries themselves from the INFO level messages.
>
>
> : On Fri, Oct 24, 2014 at 10:58 AM, Chris Hostetter <hossman_luc...@fucit.org>
> : wrote:
> :
> : >
> : > Does it really make sense for this to be an if/else situation?
> : >
> : > it seems like the INFO logging should be completely independent from the
> : > WARN logging, so people could have INFO level logs of all the requests in
> : > one place, and WARN level logs of slow queries go to a distinct file for
> : > higher profile analysis.  As things stand right now, you have to merge the
> : > logs if you want stats on all requests (ie: to compute percentiles of
> : > response time, or what the most requested fq params are, etc..)
> : >
> : > : +  if (log.isInfoEnabled()) {
> : > : +log.info(rsp.getToLogAsString(logid));
> : > : +  } else if (log.isWarnEnabled()) {
> : > : +final int qtime = (int)(rsp.getEndTime() - req.getStartTime());
> : > : +if (qtime >= slowQueryThresholdMillis) {
> : > : +  log.warn(rsp.getToLogAsString(logid));
> : >
> : >
> : > :  if (log.isInfoEnabled()) {
> : > : -  StringBuilder sb = new StringBuilder(rsp.getToLogAsString(req.getCore().getLogId()));
> : > : +  log.info(getLogStringAndClearRspToLog());
> : > : +} else if (log.isWarnEnabled()) {
> : > : +  long elapsed = rsp.getEndTime() - req.getStartTime();
> : > : +  if (elapsed >= slowUpdateThresholdMillis) {
> : > : +log.warn(getLogStringAndClearRspToLog());
> : > : +  }
> : >
> : >
> : >
> : > -Hoss
> : > http://www.lucidworks.com/
> : >
> : >
> : >
> :
>
> -Hoss
> http://www.lucidworks.com/
>
>
>


Re: svn commit: r1634086 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/processor/ core/src/test-files/solr/collection1/conf/ core/src/test/or

2014-10-24 Thread Chris Hostetter

: I see what you're saying. My thought was that since both lines are logging
: exactly the same message, it'd be redundant to log it twice. I can
: definitely see logging it in both levels, but modifying the warn message to
: have a "slow query:" prefix or something. What do you think?

yeah -- it's definitely a trade off between redundancy vs. convenience --
but i'm +1 for being redundant in this case.

the biggest worry i have: some existing Ops or DW team has a tier of Solr
servers with log crunching all set up to get stats on queries, looking
explicitly at the existing INFO messages -- and then some dev with the
best intentions at heart comes along and adds this new config option so
they can focus on problematic queries, w/o realizing how it will affect
the existing monitoring -- and now the stats coming out of the existing
log monitoring are totally skewed.


Anyone in a situation where the redundancy would be unwelcome is probably
either dealing with a high enough volume that INFO logging is disabled
anyway, or a low enough volume that they don't need this feature; they
can check for "slow" queries themselves from the INFO level messages.


: On Fri, Oct 24, 2014 at 10:58 AM, Chris Hostetter wrote:
: 
: >
: > Does it really make sense for this to be an if/else situation?
: >
: > it seems like the INFO logging should be completely independent from the
: > WARN logging, so people could have INFO level logs of all the requests in
: > one place, and WARN level logs of slow queries go to a distinct file for
: > higher profile analysis.  As things stand right now, you have to merge the
: > logs if you want stats on all requests (ie: to compute percentiles of
: > response time, or what the most requested fq params are, etc..)
: >
: > : +  if (log.isInfoEnabled()) {
: > : +log.info(rsp.getToLogAsString(logid));
: > : +  } else if (log.isWarnEnabled()) {
: > : +final int qtime = (int)(rsp.getEndTime() - req.getStartTime());
: > : +if (qtime >= slowQueryThresholdMillis) {
: > : +  log.warn(rsp.getToLogAsString(logid));
: >
: >
: > :  if (log.isInfoEnabled()) {
: > : -  StringBuilder sb = new StringBuilder(rsp.getToLogAsString(req.getCore().getLogId()));
: > : +  log.info(getLogStringAndClearRspToLog());
: > : +} else if (log.isWarnEnabled()) {
: > : +  long elapsed = rsp.getEndTime() - req.getStartTime();
: > : +  if (elapsed >= slowUpdateThresholdMillis) {
: > : +log.warn(getLogStringAndClearRspToLog());
: > : +  }
: >
: >
: >
: > -Hoss
: > http://www.lucidworks.com/
: >
: >
: >
: 

-Hoss
http://www.lucidworks.com/




Re: svn commit: r1634086 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/processor/ core/src/test-files/solr/collection1/conf/ core/src/test/or

2014-10-24 Thread Jessica Mallet
Hi Chris,

I see what you're saying. My thought was that since both lines are logging
exactly the same message, it'd be redundant to log it twice. I can
definitely see logging it in both levels, but modifying the warn message to
have a "slow query:" prefix or something. What do you think?

Thanks,
Jessica

On Fri, Oct 24, 2014 at 10:58 AM, Chris Hostetter wrote:

>
> Does it really make sense for this to be an if/else situation?
>
> it seems like the INFO logging should be completely independent from the
> WARN logging, so people could have INFO level logs of all the requests in
> one place, and WARN level logs of slow queries go to a distinct file for
> higher profile analysis.  As things stand right now, you have to merge the
> logs if you want stats on all requests (ie: to compute percentiles of
> response time, or what the most requested fq params are, etc..)
>
> : +  if (log.isInfoEnabled()) {
> : +log.info(rsp.getToLogAsString(logid));
> : +  } else if (log.isWarnEnabled()) {
> : +final int qtime = (int)(rsp.getEndTime() - req.getStartTime());
> : +if (qtime >= slowQueryThresholdMillis) {
> : +  log.warn(rsp.getToLogAsString(logid));
>
>
> :  if (log.isInfoEnabled()) {
> : -  StringBuilder sb = new StringBuilder(rsp.getToLogAsString(req.getCore().getLogId()));
> : +  log.info(getLogStringAndClearRspToLog());
> : +} else if (log.isWarnEnabled()) {
> : +  long elapsed = rsp.getEndTime() - req.getStartTime();
> : +  if (elapsed >= slowUpdateThresholdMillis) {
> : +log.warn(getLogStringAndClearRspToLog());
> : +  }
>
>
>
> -Hoss
> http://www.lucidworks.com/
>
>
>


Re: svn commit: r1634086 - in /lucene/dev/trunk/solr: ./ core/src/java/org/apache/solr/core/ core/src/java/org/apache/solr/update/processor/ core/src/test-files/solr/collection1/conf/ core/src/test/or

2014-10-24 Thread Chris Hostetter

Does it really make sense for this to be an if/else situation?

it seems like the INFO logging should be completely independent from the 
WARN logging, so people could have INFO level logs of all the requests in 
one place, and WARN level logs of slow queries go to a distinct file for 
higher profile analysis.  As things stand right now, you have to merge the 
logs if you want stats on all requests (ie: to compute percentiles of 
response time, or what the most requested fq params are, etc..)

: +  if (log.isInfoEnabled()) {
: +log.info(rsp.getToLogAsString(logid));
: +  } else if (log.isWarnEnabled()) {
: +final int qtime = (int)(rsp.getEndTime() - req.getStartTime());
: +if (qtime >= slowQueryThresholdMillis) {
: +  log.warn(rsp.getToLogAsString(logid));


:  if (log.isInfoEnabled()) {
: -  StringBuilder sb = new StringBuilder(rsp.getToLogAsString(req.getCore().getLogId()));
: +  log.info(getLogStringAndClearRspToLog());
: +} else if (log.isWarnEnabled()) {
: +  long elapsed = rsp.getEndTime() - req.getStartTime();
: +  if (elapsed >= slowUpdateThresholdMillis) {
: +log.warn(getLogStringAndClearRspToLog());
: +  }
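The quoted diff ties the WARN check to INFO being disabled. A minimal sketch of the independent-levels behavior described above (illustrative names, not the committed Solr patch) could look like:

```java
// Sketch only: INFO logs every request; WARN additionally fires for slow
// requests, independently of INFO. RequestLogger and its threshold are
// illustrative stand-ins, not Solr's actual classes.
public class RequestLogger {
    private final long slowQueryThresholdMillis;

    RequestLogger(long slowQueryThresholdMillis) {
        this.slowQueryThresholdMillis = slowQueryThresholdMillis;
    }

    // Returns which levels fired, standing in for actual log calls.
    String log(long qtimeMillis, boolean infoEnabled, boolean warnEnabled) {
        StringBuilder fired = new StringBuilder();
        if (infoEnabled) {
            fired.append("INFO");   // every request goes to the INFO stream
        }
        // Independent check: WARN fires for slow requests even when INFO is on.
        if (warnEnabled && qtimeMillis >= slowQueryThresholdMillis) {
            if (fired.length() > 0) fired.append("+");
            fired.append("WARN");
        }
        return fired.toString();
    }

    public static void main(String[] args) {
        RequestLogger rl = new RequestLogger(1000);
        System.out.println(rl.log(50, true, true));    // fast request
        System.out.println(rl.log(2500, true, true));  // slow request, both streams
        System.out.println(rl.log(2500, false, true)); // INFO off, WARN still fires
    }
}
```

With this shape, stats pipelines reading the INFO stream keep seeing every request, while the WARN stream can be routed to a separate file for slow-query analysis.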



-Hoss
http://www.lucidworks.com/




[jira] [Updated] (SOLR-6645) Refactored DocumentObjectBinder and added AnnotationListeners

2014-10-24 Thread Fabio Piro (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabio Piro updated SOLR-6645:
-
Affects Version/s: (was: 4.10.1)
   4.10.2

> Refactored DocumentObjectBinder and added AnnotationListeners
> -
>
> Key: SOLR-6645
> URL: https://issues.apache.org/jira/browse/SOLR-6645
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 4.10.2
>Reporter: Fabio Piro
>  Labels: annotations, binder, listener, solrj
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6645.patch
>
>
> Hello good people.
> It is understandable that the priority of SolrJ is to provide a stable Java
> API rather than a feature-rich client; I'm well aware of that. On the other
> hand, "more features" nowadays usually means Spring Data Solr. Although I
> appreciate the enrichment work of that lib, sometimes depending on its
> monolithic dependencies and magic is not a valid option.
> So, I was thinking that the official DocumentObjectBinder could benefit from
> some love, and I have implemented a listener pattern for the annotations.
> You can register your annotations and their related listeners in the binder,
> and it will invoke the corresponding method in the listener on getBean and on
> toSolrInputDocument, granting the chance to do something during the
> ongoing process.
> Changes are:
> * [MOD] */beans/DocumentObjectBinder*:  The new logic and a new constructor 
> for registering the annotations
> * [ADD] */impl/AccessorAnnotationListener*: Abstract utility class with the 
> former get(), set(), isArray, isList, isContainedInMap etc...
> * [ADD] */impl/FieldAnnotationListener*: all the rest of DocField for dealing 
> with @Field
> * [ADD] */AnnotationListener*: the base listener class
> * [MOD] */SolrServer*: added setBinder (this is the only tricky change, I 
> hope it's not a problem).
> It's all well documented and the code is very easy to read. Tests are all
> green, it should be 100% backward compatible, and the performance impact is
> negligible (the logic flow is exactly the same as now; I only changed the
> bare essentials and nothing more).
> Some Examples (they are not part of the pull-request):
> The long awaited @FieldObject in 4 lines of code:
> https://issues.apache.org/jira/browse/SOLR-1945
> {code:java}
> public class FieldObjectAnnotationListener extends 
> AccessorAnnotationListener {
> public FieldObjectAnnotationListener(AnnotatedElement element, 
> FieldObject annotation) {
> super(element, annotation);
> }
> @Override
> public void onGetBean(Object obj, SolrDocument doc, DocumentObjectBinder 
> binder) {
> Object nested = binder.getBean(target.clazz, doc);
> setTo(obj, nested);
> }
> @Override
> public void onToSolrInputDocument(Object obj, SolrInputDocument doc, 
> DocumentObjectBinder binder) {
> SolrInputDocument nested = binder.toSolrInputDocument(getFrom(obj));
> for (Map.Entry entry : nested.entrySet()) {
> doc.addField(entry.getKey(), entry.getValue());
> }
> }
> }
> {code}
> Or something entirely new like an annotation for ChildDocuments:
> {code:java}
> public class ChildDocumentsAnnotationListener extends 
> AccessorAnnotationListener {
> public ChildDocumentsAnnotationListener(AnnotatedElement element, 
> ChildDocuments annotation) {
> super(element, annotation);
> if (!target.isInList || target.clazz.isPrimitive()) {
> throw new BindingException("@NestedDocuments is applicable only 
> on List.");
> }
> }
> @Override
> public void onGetBean(Object obj, SolrDocument doc, DocumentObjectBinder 
> binder) {
> List nested = new ArrayList<>();
> for (SolrDocument child : doc.getChildDocuments()) {
> nested.add(binder.getBean(target.clazz, child));// this should be 
> recursive, but it's only an example
> }
> setTo(obj, nested);
> }
> @Override
> public void onToSolrInputDocument(Object obj, SolrInputDocument doc, 
> DocumentObjectBinder binder) {
> SolrInputDocument nested = binder.toSolrInputDocument(getFrom(obj));
> doc.addChildDocuments(nested.getChildDocuments());
> }
> }
> {code}
> In addition, all the logic is encapsulated in the listener, so you can make a 
> custom FieldAnnotationListener too, and override the default one
> {code:java}
> public class CustomFieldAnnotationListener extends FieldAnnotationListener {
> private boolean isTransientPresent;
> public CustomFieldAnnotationListener(AnnotatedElement element, Field 
> annotation) {
> super(element, annotation);
> this.isTransientPresent = 
> element.isAnnotatio

Re: "final" modifier on some methods in TFIDFSimilarity class

2014-10-24 Thread Hafiz Hamid
Alan - Thanks for the idea. We don't want to invent a new scoring formula,
hence no new Similarity class. While fully leveraging what
DefaultSimilarity/TFIDFSimilarity already provides, we only want to
override the computation of a single component (i.e. fieldNorm) of the
existing tf-idf based scoring. Creating a new class would require copy/pasting
the existing TFIDFSimilarity code and would make it hard to upgrade and keep
things in sync with future versions. Also, changing it in the original code
would allow others to benefit from it without posing any risk.

In case you're interested, we want to move the length-norm computation from
index time to search time. That will allow us to change the length-norm
function and A/B test it against the default, without having to re-create
the index, which is an extremely expensive task for us. We'll simply store
the raw field length (#terms) as the fieldNorm and change the scorer to
compute the length-norm from it at search time.
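A hedged sketch of that idea (illustrative names, not a working Similarity subclass; real TFIDFSimilarity norms are byte-encoded, which is omitted here): persist the raw term count, and make the length-norm function a search-time choice.

```java
// Sketch: store the raw field length as the norm at index time, then apply
// the (now swappable) length-norm function at search time, so the function
// can be A/B tested without reindexing. Names are illustrative.
public class SearchTimeLengthNorm {
    // Index time: persist the raw term count instead of a precomputed norm.
    static long encodeNorm(int numTerms) {
        return numTerms;
    }

    // Search time: the tunable length-norm function -- the A/B-test knob.
    static float lengthNorm(long storedRawLength) {
        return (float) (1.0 / Math.sqrt(storedRawLength));
    }

    public static void main(String[] args) {
        long norm = encodeNorm(16);            // a 16-term field
        System.out.println(lengthNorm(norm));  // classic 1/sqrt(length)
    }
}
```

Swapping the body of lengthNorm (say, to a pivoted normalization) changes ranking immediately, since only the raw length is baked into the index.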

Thanks,
Hamid

On Fri, Oct 24, 2014 at 2:21 AM, Alan Woodward  wrote:

> Hi Hamid,
>
> Can't you just extend Similarity instead?
>
> Alan Woodward
> www.flax.co.uk
>
>
> On 24 Oct 2014, at 08:04, Hafiz Hamid wrote:
>
> Hi - I wanted to check if folks would be okay with removing the "final"
> modifier from 4 methods (i.e. computeNorm, computeWeight, exactSimScorer
> and sloppySimScorer) in Lucene's TFIDFSimilarity class. It doesn't look
> like allowing these methods to be overridden would have any negative
> implications for the function of this class. Yet it'd enable us to tune the
> tf-idf scoring provided by this class to better serve our needs.
>
> I've logged a Jira issue for this: LUCENE-6023
> . If folks don't have
> any objection, I've a patch ready and can upload it.
>
> Thanks,
> Hamid
>
>
>


[jira] [Updated] (LUCENE-6024) Improve oal.util.BitSet's bulk and/or/and_not

2014-10-24 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6024?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand updated LUCENE-6024:
-
Attachment: LUCENE-6024.patch

Here is a patch:
 - BitSet.and and BitSet.andNot have been improved to perform a leap-frog 
intersection of the two bit sets. I think this is a better default impl as it 
performs faster if either of the two sets is sparse.
 - SparseFixedBitSet.or adds specializations for two common cases: union with a 
rather dense set (used in practice when the cost is greater than maxDoc / 4096) 
and union with another SparseFixedBitSet
 - SparseFixedBitSet.and adds a minor specialization for the case of another 
SparseFixedBitSet
 - it also fixes a bug where SparseFixedBitSet.clear didn't update the count of 
non-zero longs (which is used to compute the approximate cardinality).

I also changed the API/impl a bit to:
 - not exhaust the iterator, eg. in the FixedBitSet specialization. Not all 
these bulk methods require completely exhausting the iterator (eg. 
intersection), so I instead documented that the state of the iterator is 
undefined after these methods have been called
 - require an unpositioned iterator: are there really use-cases for 
intersection/union with an iterator that was already half consumed? Adding 
this additional requirement makes things a bit simpler since you don't need to 
check whether the current doc is -1 for certain specializations, or whether the 
iterator is not already on NO_MORE_DOCS
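The leap-frog idea in the first bullet can be sketched over sorted int arrays standing in for DocIdSetIterators (illustrative only, not the Lucene implementation): each side leaps forward to the other's current candidate, so the work tracks the sparser set.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative leap-frog intersection of two sorted doc-id streams.
// Plain arrays stand in for DocIdSetIterator; this is not Lucene's code.
public class LeapFrogDemo {
    // Advance index i until a[i] >= target (a stand-in for advance(target)).
    static int advance(int[] a, int i, int target) {
        while (i < a.length && a[i] < target) {
            i++;
        }
        return i;
    }

    static List<Integer> intersect(int[] a, int[] b) {
        List<Integer> out = new ArrayList<>();
        int i = 0, j = 0;
        while (i < a.length && j < b.length) {
            if (a[i] == b[j]) {
                out.add(a[i]);            // common doc id
                i++;
                j++;
            } else if (a[i] < b[j]) {
                i = advance(a, i, b[j]);  // leap a forward to b's doc
            } else {
                j = advance(b, j, a[i]);  // leap b forward to a's doc
            }
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(intersect(new int[]{1, 4, 7, 9}, new int[]{4, 5, 9, 12}));
    }
}
```

Because neither side is fully exhausted past the other's last match, a sparse set only pays for its own few documents, which is the point of making this the default impl.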

The main benefit is to SparseFixedBitSet's build time from another 
DocIdSetIterator, since it uses BitSet.or. Here are the numbers reported by 
DocIdSetBenchmark (which can be found in luceneutil), measuring how many 
instances can be built in one second as a function of the density of the set 
(maxDoc is hard-coded to 2^24). For reference, the SparseFixedBitSet is built 
from a RoaringDocIdSet (since it is our fastest DocIdSet iteration-wise).

|| Set density || Without the patch || With the patch ||
| 10e-5 | 174335 | 162070 |
| 10e-4 | 28253 | 26357 |
| 10e-3 | 2569 | 4148 |
| 0.01 | 303 | 520 |
| 0.1 | 39 | 56 |
| 0.5 | 10 | 13 |
| 0.9 | 7 | 9 |
| 0.99 | 7 | 9 |
| 1 | 7 | 9 |

> Improve oal.util.BitSet's bulk and/or/and_not
> -
>
> Key: LUCENE-6024
> URL: https://issues.apache.org/jira/browse/LUCENE-6024
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0
>
> Attachments: LUCENE-6024.patch
>
>
> LUCENE-6021 introduced oal.util.BitSet with default impls taken from 
> FixedBitSet. However, these default impls could be more efficient (and eg. 
> perform an actual leap frog for AND and AND_NOT).
> Additionally, SparseFixedBitSet could benefit from some specialization.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[VOTE] Release 4.10.2 RC0

2014-10-24 Thread Michael McCandless
Artifacts: 
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC0-rev1634084/

Smoke tester: python3 -u dev-tools/scripts/smokeTestRelease.py
http://people.apache.org/~mikemccand/staging_area/lucene-solr-4.10.2-RC0-rev1634084
1634084 4.10.2 /tmp/smoke4102 True

SUCCESS! [0:29:20.274057]

Here's my +1

Mike McCandless

http://blog.mikemccandless.com




[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183033#comment-14183033
 ] 

ASF GitHub Bot commented on SOLR-6650:
--

Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/101


> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.







[GitHub] lucene-solr pull request: SOLR-6650 - Add optional slow request lo...

2014-10-24 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/lucene-solr/pull/101


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---




[jira] [Created] (LUCENE-6024) Improve oal.util.BitSet's bulk and/or/and_not

2014-10-24 Thread Adrien Grand (JIRA)
Adrien Grand created LUCENE-6024:


 Summary: Improve oal.util.BitSet's bulk and/or/and_not
 Key: LUCENE-6024
 URL: https://issues.apache.org/jira/browse/LUCENE-6024
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: 5.0


LUCENE-6021 introduced oal.util.BitSet with default impls taken from 
FixedBitSet. However, these default impls could be more efficient (and eg. 
perform an actual leap frog for AND and AND_NOT).

Additionally, SparseFixedBitSet could benefit from some specialization.







[jira] [Commented] (SOLR-6479) ExtendedDismax does not recognize operators followed by a parenthesis without space

2014-10-24 Thread Pierre Salagnac (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6479?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183017#comment-14183017
 ] 

Pierre Salagnac commented on SOLR-6479:
---

[~janhoy], you already integrated a patch in this code.
Would you have time to take a look at this patch?
Thanks

> ExtendedDismax does not recognize operators followed by a parenthesis without 
> space
> ---
>
> Key: SOLR-6479
> URL: https://issues.apache.org/jira/browse/SOLR-6479
> Project: Solr
>  Issue Type: Bug
>  Components: query parsers
>Affects Versions: 4.9
> Environment: Java 7
> Linux
>Reporter: Pierre Salagnac
>Priority: Minor
>  Labels: patch
> Attachments: SOLR-6479.patch
>
>
> Before going through the syntax parser, edismax does a pre-analysis of the 
> query to apply some parameters, like whether lower-case operators are 
> recognized.
> This basic analysis in {{splitIntoClauses()}} pseudo-tokenizes the query 
> string on whitespace. An operator directly followed by a parenthesis is not 
> recognized because only one token is created.
> {code}
> foo AND (bar) -> foo ; AND ; (bar)
> foo AND(bar)  -> foo ; AND(bar)
> {code}
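The failure mode above can be reproduced with a plain whitespace split (a simplification: {{splitIntoClauses()}} does more than {{String.split()}}, but the tokenization boundary is the same):

```java
import java.util.Arrays;

public class WhitespaceSplitDemo {
    public static void main(String[] args) {
        // With a space, the operator becomes its own token and can be recognized.
        System.out.println(Arrays.toString("foo AND (bar)".split("\\s+"))); // [foo, AND, (bar)]
        // Without a space, "AND(bar)" stays a single token, so "AND" is never seen.
        System.out.println(Arrays.toString("foo AND(bar)".split("\\s+")));  // [foo, AND(bar)]
    }
}
```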






[jira] [Resolved] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter resolved SOLR-6650.
--
   Resolution: Fixed
Fix Version/s: 5.0

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
> Fix For: 5.0
>
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.
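The threshold idea can be sketched as below; the names ({{levelFor}}, {{SLOW_QUERY_THRESHOLD_MS}}) are illustrative, not Solr's actual configuration or implementation:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class SlowRequestLogDemo {
    private static final Logger log = Logger.getLogger("solr.core.requests");
    // Illustrative threshold; in Solr this would come from configuration.
    static final long SLOW_QUERY_THRESHOLD_MS = 1000;

    // Requests at or above the threshold are promoted from INFO to WARNING,
    // so they remain visible even when INFO logging is turned off.
    static Level levelFor(long elapsedMs) {
        return elapsedMs >= SLOW_QUERY_THRESHOLD_MS ? Level.WARNING : Level.INFO;
    }

    public static void main(String[] args) {
        log.log(levelFor(1500), "slow request: q=*:* took 1500ms"); // WARNING
        log.log(levelFor(20), "request: q=id:1 took 20ms");         // INFO
    }
}
```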






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183007#comment-14183007
 ] 

ASF subversion and git services commented on SOLR-6650:
---

Commit 1634088 from [~thelabdude] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634088 ]

SOLR-6650: Add optional slow request logging at WARN level; this closes #101

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14183004#comment-14183004
 ] 

ASF subversion and git services commented on SOLR-6650:
---

Commit 1634086 from [~thelabdude] in branch 'dev/trunk'
[ https://svn.apache.org/r1634086 ]

SOLR-6650: Add optional slow request logging at WARN level; this closes #101

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[JENKINS] Lucene-Solr-5.x-Linux (32bit/ibm-j9-jdk7) - Build # 11356 - Failure!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11356/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

1 tests failed.
REGRESSION:  org.apache.solr.cloud.DeleteShardTest.testDistribSearch

Error Message:
Timeout occured while waiting response from server at: https://127.0.0.1:43317

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: Timeout occured while waiting 
response from server at: https://127.0.0.1:43317
at 
__randomizedtesting.SeedInfo.seed([4393D92D21279E75:C27557355678FE49]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:579)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.DeleteShardTest.deleteShard(DeleteShardTest.java:152)
at org.apache.solr.cloud.DeleteShardTest.doTest(DeleteShardTest.java:94)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:94)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:619)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgno

[jira] [Commented] (LUCENE-6022) DocValuesDocIdSet: check deleted docs before doc values

2014-10-24 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182921#comment-14182921
 ] 

Adrien Grand commented on LUCENE-6022:
--

I agree this thing could be merged with FilteredDocIdSet. The only additional 
thing it has is the optimization for when deleted docs are a bit set, but I'm 
wondering whether it really helps in practice, given that we try to merge 
segments that have lots of deleted documents more aggressively.

> DocValuesDocIdSet: check deleted docs before doc values
> ---
>
> Key: LUCENE-6022
> URL: https://issues.apache.org/jira/browse/LUCENE-6022
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Trivial
> Fix For: 5.0
>
> Attachments: LUCENE-6022.patch
>
>
> When live documents are not null, DocValuesDocIdSet checks if doc values 
> match the document before the live docs. Given that checking if doc values 
> match could involve a heavy computation (eg. geo distance) and that the 
> default codec has live docs in memory but doc values on disk, I think it 
> makes more sense to check live docs first?
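The ordering argument can be sketched as follows (illustrative names, not DocValuesDocIdSet's actual code): consult the cheap in-memory liveness bit before the potentially expensive doc-values match, so deleted docs never trigger the heavy computation.

```java
import java.util.BitSet;

public class CheapCheckFirstDemo {
    static final BitSet liveDocs = new BitSet();
    static int heavyCalls = 0;

    // Stand-in for an expensive per-document test, e.g. a geo-distance
    // computation against on-disk doc values.
    static boolean heavyMatch(int doc) {
        heavyCalls++;
        return true;
    }

    static boolean matches(int doc) {
        if (!liveDocs.get(doc)) {
            return false;        // cheap, in-memory: deleted docs bail out early
        }
        return heavyMatch(doc);  // heavy check runs only for live docs
    }

    public static void main(String[] args) {
        liveDocs.set(1);
        System.out.println(matches(0)); // false, heavyMatch never invoked
        System.out.println(matches(1)); // true
    }
}
```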






[jira] [Commented] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182901#comment-14182901
 ] 

Timothy Potter commented on SOLR-6650:
--

Just doing some basic testing and then will commit.

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[jira] [Assigned] (SOLR-6650) Add optional slow request logging at WARN level

2014-10-24 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6650?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter reassigned SOLR-6650:


Assignee: Timothy Potter

> Add optional slow request logging at WARN level
> ---
>
> Key: SOLR-6650
> URL: https://issues.apache.org/jira/browse/SOLR-6650
> Project: Solr
>  Issue Type: Improvement
>Reporter: Jessica Cheng Mallet
>Assignee: Timothy Potter
>  Labels: logging
>
> At super high request rates, logging all the requests can become a bottleneck 
> and therefore INFO logging is often turned off. However, it is still useful 
> to be able to set a latency threshold above which a request is considered 
> "slow" and log that request at WARN level so we can easily identify slow 
> queries.






[jira] [Commented] (LUCENE-5969) Add Lucene50Codec

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5969?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182831#comment-14182831
 ] 

ASF subversion and git services commented on LUCENE-5969:
-

Commit 1634064 from [~rcmuir] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634064 ]

use correct constants/javadocs refs (backport from LUCENE-5969)

> Add Lucene50Codec
> -
>
> Key: LUCENE-5969
> URL: https://issues.apache.org/jira/browse/LUCENE-5969
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Michael McCandless
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5969.patch, LUCENE-5969.patch, 
> LUCENE-5969_part2.patch, LUCENE-5969_part3.patch
>
>
> Spinoff from LUCENE-5952:
>   * Fix .si to write Version as 3 ints, not a String that requires parsing at 
> read time.
>   * Lucene42TermVectorsFormat should not use the same codecName as 
> Lucene41StoredFieldsFormat
> It would also be nice if we had a "bumpCodecVersion" script so rolling a new 
> codec is not so daunting.






[jira] [Commented] (SOLR-6651) Fix wrong logging in waitForReplicasToComeUp

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182804#comment-14182804
 ] 

ASF subversion and git services commented on SOLR-6651:
---

Commit 1634060 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1634060 ]

SOLR-6651: Don't log found twice

> Fix wrong logging in waitForReplicasToComeUp
> 
>
> Key: SOLR-6651
> URL: https://issues.apache.org/jira/browse/SOLR-6651
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 5.0, Trunk
>
>
> {code}
> log.info("Waiting until we see more replicas up for shard " + 
> shardId + ": total="
>   + slices.getReplicasMap().size() + " found=" + found
>   + " timeoutin=" + (timeoutAt - System.nanoTime() / 
> (float)(10^9)) + "ms");
> {code}
> That code isn't calculating the timeout correctly in the logging statement.






[jira] [Commented] (SOLR-6651) Fix wrong logging in waitForReplicasToComeUp

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182805#comment-14182805
 ] 

ASF subversion and git services commented on SOLR-6651:
---

Commit 1634061 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634061 ]

SOLR-6651: Don't log found twice

> Fix wrong logging in waitForReplicasToComeUp
> 
>
> Key: SOLR-6651
> URL: https://issues.apache.org/jira/browse/SOLR-6651
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 5.0, Trunk
>
>
> {code}
> log.info("Waiting until we see more replicas up for shard " + 
> shardId + ": total="
>   + slices.getReplicasMap().size() + " found=" + found
>   + " timeoutin=" + (timeoutAt - System.nanoTime() / 
> (float)(10^9)) + "ms");
> {code}
> That code isn't calculating the timeout correctly in the logging statement.






[jira] [Resolved] (SOLR-6651) Fix wrong logging in waitForReplicasToComeUp

2014-10-24 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6651.
-
Resolution: Fixed

> Fix wrong logging in waitForReplicasToComeUp
> 
>
> Key: SOLR-6651
> URL: https://issues.apache.org/jira/browse/SOLR-6651
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 5.0, Trunk
>
>
> {code}
> log.info("Waiting until we see more replicas up for shard " + 
> shardId + ": total="
>   + slices.getReplicasMap().size() + " found=" + found
>   + " timeoutin=" + (timeoutAt - System.nanoTime() / 
> (float)(10^9)) + "ms");
> {code}
> That code isn't calculating the timeout correctly in the logging statement.






[jira] [Commented] (SOLR-6651) Fix wrong logging in waitForReplicasToComeUp

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182792#comment-14182792
 ] 

ASF subversion and git services commented on SOLR-6651:
---

Commit 1634059 from sha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634059 ]

SOLR-6651: Fix wrong timeout logged in waitForReplicasToComeUp

> Fix wrong logging in waitForReplicasToComeUp
> 
>
> Key: SOLR-6651
> URL: https://issues.apache.org/jira/browse/SOLR-6651
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 5.0, Trunk
>
>
> {code}
> log.info("Waiting until we see more replicas up for shard " + 
> shardId + ": total="
>   + slices.getReplicasMap().size() + " found=" + found
>   + " timeoutin=" + (timeoutAt - System.nanoTime() / 
> (float)(10^9)) + "ms");
> {code}
> That code isn't calculating the timeout correctly in the logging statement.






[jira] [Commented] (SOLR-6651) Fix wrong logging in waitForReplicasToComeUp

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6651?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182790#comment-14182790
 ] 

ASF subversion and git services commented on SOLR-6651:
---

Commit 1634057 from sha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1634057 ]

SOLR-6651: Fix wrong timeout logged in waitForReplicasToComeUp

> Fix wrong logging in waitForReplicasToComeUp
> 
>
> Key: SOLR-6651
> URL: https://issues.apache.org/jira/browse/SOLR-6651
> Project: Solr
>  Issue Type: Bug
>  Components: SolrCloud
>Affects Versions: 4.10
>Reporter: Shalin Shekhar Mangar
>Assignee: Shalin Shekhar Mangar
>Priority: Trivial
> Fix For: 5.0, Trunk
>
>
> {code}
> log.info("Waiting until we see more replicas up for shard " + 
> shardId + ": total="
>   + slices.getReplicasMap().size() + " found=" + found
>   + " timeoutin=" + (timeoutAt - System.nanoTime() / 
> (float)(10^9)) + "ms");
> {code}
> That code isn't calculating the timeout correctly in the logging statement.






[jira] [Created] (SOLR-6651) Fix wrong logging in waitForReplicasToComeUp

2014-10-24 Thread Shalin Shekhar Mangar (JIRA)
Shalin Shekhar Mangar created SOLR-6651:
---

 Summary: Fix wrong logging in waitForReplicasToComeUp
 Key: SOLR-6651
 URL: https://issues.apache.org/jira/browse/SOLR-6651
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10
Reporter: Shalin Shekhar Mangar
Assignee: Shalin Shekhar Mangar
Priority: Trivial
 Fix For: 5.0, Trunk


{code}
log.info("Waiting until we see more replicas up for shard " + shardId + ": total="
    + slices.getReplicasMap().size() + " found=" + found
    + " timeoutin=" + (timeoutAt - System.nanoTime() / (float)(10^9)) + "ms");
{code}

That code isn't calculating the timeout correctly in the logging statement.
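Two separate problems hide in that one expression: operator precedence (the division binds only to {{System.nanoTime()}}, not to the whole difference) and {{10^9}}, which in Java is bitwise XOR (value 3), not 10 to the 9th power. A corrected remaining-time computation might look like the sketch below (illustrative, not the committed fix):

```java
import java.util.concurrent.TimeUnit;

public class TimeoutLogDemo {
    // Convert the remaining nanoseconds until the deadline into milliseconds.
    static long remainingMs(long timeoutAtNanos, long nowNanos) {
        return TimeUnit.NANOSECONDS.toMillis(timeoutAtNanos - nowNanos);
    }

    public static void main(String[] args) {
        System.out.println(10 ^ 9); // bitwise XOR: prints 3, not 1000000000
        System.out.println(remainingMs(5_000_000_000L, 2_000_000_000L)); // prints 3000
    }
}
```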






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182771#comment-14182771
 ] 

Alan Woodward commented on LUCENE-5911:
---

OK, will revert.

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182779#comment-14182779
 ] 

Michael McCandless commented on LUCENE-5911:


Thanks Alan, I'll spin an RC now...

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Updated] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-5911:
--
Fix Version/s: (was: 4.10.2)

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182775#comment-14182775
 ] 

ASF subversion and git services commented on LUCENE-5911:
-

Commit 1634054 from [~romseygeek] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1634054 ]

LUCENE-5911: Revert backport

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182738#comment-14182738
 ] 

Michael McCandless commented on LUCENE-5911:


bq. I thought we weren't doing a 4.11?

I don't think we are.  The next feature release is going to be 5.0, hopefully 
soon...

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Alan Woodward (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182737#comment-14182737
 ] 

Alan Woodward commented on LUCENE-5911:
---

Is it?  I thought we weren't doing a 4.11?

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182735#comment-14182735
 ] 

Michael McCandless commented on LUCENE-5911:


I think this is too big a change to push into 4.10.x branch?  That branch is 
for bug fixes only?

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.






[jira] [Updated] (SOLR-6645) Refactored DocumentObjectBinder and added AnnotationListeners

2014-10-24 Thread Fabio Piro (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6645?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Fabio Piro updated SOLR-6645:
-
Affects Version/s: 4.10.1
Fix Version/s: Trunk
   5.0

> Refactored DocumentObjectBinder and added AnnotationListeners
> -
>
> Key: SOLR-6645
> URL: https://issues.apache.org/jira/browse/SOLR-6645
> Project: Solr
>  Issue Type: New Feature
>  Components: clients - java
>Affects Versions: 4.10.1
>Reporter: Fabio Piro
>  Labels: annotations, binder, listener, solrj
> Fix For: 5.0, Trunk
>
> Attachments: SOLR-6645.patch
>
>
> Hello good people.
> It is understandable that the priority of SolrJ is to provide a stable Java 
> API rather than a feature-rich client; I'm well aware of that. On the other 
> hand, wanting more features nowadays usually means reaching for Spring Data 
> Solr, and although I appreciate the enrichment work of that lib, depending 
> on its monolithic dependencies and magic is sometimes not a viable option.
> So, I was thinking that the official DocumentObjectBinder could benefit from 
> some love, and I have implemented a listener pattern for the annotations. 
> You register your annotations and their related listeners in the binder, 
> and it invokes the corresponding listener method on getBean and on 
> toSolrInputDocument, giving you the chance to hook into the ongoing process.
> Changes are:
> * [MOD] */beans/DocumentObjectBinder*:  The new logic and a new constructor 
> for registering the annotations
> * [ADD] */impl/AccessorAnnotationListener*: Abstract utility class with the 
> former get(), set(), isArray, isList, isContainedInMap etc...
> * [ADD] */impl/FieldAnnotationListener*: all the rest of DocField for dealing 
> with @Field
> * [ADD] */AnnotationListener*: the base listener class
> * [MOD] */SolrServer*: added setBinder (this is the only tricky change, I 
> hope it's not a problem).
> It's all well documented and the code is very easy to read. Tests are all 
> green, it should be 100% backward compatible, and the performance impact is 
> nil (the logic flow is exactly the same as before; I changed only the bare 
> essentials).
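To make the registry idea concrete, here is a rough, self-contained sketch of how annotation-to-listener dispatch during bean binding can work. The names (`MiniBinder`, `Listener`, `@Tag`) are invented for this illustration and are not the SolrJ `DocumentObjectBinder` API:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Hypothetical stand-in annotation carrying a document field name.
@Retention(RetentionPolicy.RUNTIME)
@interface Tag { String value(); }

// Callback invoked while a bean is being populated from a document.
interface Listener {
    void onGetBean(Object bean, Field field, Map<String, Object> doc) throws Exception;
}

class MiniBinder {
    private final Map<Class<? extends Annotation>, Listener> listeners = new HashMap<>();

    // Annotations and their related listeners are registered up front...
    void register(Class<? extends Annotation> ann, Listener l) { listeners.put(ann, l); }

    // ...and the binder invokes the matching listener for every annotated field.
    <T> T getBean(Class<T> clazz, Map<String, Object> doc) throws Exception {
        T bean = clazz.getDeclaredConstructor().newInstance();
        for (Field f : clazz.getDeclaredFields()) {
            for (Annotation a : f.getAnnotations()) {
                Listener l = listeners.get(a.annotationType());
                if (l != null) l.onGetBean(bean, f, doc);
            }
        }
        return bean;
    }
}

public class ListenerDemo {
    static class Doc { @Tag("title") String title; }

    public static void main(String[] args) throws Exception {
        MiniBinder binder = new MiniBinder();
        binder.register(Tag.class, (bean, field, doc) -> {
            field.setAccessible(true);
            field.set(bean, doc.get(field.getAnnotation(Tag.class).value()));
        });
        Map<String, Object> doc = new HashMap<>();
        doc.put("title", "hello");
        Doc d = binder.getBean(Doc.class, doc);
        System.out.println(d.title); // prints "hello"
    }
}
```

Because all dispatch goes through the registry, overriding the behavior of an annotation is a matter of registering a different listener for it, which is the extensibility point the patch aims for.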
> Some Examples (they are not part of the pull-request):
> The long awaited @FieldObject in 4 lines of code:
> https://issues.apache.org/jira/browse/SOLR-1945
> {code:java}
> public class FieldObjectAnnotationListener extends AccessorAnnotationListener {
>     public FieldObjectAnnotationListener(AnnotatedElement element, FieldObject annotation) {
>         super(element, annotation);
>     }
>
>     @Override
>     public void onGetBean(Object obj, SolrDocument doc, DocumentObjectBinder binder) {
>         Object nested = binder.getBean(target.clazz, doc);
>         setTo(obj, nested);
>     }
>
>     @Override
>     public void onToSolrInputDocument(Object obj, SolrInputDocument doc, DocumentObjectBinder binder) {
>         SolrInputDocument nested = binder.toSolrInputDocument(getFrom(obj));
>         for (Map.Entry entry : nested.entrySet()) {
>             doc.addField(entry.getKey(), entry.getValue());
>         }
>     }
> }
> {code}
> Or something entirely new like an annotation for ChildDocuments:
> {code:java}
> public class ChildDocumentsAnnotationListener extends AccessorAnnotationListener {
>     public ChildDocumentsAnnotationListener(AnnotatedElement element, ChildDocuments annotation) {
>         super(element, annotation);
>         if (!target.isInList || target.clazz.isPrimitive()) {
>             throw new BindingException("@NestedDocuments is applicable only on List.");
>         }
>     }
>
>     @Override
>     public void onGetBean(Object obj, SolrDocument doc, DocumentObjectBinder binder) {
>         List nested = new ArrayList<>();
>         for (SolrDocument child : doc.getChildDocuments()) {
>             nested.add(binder.getBean(target.clazz, child)); // this should be recursive, but it's only an example
>         }
>         setTo(obj, nested);
>     }
>
>     @Override
>     public void onToSolrInputDocument(Object obj, SolrInputDocument doc, DocumentObjectBinder binder) {
>         SolrInputDocument nested = binder.toSolrInputDocument(getFrom(obj));
>         doc.addChildDocuments(nested.getChildDocuments());
>     }
> }
> {code}
> In addition, all the logic is encapsulated in the listener, so you can make a 
> custom FieldAnnotationListener too, and override the default one
> {code:java}
> public class CustomFieldAnnotationListener extends FieldAnnotationListener {
>     private boolean isTransientPresent;
>
>     public CustomFieldAnnotationListener(AnnotatedElement element, Field annotation) {
>         super(element, annotation);
>         this.isTransientPresent = elem

[jira] [Commented] (SOLR-6545) Query field list with wild card on dynamic field fails

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182721#comment-14182721
 ] 

ASF subversion and git services commented on SOLR-6545:
---

Commit 1634044 from sha...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1634044 ]

SOLR-6545: Query field list with wild card on dynamic field fails

> Query field list with wild card on dynamic field fails
> --
>
> Key: SOLR-6545
> URL: https://issues.apache.org/jira/browse/SOLR-6545
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10
> Environment: Mac OS X 10.9.5, Ubuntu 14.04.1 LTS
>Reporter: Burke Webster
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: SOLR-6545.patch, SOLR-6545.patch
>
>
> Downloaded 4.10.0, unpacked, and setup a solrcloud 2-node cluster by running: 
>   bin/solr -e cloud 
> Accepting all the default options and you will have a 2 node cloud running 
> with replication factor of 2.  
> Now add 2 documents by going to example/exampledocs, creating the following 
> file named my_test.xml:
> 
>  
>   1000
>   test 1
>   Text about test 1.
>   Category A
>  
>  
>   1001
>   test 2
>   Stuff about test 2.
>   Category B
>  
> 
> Then import these documents by running:
>   java -Durl=http://localhost:7574/solr/gettingstarted/update -jar post.jar 
> my_test.xml
> Verify the docs are there by hitting:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*
> Now run a query and ask for only the id and cat_*_s fields:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,cat_*
> You will only get the id fields back.  Change the query a little to include a 
> third field:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,name,cat_*
> You will now get the following exception:
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.Blockin

[jira] [Resolved] (SOLR-6545) Query field list with wild card on dynamic field fails

2014-10-24 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar resolved SOLR-6545.
-
   Resolution: Fixed
Fix Version/s: Trunk
   5.0

This is fixed. Thanks everyone!

> Query field list with wild card on dynamic field fails
> --
>
> Key: SOLR-6545
> URL: https://issues.apache.org/jira/browse/SOLR-6545
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10
> Environment: Mac OS X 10.9.5, Ubuntu 14.04.1 LTS
>Reporter: Burke Webster
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: SOLR-6545.patch, SOLR-6545.patch
>
>
> Downloaded 4.10.0, unpacked, and setup a solrcloud 2-node cluster by running: 
>   bin/solr -e cloud 
> Accepting all the default options and you will have a 2 node cloud running 
> with replication factor of 2.  
> Now add 2 documents by going to example/exampledocs, creating the following 
> file named my_test.xml:
> 
>  
>   1000
>   test 1
>   Text about test 1.
>   Category A
>  
>  
>   1001
>   test 2
>   Stuff about test 2.
>   Category B
>  
> 
> Then import these documents by running:
>   java -Durl=http://localhost:7574/solr/gettingstarted/update -jar post.jar 
> my_test.xml
> Verify the docs are there by hitting:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*
> Now run a query and ask for only the id and cat_*_s fields:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,cat_*
> You will only get the id fields back.  Change the query a little to include a 
> third field:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,name,cat_*
> You will now get the following exception:
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.j

[jira] [Updated] (SOLR-6545) Query field list with wild card on dynamic field fails

2014-10-24 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6545:

  Component/s: search
Fix Version/s: 4.10.2

> Query field list with wild card on dynamic field fails
> --
>
> Key: SOLR-6545
> URL: https://issues.apache.org/jira/browse/SOLR-6545
> Project: Solr
>  Issue Type: Bug
>  Components: search
>Affects Versions: 4.10
> Environment: Mac OS X 10.9.5, Ubuntu 14.04.1 LTS
>Reporter: Burke Webster
>Assignee: Shalin Shekhar Mangar
>Priority: Critical
> Fix For: 4.10.2
>
> Attachments: SOLR-6545.patch, SOLR-6545.patch
>
>
> Downloaded 4.10.0, unpacked, and setup a solrcloud 2-node cluster by running: 
>   bin/solr -e cloud 
> Accepting all the default options and you will have a 2 node cloud running 
> with replication factor of 2.  
> Now add 2 documents by going to example/exampledocs, creating the following 
> file named my_test.xml:
> 
>  
>   1000
>   test 1
>   Text about test 1.
>   Category A
>  
>  
>   1001
>   test 2
>   Stuff about test 2.
>   Category B
>  
> 
> Then import these documents by running:
>   java -Durl=http://localhost:7574/solr/gettingstarted/update -jar post.jar 
> my_test.xml
> Verify the docs are there by hitting:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*
> Now run a query and ask for only the id and cat_*_s fields:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,cat_*
> You will only get the id fields back.  Change the query a little to include a 
> third field:
>   http://localhost:8983/solr/gettingstarted/select?q=*:*&fl=id,name,cat_*
> You will now get the following exception:
> java.lang.NullPointerException
>   at 
> org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
>   at 
> org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
>   at 
> org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
>   at 
> org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
>   at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
>   at 
> org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
>   at 
> org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
>   at 
> org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
>   at 
> org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
>   at 
> org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
>   at 
> org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
>   at 
> org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
>   at 
> org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
>   at 
> org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
>   at 
> org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
>   at org.eclipse.jetty.server.Server.handle(Server.java:368)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
>   at 
> org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
>   at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
>   at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
>   at 
> org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
>   at 
> org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
>   at 
> org.eclipse.jetty.util.thread.QueuedThreadPoo

Solr field operation use cases wiki is now in the Solr reference guide

2014-10-24 Thread Jack Krupansky
I just noticed that this page has been completely moved to the Solr reference 
guide, so it needs a tombstone:
https://wiki.apache.org/solr/FieldOptionsByUseCase

Replaced by:
https://cwiki.apache.org/confluence/display/solr/Field+Properties+by+Use+Case

-- Jack Krupansky

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.7.0_67) - Build # 4287 - Still Failing!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4287/
Java: 32bit/jdk1.7.0_67 -client -XX:+UseG1GC

All tests passed

Build Log:
[...truncated 15014 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:525: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:473: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\build.xml:61: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\extra-targets.xml:39: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\build.xml:209: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\common-build.xml:440:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\common-build.xml:496:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\contrib\dataimporthandler-extras\build.xml:50:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-5.x-Windows\solr\contrib\contrib-build.xml:52:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 219 minutes 22 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 32bit/jdk1.7.0_67 -client 
-XX:+UseG1GC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (LUCENE-6023) Remove "final" modifier from four methods of TFIDFSimilarity class to make them overridable.

2014-10-24 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6023?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182667#comment-14182667
 ] 

Robert Muir commented on LUCENE-6023:
-

Override Similarity directly instead.

> Remove "final" modifier from four methods of TFIDFSimilarity class to make 
> them overridable.
> 
>
> Key: LUCENE-6023
> URL: https://issues.apache.org/jira/browse/LUCENE-6023
> Project: Lucene - Core
>  Issue Type: Bug
>  Components: core/query/scoring
>Affects Versions: 4.2.1
>Reporter: Hafiz M Hamid
> Fix For: 4.2.1
>
>
> The TFIDFSimilarity class has the following four of its public methods marked 
> "final", which keeps us from overriding them. There doesn't seem to be an 
> obvious reason for keeping these methods non-overridable.
> Here are the four methods:
> computeNorm()
> computeWeight()
> exactSimScorer()
> sloppySimScorer()



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: "final" modifier on some methods in TFIDFSimilarity class

2014-10-24 Thread Alan Woodward
Hi Hamid,

Can't you just extend Similarity instead?

Alan Woodward
www.flax.co.uk


On 24 Oct 2014, at 08:04, Hafiz Hamid wrote:

> Hi - I wanted to check if folks would be okay with removing the "final" 
> modifier from four methods (computeNorm, computeWeight, exactSimScorer and 
> sloppySimScorer) in Lucene's TFIDFSimilarity class. It doesn't look like 
> allowing these methods to be overridden would have any negative implications 
> for the behavior of the class, yet it would enable us to tune the tf-idf 
> scoring it provides to better serve our needs.
> 
> I've logged a Jira issue for this: LUCENE-6023. If folks don't have any 
> objections, I have a patch ready and can upload it.
> 
> Thanks,
> Hamid
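
The suggestion from the replies is to extend Similarity (or SimilarityBase) directly rather than unsealing TFIDFSimilarity. As a rough, plain-Java illustration of the kind of custom tf-idf weighting such a subclass might compute (the class name and formula below are invented for this sketch, not Lucene's actual API or scoring):

```java
// Illustrative custom tf-idf weighting in plain Java. In Lucene the same
// logic would live in a Similarity subclass; nothing here is Lucene API.
public class CustomTfIdf {
    // Smoothed inverse document frequency: log(numDocs / (docFreq + 1)) + 1.
    static double idf(long docFreq, long numDocs) {
        return Math.log((double) numDocs / (docFreq + 1)) + 1.0;
    }

    // Sublinear term-frequency scaling multiplied by idf.
    static double score(double termFreq, long docFreq, long numDocs) {
        return Math.sqrt(termFreq) * idf(docFreq, numDocs);
    }

    public static void main(String[] args) {
        // A term occurring 4 times in a doc, with docFreq 0 out of 1 doc:
        System.out.println(score(4, 0, 1)); // prints 2.0
    }
}
```

Putting the weighting in its own subclass keeps TFIDFSimilarity's invariants intact while still letting the scoring be tuned.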



[jira] [Resolved] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward resolved LUCENE-5911.
---
Resolution: Fixed

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Reopened] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward reopened LUCENE-5911:
---

Reopening for backport to 4.10.2

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182609#comment-14182609
 ] 

ASF subversion and git services commented on LUCENE-5911:
-

Commit 1634036 from [~romseygeek] in branch 'dev/trunk'
[ https://svn.apache.org/r1634036 ]

LUCENE-5911: Update trunk CHANGES.txt

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread Alan Woodward (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alan Woodward updated LUCENE-5911:
--
Fix Version/s: 4.10.2

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182608#comment-14182608
 ] 

ASF subversion and git services commented on LUCENE-5911:
-

Commit 1634035 from [~romseygeek] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634035 ]

LUCENE-5911: Update 5x CHANGES.txt

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-5911) Make MemoryIndex thread-safe for queries

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-5911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182605#comment-14182605
 ] 

ASF subversion and git services commented on LUCENE-5911:
-

Commit 1634034 from [~romseygeek] in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1634034 ]

LUCENE-5911: Add freeze() method to MemoryIndex to allow thread-safe searching
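
The freeze() approach in the commit above follows a common concurrency pattern: build on one thread, publish an immutable view, then read from many threads. A plain-Java sketch of that pattern (this is illustrative only, not Lucene's MemoryIndex code):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Sketch of freeze-then-share: one thread populates the index, calls
// freeze(), and afterwards any number of threads may search safely.
final class FreezableIndex {
    private final Map<String, List<Integer>> postings = new HashMap<>();
    private volatile boolean frozen = false;

    void addTerm(String term, int position) {
        if (frozen) throw new IllegalStateException("index is frozen");
        postings.computeIfAbsent(term, t -> new ArrayList<>()).add(position);
    }

    // The volatile write publishes all prior mutations to other threads,
    // after which the index is treated as read-only.
    void freeze() { frozen = true; }

    List<Integer> search(String term) {
        if (!frozen) throw new IllegalStateException("call freeze() before searching");
        return postings.getOrDefault(term, List.of());
    }
}

public class FreezeDemo {
    public static void main(String[] args) throws Exception {
        FreezableIndex index = new FreezableIndex();
        index.addTerm("lucene", 0);
        index.addTerm("lucene", 7);
        index.freeze();

        // Once frozen, concurrent searches need no further synchronization.
        Thread t = new Thread(() -> System.out.println(index.search("lucene")));
        t.start();
        t.join(); // prints "[0, 7]"
    }
}
```

Rejecting writes after freeze() is what makes the lock-free reads sound: the structure can never change underneath a searcher.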

> Make MemoryIndex thread-safe for queries
> 
>
> Key: LUCENE-5911
> URL: https://issues.apache.org/jira/browse/LUCENE-5911
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Alan Woodward
>Assignee: Alan Woodward
>Priority: Minor
> Fix For: 5.0, Trunk
>
> Attachments: LUCENE-5911-2.patch, LUCENE-5911.patch, LUCENE-5911.patch
>
>
> We want to be able to run multiple queries at once over a MemoryIndex in 
> luwak (see 
> https://github.com/flaxsearch/luwak/commit/49a8fba5764020c2f0e4dc29d80d93abb0231191),
>  but this isn't possible with the current implementation.  However, looking 
> at the code, it seems that it would be relatively simple to make MemoryIndex 
> thread-safe for reads/queries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.7.0_67) - Build # 4388 - Still Failing!

2014-10-24 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4388/
Java: 64bit/jdk1.7.0_67 -XX:+UseCompressedOops -XX:+UseParallelGC

All tests passed

Build Log:
[...truncated 14759 lines...]
BUILD FAILED
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:525: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:473: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\build.xml:61: The 
following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\extra-targets.xml:39: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build.xml:209: 
The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:440:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\common-build.xml:496:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\dataimporthandler-extras\build.xml:50:
 The following error occurred while executing this line:
C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\contrib\contrib-build.xml:52:
 impossible to resolve dependencies:
resolve failed - see output for details

Total time: 169 minutes 42 seconds
Build step 'Invoke Ant' marked build as failure
[description-setter] Description set: Java: 64bit/jdk1.7.0_67 
-XX:+UseCompressedOops -XX:+UseParallelGC
Archiving artifacts
Recording test results
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Resolved] (SOLR-6647) Bad error message when missing resource from ZK when parsing Schema

2014-10-24 Thread JIRA

 [ 
https://issues.apache.org/jira/browse/SOLR-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jan Høydahl resolved SOLR-6647.
---
Resolution: Fixed

Closing. Follow-up with further improvements in SOLR-6649.

> Bad error message when missing resource from ZK when parsing Schema
> ---
>
> Key: SOLR-6647
> URL: https://issues.apache.org/jira/browse/SOLR-6647
> Project: Solr
>  Issue Type: Bug
>  Components: Schema and Analysis, SolrCloud
>Affects Versions: 4.10.1
>Reporter: Jan Høydahl
>Assignee: Jan Høydahl
>  Labels: logging, solrcloud, zookeeper
> Fix For: 4.10.2, 5.0, Trunk
>
> Attachments: SOLR-6647.patch, SOLR-6647.patch
>
>
> Creating a collection via Collection API. Schema points to a file which is 
> not in our config folder in ZooKeeper. Getting the infamous error message 
> {{ZkSolrResourceLoader does not support getConfigDir()}} instead of the more 
> helpful message about which resource is missing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)




[jira] [Commented] (SOLR-6647) Bad error message when missing resource from ZK when parsing Schema

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6647?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182558#comment-14182558
 ] 

ASF subversion and git services commented on SOLR-6647:
---

Commit 1634015 from jan...@apache.org in branch 'dev/branches/lucene_solr_4_10'
[ https://svn.apache.org/r1634015 ]

SOLR-6647: Bad error message when missing resource from ZK when parsing Schema 
(backport)






[jira] [Commented] (LUCENE-6021) Make FixedBitSet and SparseFixedBitSet share a wider common interface

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182545#comment-14182545
 ] 

ASF subversion and git services commented on LUCENE-6021:
-

Commit 1634013 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1634013 ]

LUCENE-6021: Make SparseFixedBitSet and FixedBitSet share a common "BitSet" 
interface.

> Make FixedBitSet and SparseFixedBitSet share a wider common interface
> -
>
> Key: LUCENE-6021
> URL: https://issues.apache.org/jira/browse/LUCENE-6021
> Project: Lucene - Core
>  Issue Type: Improvement
>Reporter: Adrien Grand
>Assignee: Adrien Grand
>Priority: Minor
> Fix For: 5.0
>
> Attachments: LUCENE-6021.patch, LUCENE-6021.patch
>
>
> Today, the only common interfaces that these two classes share are Bits and 
> Accountable. I would like to add a BitSet base class that would be both 
> extended by FixedBitSet and SparseFixedBitSet. The idea is to share more code 
> between these two impls and make them interchangeable for more use-cases so 
> that we could just use one or the other based on the density of the data that 
> we are working on.
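
As a rough illustration of the proposal (toy code under my own names, not Lucene's actual API), a shared base class lets callers pick a dense or sparse implementation by expected density and use it interchangeably:

```java
// Toy sketch of the LUCENE-6021 idea: a common BitSet base class with a
// dense and a sparse implementation, chosen by expected density. All class
// names here are hypothetical, not Lucene's.
import java.util.HashSet;
import java.util.Set;

abstract class ToyBitSet {
    abstract void set(int i);
    abstract boolean get(int i);
    abstract int cardinality();
}

// Dense: one bit per position in a packed long[] (O(numBits/64) memory).
class DenseBitSet extends ToyBitSet {
    private final long[] words;
    DenseBitSet(int numBits) { words = new long[(numBits + 63) / 64]; }
    @Override void set(int i) { words[i >> 6] |= 1L << (i & 63); }
    @Override boolean get(int i) { return (words[i >> 6] & (1L << (i & 63))) != 0; }
    @Override int cardinality() {
        int c = 0;
        for (long w : words) c += Long.bitCount(w);
        return c;
    }
}

// Sparse: memory proportional to the number of set bits, not numBits.
class SparseBitSet extends ToyBitSet {
    private final Set<Integer> bits = new HashSet<>();
    @Override void set(int i) { bits.add(i); }
    @Override boolean get(int i) { return bits.contains(i); }
    @Override int cardinality() { return bits.size(); }
}

public class Demo {
    // With a common base class, the choice of impl becomes a local detail;
    // the 1/20 density threshold is an arbitrary illustration.
    static ToyBitSet choose(int numBits, int expectedSetBits) {
        return expectedSetBits * 20 < numBits
                ? new SparseBitSet()
                : new DenseBitSet(numBits);
    }

    public static void main(String[] args) {
        ToyBitSet sparse = choose(1_000_000, 10); // few bits in a huge range
        ToyBitSet dense = choose(128, 100);       // most bits set
        sparse.set(42);
        dense.set(42);
        dense.set(43);
        System.out.println(sparse.get(42) + " " + sparse.cardinality()
                + " " + dense.cardinality());
    }
}
```

The point of the shared interface is exactly what the toy `choose()` shows: callers code against the base class and the density-based choice stays an implementation detail.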






[jira] [Commented] (LUCENE-6021) Make FixedBitSet and SparseFixedBitSet share a wider common interface

2014-10-24 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14182532#comment-14182532
 ] 

ASF subversion and git services commented on LUCENE-6021:
-

Commit 1634012 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1634012 ]

LUCENE-6021: Make SparseFixedBitSet and FixedBitSet share a common "BitSet" 
interface.







"final" modifier on some methods in TFIDFSimilarity class

2014-10-24 Thread Hafiz Hamid
Hi - I wanted to check whether folks would be okay with removing the "final"
modifier from four methods (computeNorm, computeWeight, exactSimScorer and
sloppySimScorer) in Lucene's TFIDFSimilarity class. It doesn't look like
allowing these methods to be overridden would have any negative implications
for the behavior of the class, yet it would let us tune the tf-idf scoring
the class provides to better serve our needs.

I've logged a Jira issue for this: LUCENE-6023. If folks don't have any
objections, I have a patch ready and can upload it.
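
To make the motivation concrete, here is a standalone sketch (not Lucene code; all class names are mine) of the kind of tuning that overridable scoring hooks enable. The tf and idf formulas mirror the classic defaults documented for tf-idf scoring (tf = sqrt(freq), idf = 1 + ln(numDocs / (docFreq + 1))); the sublinear variant is a hypothetical customization:

```java
// Standalone tf-idf sketch showing why overridable scoring hooks are useful.
// ClassicTfIdf uses the classic square-root tf and log idf formulas;
// SublinearTfIdf overrides tf() to grow slower for large frequencies.
class ClassicTfIdf {
    float tf(float freq) {
        return (float) Math.sqrt(freq);
    }

    float idf(long docFreq, long numDocs) {
        return (float) (Math.log((double) numDocs / (docFreq + 1)) + 1.0);
    }

    float weight(float freq, long docFreq, long numDocs) {
        return tf(freq) * idf(docFreq, numDocs);
    }
}

// A hypothetical tuning: sublinear tf (1 + ln freq) dampens very frequent
// terms more aggressively than sqrt does -- the sort of change that is
// impossible while the hook methods are final.
class SublinearTfIdf extends ClassicTfIdf {
    @Override
    float tf(float freq) {
        return freq > 0 ? 1f + (float) Math.log(freq) : 0f;
    }
}

public class TfIdfDemo {
    public static void main(String[] args) {
        ClassicTfIdf classic = new ClassicTfIdf();
        ClassicTfIdf tuned = new SublinearTfIdf();
        // Same idf, different tf: only the overridden hook changes.
        System.out.println(classic.weight(100f, 10, 1000));
        System.out.println(tuned.weight(100f, 10, 1000));
    }
}
```

Everything except the overridden tf() is inherited unchanged, which is the appeal of un-finalizing the hooks rather than forking the whole class.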

Thanks,
Hamid