[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.8.0_20) - Build # 11453 - Failure!

2014-11-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11453/
Java: 32bit/jdk1.8.0_20 -client -XX:+UseSerialGC (asserts: true)

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.testDistribSearch

Error Message:
There are still nodes recoverying - waited for 30 seconds

Stack Trace:
java.lang.AssertionError: There are still nodes recoverying - waited for 30 
seconds
at 
__randomizedtesting.SeedInfo.seed([6BCA88657BF569F1:EA2C067D0CAA09CD]:0)
at org.junit.Assert.fail(Assert.java:93)
at 
org.apache.solr.cloud.AbstractDistribZkTestBase.waitForRecoveriesToFinish(AbstractDistribZkTestBase.java:178)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForRecoveriesToFinish(AbstractFullDistribZkTestBase.java:840)
at 
org.apache.solr.cloud.AbstractFullDistribZkTestBase.waitForThingsToLevelOut(AbstractFullDistribZkTestBase.java:1459)
at 
org.apache.solr.cloud.DistribDocExpirationUpdateProcessorTest.doTest(DistribDocExpirationUpdateProcessorTest.java:79)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 

[JENKINS] Lucene-Solr-trunk-Windows (64bit/jdk1.8.0_40-ea-b09) - Build # 4433 - Failure!

2014-11-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4433/
Java: 64bit/jdk1.8.0_40-ea-b09 -XX:-UseCompressedOops -XX:+UseSerialGC 
(asserts: true)

2 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.cloud.HttpPartitionTest

Error Message:
1 thread leaked from SUITE scope at org.apache.solr.cloud.HttpPartitionTest:
 1) Thread[id=10226, name=Thread-3393, state=RUNNABLE, 
group=TGRP-HttpPartitionTest] at 
java.net.SocketInputStream.socketRead0(Native Method) at 
java.net.SocketInputStream.socketRead(SocketInputStream.java:116) at 
java.net.SocketInputStream.read(SocketInputStream.java:170) at 
java.net.SocketInputStream.read(SocketInputStream.java:141) at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
 at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84) 
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
 at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
 at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
 at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
 at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
 at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
 at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
 at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
 at 
org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:682)
 at 
org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:486)
 at 
org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:863)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:106)
 at 
org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:57)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:465)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
 at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
 at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1650)
 at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:430)
 at 
org.apache.solr.cloud.ZkController.access$100(ZkController.java:101) at 
org.apache.solr.cloud.ZkController$1.command(ZkController.java:269) at 
org.apache.solr.common.cloud.ConnectionManager$1$1.run(ConnectionManager.java:166)

Stack Trace:
com.carrotsearch.randomizedtesting.ThreadLeakError: 1 thread leaked from SUITE 
scope at org.apache.solr.cloud.HttpPartitionTest: 
   1) Thread[id=10226, name=Thread-3393, state=RUNNABLE, 
group=TGRP-HttpPartitionTest]
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
at java.net.SocketInputStream.read(SocketInputStream.java:170)
at java.net.SocketInputStream.read(SocketInputStream.java:141)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.fillBuffer(AbstractSessionInputBuffer.java:160)
at 
org.apache.http.impl.io.SocketInputBuffer.fillBuffer(SocketInputBuffer.java:84)
at 
org.apache.http.impl.io.AbstractSessionInputBuffer.readLine(AbstractSessionInputBuffer.java:273)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:140)
at 
org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:57)
at 
org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:260)
at 
org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:283)
at 
org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:251)
at 
org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:197)
at 
org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:271)
at 
org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at 

[JENKINS-MAVEN] Lucene-Solr-Maven-5.x #759: POMs out of sync

2014-11-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-5.x/759/

3 tests failed.
FAILED:  
org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest.org.apache.solr.hadoop.MapReduceIndexerToolArgumentParserTest

Error Message:
null

Stack Trace:
java.lang.AssertionError: null
at __randomizedtesting.SeedInfo.seed([F4627E38BB3EE154]:0)
at 
org.apache.lucene.util.TestRuleTemporaryFilesCleanup.before(TestRuleTemporaryFilesCleanup.java:92)
at 
com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.before(TestRuleAdapter.java:26)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:35)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at java.lang.Thread.run(Thread.java:745)


FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([7753360D341032BD]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([7753360D341032BD]:0)




Build Log:
[...truncated 53895 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:548: 
The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-5.x/build.xml:200: 
The following error occurred while executing this line:
: Java returned: 1

Total time: 428 minutes 45 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.7.0_67) - Build # 11454 - Still Failing!

2014-11-15 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11454/
Java: 32bit/jdk1.7.0_67 -client -XX:+UseParallelGC (asserts: true)

1 tests failed.
REGRESSION:  
org.apache.lucene.analysis.icu.segmentation.TestICUTokenizerCJK.testRandomHugeStrings

Error Message:
term 4 expected:<ー[]> but was:<ー[詛]>

Stack Trace:
org.junit.ComparisonFailure: term 4 expected:<ー[]> but was:<ー[詛]>
at 
__randomizedtesting.SeedInfo.seed([4AA917552BD0ECC:9C89F6B60CCBB284]:0)
at org.junit.Assert.assertEquals(Assert.java:125)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:180)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:295)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.assertTokenStreamContents(BaseTokenStreamTestCase.java:299)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkAnalysisConsistency(BaseTokenStreamTestCase.java:815)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:614)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:512)
at 
org.apache.lucene.analysis.BaseTokenStreamTestCase.checkRandomData(BaseTokenStreamTestCase.java:436)
at 
org.apache.lucene.analysis.icu.segmentation.TestICUTokenizerCJK.testRandomHugeStrings(TestICUTokenizerCJK.java:89)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Commented] (LUCENE-6062) Index corruption from numeric DV updates

2014-11-15 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213617#comment-14213617
 ] 

Shai Erera commented on LUCENE-6062:


I found the problem. With your change to the test, you created the following 
scenario: update a non-existing NDV field in a segment that has other NDV fields 
(note that without this change, the test only ensured that you can update a 
non-existing NDV field in a segment without any other NDV fields).

What happens is that in this code of SegmentDocValuesProducer:

{code}
if (baseProducer == null) {
  // the base producer gets all the fields, so the Codec can validate properly
  baseProducer = segDocValues.getDocValuesProducer(docValuesGen, si, IOContext.READ, dir, dvFormat, fieldInfos);
  dvGens.add(docValuesGen);
  dvProducers.add(baseProducer);
}
{code}

We pass all the fieldInfos, which now also contain an FI for 'ndv'. But that 
field was never written to the base segment file (the .cfs), and so it cannot 
be found there...

Not yet sure how to resolve it. We pass all the FIS because e.g. Lucene50DVP 
verifies that every field it encounters in the metadata file has a matching 
entry in the given FieldInfos (to check for index corruption). So we cannot 
just pass only the FIs with dvGen=-1. On the other hand, we do have a case here 
where the base .cfs never had an instance of that field ... it's like we need 
to know in which 'gen' a DV field was introduced. Then we can pass to 
baseProducer all the FIs whose startGen==-1...
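The 'startGen' idea above can be modeled in a standalone way: record, for each DV field, the gen in which it was introduced, and hand the base producer only the fields that existed when the base segment was written (startGen == -1). The sketch below uses a hypothetical `FieldInfoStub` stand-in type, not Lucene's actual FieldInfo API, and is only an illustration of the filtering step, not a patch:

```java
import java.util.ArrayList;
import java.util.List;

public class StartGenSketch {
  // Hypothetical stand-in for a FieldInfo that remembers the gen
  // in which its doc-values were first written (-1 = base segment).
  static final class FieldInfoStub {
    final String name;
    final long startGen;
    FieldInfoStub(String name, long startGen) {
      this.name = name;
      this.startGen = startGen;
    }
  }

  // Select only the fields the base producer should validate:
  // those whose doc-values existed when the base segment was written.
  static List<FieldInfoStub> baseFields(List<FieldInfoStub> all) {
    List<FieldInfoStub> out = new ArrayList<>();
    for (FieldInfoStub fi : all) {
      if (fi.startGen == -1L) {
        out.add(fi);
      }
    }
    return out;
  }

  public static void main(String[] args) {
    List<FieldInfoStub> infos = new ArrayList<>();
    infos.add(new FieldInfoStub("existing_ndv", -1L)); // present in base segment
    infos.add(new FieldInfoStub("ndv", 2L));           // introduced by an update in gen 2
    List<FieldInfoStub> base = baseFields(infos);
    System.out.println(base.size());      // 1
    System.out.println(base.get(0).name); // existing_ndv
  }
}
```

With such a filter, the 'ndv' field introduced by a later update would never be presented to the base producer, so the codec's validation against the base .cfs would no longer trip over a field that was never written there.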

 Index corruption from numeric DV updates
 

 Key: LUCENE-6062
 URL: https://issues.apache.org/jira/browse/LUCENE-6062
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Fix For: 4.10.3, 5.0, Trunk


 I hit this while working on LUCENE-6005: when cutting over 
 TestNumericDocValuesUpdates to the new Document2 API, I accidentally enabled 
 additional docValues in the test, and hit this:
 {noformat}
 There was 1 failure:
 1) 
 testUpdateSegmentWithNoDocValues(org.apache.lucene.index.TestNumericDocValuesUpdates)
 java.io.FileNotFoundException: _1_Asserting_0.dvm in 
 dir=RAMDirectory@259847e5 
 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@30981eab
   at __randomizedtesting.SeedInfo.seed([0:7C88A439A551C47D]:0)
   at 
 org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:645)
   at 
 org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
   at 
 org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer.init(Lucene50DocValuesProducer.java:130)
   at 
 org.apache.lucene.codecs.lucene50.Lucene50DocValuesFormat.fieldsProducer(Lucene50DocValuesFormat.java:182)
   at 
 org.apache.lucene.codecs.asserting.AssertingDocValuesFormat.fieldsProducer(AssertingDocValuesFormat.java:66)
   at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.init(PerFieldDocValuesFormat.java:267)
   at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat.fieldsProducer(PerFieldDocValuesFormat.java:357)
   at 
 org.apache.lucene.index.SegmentDocValues.newDocValuesProducer(SegmentDocValues.java:51)
   at 
 org.apache.lucene.index.SegmentDocValues.getDocValuesProducer(SegmentDocValues.java:68)
   at 
 org.apache.lucene.index.SegmentDocValuesProducer.init(SegmentDocValuesProducer.java:63)
   at 
 org.apache.lucene.index.SegmentReader.initDocValuesProducer(SegmentReader.java:167)
   at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:109)
   at 
 org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
   at 
 org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:556)
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50)
   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
   at 
 org.apache.lucene.index.TestNumericDocValuesUpdates.testUpdateSegmentWithNoDocValues(TestNumericDocValuesUpdates.java:769)
 {noformat}
 A one-line change to the existing test (on trunk) causes this corruption:
 {noformat}
 Index: 
 lucene/core/src/test/org/apache/lucene/index/TestNumericDocValuesUpdates.java
 ===
 --- 
 lucene/core/src/test/org/apache/lucene/index/TestNumericDocValuesUpdates.java 
 (revision 1639580)
 +++ 
 lucene/core/src/test/org/apache/lucene/index/TestNumericDocValuesUpdates.java 
 (working copy)
 @@ -750,6 +750,7 @@
  // second segment with no NDV
  doc = new Document();
  doc.add(new StringField("id", "doc1", 

[jira] [Created] (SOLR-6744) fl renaming of uniqueKey field generates null pointer exception in SolrCloud configuration

2014-11-15 Thread Garth Grimm (JIRA)
Garth Grimm created SOLR-6744:
-

 Summary: fl renaming of uniqueKey field generates null pointer 
exception in SolrCloud configuration
 Key: SOLR-6744
 URL: https://issues.apache.org/jira/browse/SOLR-6744
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
 Environment: Multiple replicas in a SolrCloud config. This specific 
example uses a 4-shard, 3-replicas-per-shard config. This bug does NOT exist 
when the query is handled by a single core.
Reporter: Garth Grimm
Priority: Minor


If you try to rename the uniqueKey field using 'fl' and send the query to a 
SolrCloud configuration, a null pointer exception is generated.

http://localhost:8983/solr/cloudcollection/select?q=*%3A*&wt=xml&indent=true&fl=key:id

<response><lst name="responseHeader"><int name="status">500</int><int 
name="QTime">11</int><lst name="params"><str name="q">*:*</str><str 
name="indent">true</str><str name="fl">key:id</str><str 
name="wt">xml</str></lst></lst><lst name="error"><str 
name="trace">java.lang.NullPointerException
at 
org.apache.solr.handler.component.QueryComponent.returnFields(QueryComponent.java:1257)
at 
org.apache.solr.handler.component.QueryComponent.handleRegularResponses(QueryComponent.java:720)
at 
org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:695)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:324)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1967)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at 
org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at 
org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at 
org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at 
org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.headerComplete(AbstractHttpConnection.java:942)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.headerComplete(AbstractHttpConnection.java:1004)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:640)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
</str><int name="code">500</int></lst></response>
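A minimal model of why the rename trips up the distributed path: with fl=key:id each shard returns the id value only under the alias "key", while the aggregator still looks up the uniqueKey by its schema name, so the lookup yields null. This is an illustrative sketch with made-up helper names, not Solr's actual QueryComponent code:

```java
import java.util.HashMap;
import java.util.Map;

public class FlRenameSketch {
  // Hypothetical shard response document: field name -> value.
  // With fl=key:id the shard returns the id under "key", not "id".
  static Map<String, Object> shardDoc() {
    Map<String, Object> doc = new HashMap<>();
    doc.put("key", "doc-42");
    return doc;
  }

  // The aggregator looks up the uniqueKey by its schema name ("id").
  // This returns null when the field was renamed away - the condition
  // that leads to the NullPointerException reported above.
  static Object uniqueKeyOf(Map<String, Object> doc, String uniqueKeyName) {
    return doc.get(uniqueKeyName);
  }

  public static void main(String[] args) {
    Object key = uniqueKeyOf(shardDoc(), "id");
    System.out.println(key); // null: the id only exists under "key"
  }
}
```

On a single core no cross-shard merge happens, which matches the report that the bug does not appear there.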



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 676 - Still Failing

2014-11-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/676/

2 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
java.lang.NullPointerException 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
java.lang.NullPointerException

at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 

[jira] [Updated] (SOLR-6127) Improve Solr's exampledocs data

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6127:
---
Fix Version/s: 5.0

 Improve Solr's exampledocs data
 ---

 Key: SOLR-6127
 URL: https://issues.apache.org/jira/browse/SOLR-6127
 Project: Solr
  Issue Type: Improvement
  Components: documentation
Reporter: Varun Thacker
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: LICENSE.txt, README.txt, README.txt, film.csv, 
 film.json, film.xml, freebase_film_dump.py, freebase_film_dump.py, 
 freebase_film_dump.py, freebase_film_dump.py, freebase_film_dump.py, 
 freebase_film_dump.py, freebase_film_dump.py


 Currently 
 - The CSV example has 10 documents.
 - The JSON example has 4 documents.
 - The XML example has 32 documents.
 1. We should have an equal number of documents, and the same documents, in all 
 the example formats.
 2. A data set that is slightly more comprehensive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-6127) Improve Solr's exampledocs data

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher reassigned SOLR-6127:
--

Assignee: Erik Hatcher




[jira] [Created] (SOLR-6745) Stats Field Exclusion Doesn't work in Distributed Mode

2014-11-15 Thread Harish Agarwal (JIRA)
Harish Agarwal created SOLR-6745:


 Summary: Stats Field Exclusion Doesn't work in Distributed Mode
 Key: SOLR-6745
 URL: https://issues.apache.org/jira/browse/SOLR-6745
 Project: Solr
  Issue Type: Bug
  Components: SearchComponents - other
Affects Versions: 4.9.1
 Environment: Ubuntu 12.04
Reporter: Harish Agarwal
Priority: Minor
 Fix For: 4.9.1


When using the stats exclusion operator an Exception is raised.






[jira] [Updated] (SOLR-6127) Improve Solr's exampledocs data

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6127?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6127:
---
Component/s: scripts and tools




[jira] [Updated] (SOLR-6700) ChildDocTransformer doesn't return correct children after updating and optimising solr index

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6700:
---
Summary: ChildDocTransformer doesn't return correct children after updating 
and optimising solr index  (was: ChildDocTransformer doesn't return correct 
children after updating and optimising sol'r index)

 ChildDocTransformer doesn't return correct children after updating and 
 optimising solr index
 

 Key: SOLR-6700
 URL: https://issues.apache.org/jira/browse/SOLR-6700
 Project: Solr
  Issue Type: Bug
Reporter: Bogdan Marinescu
Priority: Blocker
 Fix For: 4.10.3, 5.0


 I have an index with nested documents. 
 {code:title=schema.xml snippet|borderStyle=solid}
 <field name="id" type="string" indexed="true" stored="true" required="true" multiValued="false" />
 <field name="entityType" type="int" indexed="true" stored="true" required="true"/>
 <field name="pName" type="string" indexed="true" stored="true"/>
 <field name="cAlbum" type="string" indexed="true" stored="true"/>
 <field name="cSong" type="string" indexed="true" stored="true"/>
 <field name="_root_" type="string" indexed="true" stored="true"/>
 <field name="_version_" type="long" indexed="true" stored="true"/>
 {code}
 Afterwards I add the following documents:
 {code}
 <add>
   <doc>
     <field name="id">1</field>
     <field name="pName">Test Artist 1</field>
     <field name="entityType">1</field>
     <doc>
       <field name="id">11</field>
       <field name="cAlbum">Test Album 1</field>
       <field name="cSong">Test Song 1</field>
       <field name="entityType">2</field>
     </doc>
   </doc>
   <doc>
     <field name="id">2</field>
     <field name="pName">Test Artist 2</field>
     <field name="entityType">1</field>
     <doc>
       <field name="id">22</field>
       <field name="cAlbum">Test Album 2</field>
       <field name="cSong">Test Song 2</field>
       <field name="entityType">2</field>
     </doc>
   </doc>
 </add>
 {code}
 After performing the following query 
 {quote}
 http://localhost:8983/solr/collection1/select?q=%7B!parent+which%3DentityType%3A1%7D&fl=*%2Cscore%2C%5Bchild+parentFilter%3DentityType%3A1%5D&wt=json&indent=true
 {quote}
 I get a correct answer (child matches parent, check _root_ field)
 {code:title=add docs|borderStyle=solid}
 {
   "responseHeader":{
     "status":0,
     "QTime":1,
     "params":{
       "fl":"*,score,[child parentFilter=entityType:1]",
       "indent":"true",
       "q":"{!parent which=entityType:1}",
       "wt":"json"}},
   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
       {
         "id":"1",
         "pName":"Test Artist 1",
         "entityType":1,
         "_version_":1483832661048819712,
         "_root_":"1",
         "score":1.0,
         "_childDocuments_":[
         {
           "id":"11",
           "cAlbum":"Test Album 1",
           "cSong":"Test Song 1",
           "entityType":2,
           "_root_":"1"}]},
       {
         "id":"2",
         "pName":"Test Artist 2",
         "entityType":1,
         "_version_":1483832661050916864,
         "_root_":"2",
         "score":1.0,
         "_childDocuments_":[
         {
           "id":"22",
           "cAlbum":"Test Album 2",
           "cSong":"Test Song 2",
           "entityType":2,
           "_root_":"2"}]}]
   }}
 {code}
 Afterwards I try to update one document:
 {code:title=update doc|borderStyle=solid}
 <add>
   <doc>
     <field name="id">1</field>
     <field name="pName" update="set">INIT</field>
   </doc>
 </add>
 {code}
 After performing the previous query I get the right result (like the previous 
 one but with the pName field updated).
 The problem only comes after performing an *optimize*. 
 Now, the same query yields the following result:
 {code}
 {
   "responseHeader":{
     "status":0,
     "QTime":1,
     "params":{
       "fl":"*,score,[child parentFilter=entityType:1]",
       "indent":"true",
       "q":"{!parent which=entityType:1}",
       "wt":"json"}},
   "response":{"numFound":2,"start":0,"maxScore":1.0,"docs":[
       {
         "id":"2",
         "pName":"Test Artist 2",
         "entityType":1,
         "_version_":1483832661050916864,
         "_root_":"2",
         "score":1.0,
         "_childDocuments_":[
         {
           "id":"11",
           "cAlbum":"Test Album 1",
           "cSong":"Test Song 1",
           "entityType":2,
           "_root_":"1"},
         {
           "id":"22",
           "cAlbum":"Test Album 2",
           "cSong":"Test Song 2",
           "entityType":2,
           "_root_":"2"}]},
       {
         "id":"1",
         "pName":"INIT",
         "entityType":1,
         "_root_":"1",
         "_version_":1483832916867809280,
         "score":1.0}]
   }}
 {code}
 As can be seen, the document with id:2 now contains the child with id:11 that 
 belongs to the document with id:1. 
 I haven't found any references on the web about this except 
 http://blog.griddynamics.com/2013/09/solr-block-join-support.html
 Similar issue: SOLR-6096
 Is this problem known? Is there a workaround for this? 
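
 A hedged illustration of why this can happen (a toy model, not Lucene or 
 Solr code): Lucene block join identifies a parent's children purely by index 
 position; the children are the documents immediately preceding the parent in 
 the segment. An atomic update rewrites the parent as a new, standalone 
 document, so after a merge/optimize the document order no longer encodes the 
 original blocks. Modeling the position-based child lookup makes the reported 
 symptom reproducible:

 ```java
 import java.util.ArrayList;
 import java.util.List;

 public class BlockJoinToyModel {
     // Toy model of block join: docs in index order, parents marked true.
     // A parent's children are the run of non-parent docs directly before it.
     static List<Integer> childrenOf(boolean[] isParent, int parentPos) {
         List<Integer> kids = new ArrayList<>();
         for (int i = parentPos - 1; i >= 0 && !isParent[i]; i--) {
             kids.add(0, i); // keep index order
         }
         return kids;
     }

     public static void main(String[] args) {
         // Original order: [child 11, parent 1, child 22, parent 2]
         boolean[] before = {false, true, false, true};
         System.out.println(childrenOf(before, 1)); // [0] -> child 11 under parent 1
         System.out.println(childrenOf(before, 3)); // [2] -> child 22 under parent 2

         // After the atomic update + optimize, the rewritten parent 1 lands at
         // the end: [child 11, child 22, parent 2, parent 1(updated)]
         boolean[] after = {false, false, true, true};
         System.out.println(childrenOf(after, 2)); // [0, 1] -> parent 2 "owns" both children
         System.out.println(childrenOf(after, 3)); // []     -> updated parent 1 has none
     }
 }
 ```

 This matches the response above: after optimizing, id:2 returns both 
 children while the updated id:1 returns none, because the block contiguity 
 was lost rather than any field value being wrong.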





[jira] [Commented] (SOLR-6700) ChildDocTransformer doesn't return correct children after updating and optimising solr index

2014-11-15 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213667#comment-14213667
 ] 

Erik Hatcher commented on SOLR-6700:


What's the course of action here? Is there something that can be done for a 
possible 4.10.3 release? This looks like it is about known constraints when 
using Lucene block join, so maybe it should be resolved as Won't Fix or Not a 
Problem?


[jira] [Updated] (SOLR-6745) Stats Field Exclusion Doesn't work in Distributed Mode

2014-11-15 Thread Harish Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Agarwal updated SOLR-6745:
-
Description: When using the stats exclusion operator in distributed mode an 
Exception is raised.  (was: When using the stats exclusion operator an 
Exception is raised.)




[jira] [Updated] (SOLR-6702) Add facet.interval support to /browse GUI

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6702?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6702:
---
Fix Version/s: (was: 4.10.3)

Removing 4.10.3 as a fix version - I don't think we'll address it there, but we 
can aim to get this added to 4x and beyond.

 Add facet.interval support to /browse GUI
 -

 Key: SOLR-6702
 URL: https://issues.apache.org/jira/browse/SOLR-6702
 Project: Solr
  Issue Type: Task
  Components: contrib - Velocity
Affects Versions: 4.10.2
Reporter: Jan Høydahl
  Labels: velocity
 Fix For: 5.0, Trunk


 Now that we have the new [Interval 
 faceting|https://cwiki.apache.org/confluence/display/solr/Faceting#Faceting-IntervalFaceting],
 it should be shown in the Velocity /browse GUI.






[jira] [Updated] (SOLR-6649) Remove all use of loader.getConfigDir in SolrCloud mode

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6649:
---
Description: 
We have had several cases where getConfigDir() is called in Cloud/ZK mode 
causing exception, by components/features that were not yet 100% cloud-aware.

We should inspect the whole code base and avoid calling getConfigDir. Places 
where we want the full path of a resource for logging, we can simply use the 
new static method {{CloudUtil#unifiedResourcePath(loader)}} instead, introduced 
in SOLR-6647.

  was:
We have had several cases where getConfigDir() is called in Cloud/ZK mode 
causing exception, by components/features that were not yet 100% cloud-aware.

We should inspect the whole code base and avoid calling getConfigDir. Places 
where we want the full path of a resource for logging, we can simply use the 
new static method {{ClouldUtil#unifiedResourcePath(loader)}} instead, 
introduced in SOLR-6647.


 Remove all use of loader.getConfigDir in SolrCloud mode
 ---

 Key: SOLR-6649
 URL: https://issues.apache.org/jira/browse/SOLR-6649
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.1
Reporter: Jan Høydahl
Assignee: Jan Høydahl
  Labels: logging
 Fix For: 4.10.3, Trunk


 We have had several cases where getConfigDir() is called in Cloud/ZK mode 
 causing exception, by components/features that were not yet 100% cloud-aware.
 We should inspect the whole code base and avoid calling getConfigDir. Places 
 where we want the full path of a resource for logging, we can simply use the 
 new static method {{CloudUtil#unifiedResourcePath(loader)}} instead, 
 introduced in SOLR-6647.






[jira] [Updated] (SOLR-6745) Stats Field Exclusion Doesn't work in Distributed Mode

2014-11-15 Thread Harish Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harish Agarwal updated SOLR-6745:
-
Attachment: SOLR-6745.patch




[jira] [Commented] (SOLR-6745) Stats Field Exclusion Doesn't work in Distributed Mode

2014-11-15 Thread Harish Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6745?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213671#comment-14213671
 ] 

Harish Agarwal commented on SOLR-6745:
--

I've attached a fix which recognizes the exclusion operator in distributed mode.




[jira] [Updated] (SOLR-6539) SolrJ document object binding / BigDecimal

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-6539:
---
Fix Version/s: 5.0

 SolrJ document object binding / BigDecimal
 --

 Key: SOLR-6539
 URL: https://issues.apache.org/jira/browse/SOLR-6539
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Affects Versions: 4.9, 4.10, 4.10.1
Reporter: Bert Brecht
  Labels: patch
 Fix For: 4.10.3, 5.0, Trunk

 Attachments: SOLR-6539.diff


 We use BigDecimal in our application quite often for calculations. We 
 typically store our values as Java primitives (int, long, double, float) and 
 use the DocumentObjectBinder (annotation-based document object binding). 
 Unfortunately, the field/accessor type must exactly match the type given in 
 the Solr schema. We found that the following patch would let us use 
 BigDecimal as the type in our mapped POJO. This would make the mapping more 
 powerful without losing anything.
 --
 $ svn diff 
 Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java
 Index: 
 Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java
 ===================================================================
 --- Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java (revision 1626087)
 +++ Downloads/solr/solr/solrj/src/java/org/apache/solr/client/solrj/beans/DocumentObjectBinder.java (working copy)
 @@ -359,6 +359,9 @@
        if (v != null && type == ByteBuffer.class && v.getClass() == byte[].class) {
          v = ByteBuffer.wrap((byte[]) v);
        }
 +      if (type == java.math.BigDecimal.class) {
 +        v = new BigDecimal(v.toString());
 +      }
        try {
          if (field != null) {
            field.set(obj, v);
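
 As a hedged aside (this is not the submitted patch, just an illustration of 
 the conversion it needs): the value read from the Solr document may arrive 
 as Integer, Long, Float, Double, or String depending on the field type, so a 
 robust coercion has to branch on the runtime type. A sketch with a 
 hypothetical helper named {{toBigDecimal}}:

 ```java
 import java.math.BigDecimal;

 public class BigDecimalCoercion {
     // Coerce a value decoded from a Solr field into a BigDecimal without
     // losing precision: integral types go through valueOf(long), floating
     // types through valueOf(double), anything else through its String form.
     static BigDecimal toBigDecimal(Object v) {
         if (v == null) return null;
         if (v instanceof BigDecimal) return (BigDecimal) v;
         if (v instanceof Integer || v instanceof Long) {
             return BigDecimal.valueOf(((Number) v).longValue());
         }
         if (v instanceof Float || v instanceof Double) {
             return BigDecimal.valueOf(((Number) v).doubleValue());
         }
         return new BigDecimal(v.toString());
     }

     public static void main(String[] args) {
         System.out.println(toBigDecimal(42L));    // 42
         System.out.println(toBigDecimal(0.5d));   // 0.5
         System.out.println(toBigDecimal("3.14")); // 3.14
     }
 }
 ```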






[jira] [Commented] (SOLR-6539) SolrJ document object binding / BigDecimal

2014-11-15 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6539?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213673#comment-14213673
 ] 

Erik Hatcher commented on SOLR-6539:


Bert - could you also include a test case that demonstrates the problem and 
passes after your fix? 




[jira] [Commented] (SOLR-6684) Fix-up /export JSON

2014-11-15 Thread Erik Hatcher (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213674#comment-14213674
 ] 

Erik Hatcher commented on SOLR-6684:


[~joel.bernstein] - Is the work done for this?   If so, looks like it just 
needs to be merged to the 4.10 branch so it can make it to 4.10.3 (or remove 
that as a fix version).

 Fix-up /export JSON
 ---

 Key: SOLR-6684
 URL: https://issues.apache.org/jira/browse/SOLR-6684
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
 Fix For: 4.10.3, 5.0

 Attachments: SOLR-6684.patch


 This ticket does a couple of things:
 1) Fixes a bug in the /export JSON, where a comma is missed every 30,000 
 records.
 2) Changes the JSON format to match up with the normal JSON result set.
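
 The comma-every-30,000-records symptom is characteristic of per-batch 
 separator logic. A hedged, self-contained illustration of the correct 
 pattern (not the actual /export writer code; the batch size and names are 
 made up): the "is this the first element" flag must span batches, not reset 
 with each one.

 ```java
 import java.util.List;

 public class JsonArrayStreamer {
     // Hypothetical writer: emits docs in batches into a single JSON array.
     static String writeBatches(List<List<String>> batches) {
         StringBuilder out = new StringBuilder("[");
         boolean first = true;                 // global flag: survives batch boundaries
         for (List<String> batch : batches) {
             for (String doc : batch) {
                 if (!first) out.append(','); // comma before every doc but the first
                 out.append('"').append(doc).append('"');
                 first = false;
             }
         }
         return out.append(']').toString();
     }

     public static void main(String[] args) {
         // Two batches, one well-formed array: ["a","b","c"]
         System.out.println(writeBatches(List.of(List.of("a", "b"), List.of("c"))));
     }
 }
 ```

 Resetting the flag per batch would drop exactly one comma at each batch 
 boundary, i.e. every N records.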
  






[jira] [Resolved] (SOLR-3669) Create a ScriptSearchComponent

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3669?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher resolved SOLR-3669.

Resolution: Won't Fix

I'm not motivated or inclined to tackle this.  It's a nice idea, but SOLR-5005 
can cover this sort of need, or other query pipelines that exist outside of 
Solr.

 Create a ScriptSearchComponent
 --

 Key: SOLR-3669
 URL: https://issues.apache.org/jira/browse/SOLR-3669
 Project: Solr
  Issue Type: Improvement
  Components: SearchComponents - other
Reporter: Erik Hatcher
Assignee: Erik Hatcher
 Fix For: Trunk


 Building on the infrastructure created from SOLR-1725, a 
 ScriptSearchComponent would be a valuable addition to Solr flexibility.
 Performance impact will be a very important factor and need to be measured.






[jira] [Updated] (SOLR-4512) /browse GUI: Extra URL params should be sticky

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-4512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-4512:
---
Fix Version/s: Trunk
   5.0

 /browse GUI: Extra URL params should be sticky
 --

 Key: SOLR-4512
 URL: https://issues.apache.org/jira/browse/SOLR-4512
 Project: Solr
  Issue Type: Improvement
  Components: contrib - Velocity
Reporter: Jan Høydahl
Assignee: Erik Hatcher
 Fix For: 5.0, Trunk


 Sometimes you want to experiment with extra query params in Velocity 
 /browse, but if you modify the URL they are forgotten once you click 
 anything in the GUI.
 We need a way to make them sticky: either generate all the links based on 
 the current actual URL, or add a checkbox which reveals a new input field 
 where you can write all the extra params you want appended.






[jira] [Updated] (SOLR-3711) Velocity: Break or truncate long strings in facet output

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-3711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-3711:
---
Fix Version/s: 5.0

 Velocity: Break or truncate long strings in facet output
 

 Key: SOLR-3711
 URL: https://issues.apache.org/jira/browse/SOLR-3711
 Project: Solr
  Issue Type: Bug
  Components: Response Writers
Reporter: Jan Høydahl
Assignee: Erik Hatcher
  Labels: /browse
 Fix For: 5.0, Trunk

 Attachments: SOLR-3711.patch


 In the Solritas /browse GUI, if facets contain very long strings (as 
 content-type values tend to do), the too-long text currently runs over the 
 main column and is not pretty.
 Perhaps inserting a soft hyphen &shy; 
 (http://en.wikipedia.org/wiki/Soft_hyphen) at position N in very long terms 
 is a solution?






[jira] [Updated] (SOLR-2178) Use the Velocity UI as the default tutorial example

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2178?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-2178:
---
Fix Version/s: (was: 4.10)
   5.0

 Use the Velocity UI as the default tutorial example
 ---

 Key: SOLR-2178
 URL: https://issues.apache.org/jira/browse/SOLR-2178
 Project: Solr
  Issue Type: Improvement
Reporter: Grant Ingersoll
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk


 The /browse example in solr/example is much nicer to look at and work with, 
 we should convert the tutorial over to use it so as to present a nicer view 
 of Solr's capabilities.






[jira] [Updated] (SOLR-1723) VelocityResponseWriter view enhancement ideas

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-1723:
---
Fix Version/s: 5.0

 VelocityResponseWriter view enhancement ideas
 -

 Key: SOLR-1723
 URL: https://issues.apache.org/jira/browse/SOLR-1723
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Affects Versions: 1.4
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk


 Jotting down some ideas for improvement in the Solritas default view:
   * Look up uniqueKey field name (for use by highlighting, explain, and other 
 response extras)
   * Add highlighting support - don't show ... when whole field is 
 highlighted (fragsize=0), add hover to see stored field value that may be 
 returned also






[jira] [Updated] (SOLR-2035) Add Velocity's ResourceTool to allow for i18n string lookups

2014-11-15 Thread Erik Hatcher (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-2035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erik Hatcher updated SOLR-2035:
---
Fix Version/s: Trunk
   5.0

 Add Velocity's ResourceTool to allow for i18n string lookups
 

 Key: SOLR-2035
 URL: https://issues.apache.org/jira/browse/SOLR-2035
 Project: Solr
  Issue Type: Improvement
  Components: Response Writers
Reporter: Erik Hatcher
Assignee: Erik Hatcher
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-2035.patch


 Being able to look up string resources through Java's ResourceBundle facility 
 can be really useful in Velocity templates (through VelocityResponseWriter).  
 Velocity Tools includes a ResourceTool. 






[jira] [Created] (SOLR-6746) Add New Learning Resoure

2014-11-15 Thread Narayan Prusty (JIRA)
Narayan Prusty created SOLR-6746:


 Summary: Add New Learning Resoure
 Key: SOLR-6746
 URL: https://issues.apache.org/jira/browse/SOLR-6746
 Project: Solr
  Issue Type: New Feature
Reporter: Narayan Prusty


Hello Solr Team,

I have created a video course on Apache Solr, hosted on Udemy and QScutter. 
I have attached the links:

http://qscutter.com/course/learn-apache-solr-with-big-data-and-cloud-computing/

https://www.udemy.com/learn-apache-solr-with-big-data-and-cloud-computing/

It has a lot of positive reviews. I see a lot of resources on your official 
site, so could you please list this one too? It would help beginners get 
started with Solr easily.

Thanks.






[jira] [Updated] (SOLR-6547) CloudSolrServer query getqtime Exception

2014-11-15 Thread Anurag Sharma (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anurag Sharma updated SOLR-6547:

Attachment: SOLR-6547.patch

Fix using approach #2, without a unit test.

 CloudSolrServer query getqtime Exception
 

 Key: SOLR-6547
 URL: https://issues.apache.org/jira/browse/SOLR-6547
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Affects Versions: 4.10
Reporter: kevin
 Attachments: SOLR-6547.patch


 We are using CloudSolrServer to query, but SolrJ throws an exception:
 java.lang.ClassCastException: java.lang.Long cannot be cast to 
 java.lang.Integer
 at 
 org.apache.solr.client.solrj.response.SolrResponseBase.getQTime(SolrResponseBase.java:76)
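
 The attached patch is not inlined in this message, but a hedged sketch of 
 the kind of fix the exception calls for is to read QTime as a Number and 
 convert it, instead of casting to Integer. A plain Map stands in here for 
 SolrJ's response header structure (an assumption for illustration only):

 ```java
 import java.util.HashMap;
 import java.util.Map;

 public class QTimeRead {
     // Read QTime defensively: works whether the transport decoded the value
     // as an Integer or a Long, avoiding the ClassCastException above.
     static int getQTime(Map<String, Object> responseHeader) {
         Object q = responseHeader.get("QTime");
         return (q instanceof Number) ? ((Number) q).intValue() : -1;
     }

     public static void main(String[] args) {
         Map<String, Object> header = new HashMap<>();
         header.put("QTime", 7L);              // a Long, as CloudSolrServer saw it
         System.out.println(getQTime(header)); // 7
     }
 }
 ```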






[jira] [Commented] (SOLR-6684) Fix-up /export JSON

2014-11-15 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213704#comment-14213704
 ] 

Joel Bernstein commented on SOLR-6684:
--

Yep, this is done. I'll be back-porting to 4.10 shortly.

 Fix-up /export JSON
 ---

 Key: SOLR-6684
 URL: https://issues.apache.org/jira/browse/SOLR-6684
 Project: Solr
  Issue Type: Bug
Reporter: Joel Bernstein
 Fix For: 4.10.3, 5.0

 Attachments: SOLR-6684.patch


 This ticket does a couple of things. 
 1) Fixes a bug in the /export JSON, where a comma is missed every 30,000 
 records. 
 2) Changes the JSON format to match-up with the normal JSON result set.
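The missing-comma class of bug typically comes from emitting delimiters based on record count or batch position. A hedged sketch of the delimiter pattern that avoids it (this is not Solr's actual /export writer): drive the comma from a "first" flag and emit it before every record after the first, so a flush boundary (e.g. every 30,000 docs) cannot drop one.

```java
import java.util.List;

public class ExportJsonSketch {
  // Emit the comma *before* each record after the first, driven by a flag
  // rather than by record count, so batch boundaries cannot drop a delimiter.
  static String writeDocs(List<String> docs) {
    StringBuilder out = new StringBuilder("{\"docs\":[");
    boolean first = true;
    for (String doc : docs) {
      if (!first) out.append(',');
      first = false;
      out.append(doc);
    }
    return out.append("]}").toString();
  }

  public static void main(String[] args) {
    System.out.println(writeDocs(List.of("{\"id\":1}", "{\"id\":2}", "{\"id\":3}")));
  }
}
```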
  






[jira] [Commented] (SOLR-6726) Specifying different ports with the new bin/solr script fails to bring up collection

2014-11-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213728#comment-14213728
 ] 

Erick Erickson commented on SOLR-6726:
--

Poking around a little more, mostly hacking around since I don't really know 
what's going on (it's bash scripting and I'm sitting in an airport):

1) At first I wondered if it was a problem with just reading in the port, but 
if I hack bin/solr and change this line
declare -a CLOUD_PORTS=('8983' '7574' '8984' '7575');
to this line
declare -a CLOUD_PORTS=('8983' '7200' '7300' '7400');
the same thing occurs.
2) I'm using this command: ./solr start -c -e cloud -z localhost:2181
3) If I change as above and then put in the original ports, it's fine (i.e., I 
get prompted for node2 to go on 7200 and enter 7574 instead).
4) node2/logs/solr-7200-console.log consists of: Error: Exception thrown by 
the agent : java.lang.NullPointerException

Bonus question: why is there a log for 8983 in node2 (which is where the 7200 
core is supposed to be), and why does it contain the line:
org.apache.solr.core.CoresLocator – Looking for core definitions underneath 
/Users/Erick/apache/trunk_6703/solr/server/solr?

 Specifying different ports with the new bin/solr script fails to bring up 
 collection
 

 Key: SOLR-6726
 URL: https://issues.apache.org/jira/browse/SOLR-6726
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson

 As I recall, I tried to specify different ports when bringing up 4 instances 
 (7200, 7300, 7400) and the startup script failed. I'll confirm this and maybe 
 propose a fix if I can reproduce. Assigning it to me so I make sure it's 
 checked.
 I'm at Lucene Revolution this week, so if anyone wants to pick this up feel 
 free.






[jira] [Updated] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-11-15 Thread Erick Erickson (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Erick Erickson updated SOLR-6726:
-
Summary: Specifying different ports with the new bin/solr script fails to 
start solr instances  (was: Specifying different ports with the new bin/solr 
script fails to bring up collection)

 Specifying different ports with the new bin/solr script fails to start solr 
 instances
 -

 Key: SOLR-6726
 URL: https://issues.apache.org/jira/browse/SOLR-6726
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson

 As I recall, I tried to specify different ports when bringing up 4 instances 
 (7200, 7300, 7400) and the startup script failed. I'll confirm this and maybe 
 propose a fix if I can reproduce. Assigning it to me so I make sure it's 
 checked.
 I'm at Lucene Revolution this week, so if anyone wants to pick this up feel 
 free.






[jira] [Commented] (SOLR-6726) Specifying different ports with the new bin/solr script fails to start solr instances

2014-11-15 Thread Erick Erickson (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213741#comment-14213741
 ] 

Erick Erickson commented on SOLR-6726:
--

OK, apparently I'm getting a port conflict.

For port 8983, the startup needs:
8983 for Solr,
1083 for JMX, and
7983 for the stop port.

So specifying 7200, 7300, and 7400 is a problem, since they all map to 1000 
for JMX.

I guess all we really should be doing here is documenting the ports used by 
each Solr instance.
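A hypothetical reconstruction of the collision, assuming the script derives the stop port as the Solr port minus 1000 and the JMX port as 1000 plus the last two digits of the Solr port (the real formula lives in bin/solr; this only illustrates the numbers reported above):

```java
public class PortDerivation {
  public static void main(String[] args) {
    // Assumed (hypothetical) formulas: stop = port - 1000,
    // jmx = 1000 + last two digits of the Solr port.
    int[] ports = {8983, 7200, 7300, 7400};
    for (int port : ports) {
      int stopPort = port - 1000;
      int jmxPort = 1000 + port % 100;
      System.out.println("solr=" + port + " stop=" + stopPort + " jmx=" + jmxPort);
    }
    // 7200, 7300, and 7400 all derive jmx=1000, hence the conflict.
  }
}
```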

 Specifying different ports with the new bin/solr script fails to start solr 
 instances
 -

 Key: SOLR-6726
 URL: https://issues.apache.org/jira/browse/SOLR-6726
 Project: Solr
  Issue Type: Bug
Reporter: Erick Erickson
Assignee: Erick Erickson

 As I recall, I tried to specify different ports when bringing up 4 instances 
 (7200, 7300, 7400) and the startup script failed. I'll confirm this and maybe 
 propose a fix if I can reproduce. Assigning it to me so I make sure it's 
 checked.
 I'm at Lucene Revolution this week, so if anyone wants to pick this up feel 
 free.






[jira] [Commented] (LUCENE-6062) Index corruption from numeric DV updates

2014-11-15 Thread Shai Erera (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14213752#comment-14213752
 ] 

Shai Erera commented on LUCENE-6062:


To prove this is the problem, I added this to PerFieldDVF.FieldsReader ctor 
(line 256):

{code}
if (readState.segmentInfo.name.equals("_1") && fieldName.equals("ndv")
    && fi.getDocValuesGen() == 1 && readState.fieldInfos.size() > 1) {
  continue;
}
{code}

And the test passes. So e.g. if we tracked startDVGen for each field, we'd know 
in segment _1 that 'ndv' only appeared in gen=1, and therefore not pass it at 
all to baseProducer. But that causes a format change, which I hope to avoid if 
possible (especially as it doesn't solve the issue for existing indexes, though 
I think this is an extreme case for somebody to have run into).
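The startDVGen idea can be sketched as follows; the name startDVGen and the gen conventions here are hypothetical illustrations, not Lucene's API:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DvGenFilter {
  // Hypothetical sketch: record the first generation each field's doc values
  // appeared in, and hand the base producer only the fields written with the
  // segment itself (gen -1 by the convention used here).
  static List<String> fieldsForBaseProducer(Map<String, Long> startDVGen) {
    List<String> base = new ArrayList<>();
    for (Map.Entry<String, Long> e : startDVGen.entrySet()) {
      if (e.getValue() == -1L) base.add(e.getKey());
    }
    Collections.sort(base);
    return base;
  }

  public static void main(String[] args) {
    Map<String, Long> gens = new HashMap<>();
    gens.put("foo", -1L); // present since segment _1 was written
    gens.put("ndv", 1L);  // only appeared via an update in gen 1
    System.out.println(fieldsForBaseProducer(gens)); // prints [foo]
  }
}
```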

 Index corruption from numeric DV updates
 

 Key: LUCENE-6062
 URL: https://issues.apache.org/jira/browse/LUCENE-6062
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Michael McCandless
 Fix For: 4.10.3, 5.0, Trunk


 I hit this while working on LUCENE-6005: when cutting over 
 TestNumericDocValuesUpdates to the new Document2 API, I accidentally enabled 
 additional docValues in the test, and hit this:
 {noformat}
 There was 1 failure:
 1) 
 testUpdateSegmentWithNoDocValues(org.apache.lucene.index.TestNumericDocValuesUpdates)
 java.io.FileNotFoundException: _1_Asserting_0.dvm in 
 dir=RAMDirectory@259847e5 
 lockFactory=org.apache.lucene.store.SingleInstanceLockFactory@30981eab
   at __randomizedtesting.SeedInfo.seed([0:7C88A439A551C47D]:0)
   at 
 org.apache.lucene.store.MockDirectoryWrapper.openInput(MockDirectoryWrapper.java:645)
   at 
 org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
   at 
 org.apache.lucene.codecs.lucene50.Lucene50DocValuesProducer.init(Lucene50DocValuesProducer.java:130)
   at 
 org.apache.lucene.codecs.lucene50.Lucene50DocValuesFormat.fieldsProducer(Lucene50DocValuesFormat.java:182)
   at 
 org.apache.lucene.codecs.asserting.AssertingDocValuesFormat.fieldsProducer(AssertingDocValuesFormat.java:66)
   at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat$FieldsReader.init(PerFieldDocValuesFormat.java:267)
   at 
 org.apache.lucene.codecs.perfield.PerFieldDocValuesFormat.fieldsProducer(PerFieldDocValuesFormat.java:357)
   at 
 org.apache.lucene.index.SegmentDocValues.newDocValuesProducer(SegmentDocValues.java:51)
   at 
 org.apache.lucene.index.SegmentDocValues.getDocValuesProducer(SegmentDocValues.java:68)
   at 
 org.apache.lucene.index.SegmentDocValuesProducer.init(SegmentDocValuesProducer.java:63)
   at 
 org.apache.lucene.index.SegmentReader.initDocValuesProducer(SegmentReader.java:167)
   at org.apache.lucene.index.SegmentReader.init(SegmentReader.java:109)
   at 
 org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:58)
   at 
 org.apache.lucene.index.StandardDirectoryReader$1.doBody(StandardDirectoryReader.java:50)
   at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:556)
   at 
 org.apache.lucene.index.StandardDirectoryReader.open(StandardDirectoryReader.java:50)
   at org.apache.lucene.index.DirectoryReader.open(DirectoryReader.java:63)
   at 
 org.apache.lucene.index.TestNumericDocValuesUpdates.testUpdateSegmentWithNoDocValues(TestNumericDocValuesUpdates.java:769)
 {noformat}
 A one-line change to the existing test (on trunk) causes this corruption:
 {noformat}
 Index: 
 lucene/core/src/test/org/apache/lucene/index/TestNumericDocValuesUpdates.java
 ===
 --- 
 lucene/core/src/test/org/apache/lucene/index/TestNumericDocValuesUpdates.java 
 (revision 1639580)
 +++ 
 lucene/core/src/test/org/apache/lucene/index/TestNumericDocValuesUpdates.java 
 (working copy)
 @@ -750,6 +750,7 @@
  // second segment with no NDV
  doc = new Document();
   doc.add(new StringField("id", "doc1", Store.NO));
  +doc.add(new NumericDocValuesField("foo", 3));
   writer.addDocument(doc);
   doc = new Document();
   doc.add(new StringField("id", "doc2", Store.NO)); // document that isn't 
  updated
 {noformat}
 For some reason, the base doc values for the 2nd segment are not being 
 written, but clearly should have been (to hold field "foo")... I'm not sure why.






Re: svn commit: r1639914 - /lucene/dev/trunk/dev-tools/scripts/createPatch.py

2014-11-15 Thread Michael McCandless
Thanks Ryan!

Mike McCandless

http://blog.mikemccandless.com


On Sat, Nov 15, 2014 at 4:01 PM,  rjer...@apache.org wrote:
 Author: rjernst
 Date: Sat Nov 15 21:01:33 2014
 New Revision: 1639914

 URL: http://svn.apache.org/r1639914
 Log:
 Fix patch tool to better compare svn vs git checkouts, and not require 
 comparing roots of checkouts

 Modified:
 lucene/dev/trunk/dev-tools/scripts/createPatch.py

 Modified: lucene/dev/trunk/dev-tools/scripts/createPatch.py
 URL: 
 http://svn.apache.org/viewvc/lucene/dev/trunk/dev-tools/scripts/createPatch.py?rev=1639914&r1=1639913&r2=1639914&view=diff
 ==
 --- lucene/dev/trunk/dev-tools/scripts/createPatch.py (original)
 +++ lucene/dev/trunk/dev-tools/scripts/createPatch.py Sat Nov 15 21:01:33 2014
 @@ -30,10 +30,11 @@ import os
  import subprocess
  import sys

 -def make_filter_func(src_dir):
 -  if os.path.exists(os.path.join(src_dir, '.git')):
 +def make_filter_func(src_root, src_dir):
 +  git_root = os.path.join(src_root, '.git')
 +  if os.path.exists(git_root):
  def git_filter(filename):
 -  rc = subprocess.call('git --git-dir=%s check-ignore %s' % (src_dir, 
 filename), shell=True)
 +  rc = subprocess.call('git --git-dir=%s check-ignore %s' % (git_root, 
 filename), shell=True, stdout=subprocess.DEVNULL)
return rc == 0
  return git_filter

 @@ -89,7 +90,7 @@ def run_diff(from_dir, to_dir, skip_whit
  flags += 'bBw'

args = ['diff', flags]
 -  for ignore in ('.svn', '.git', 'build', '.caches'):
 +  for ignore in ('.svn', '.git', 'build', '.caches', '.idea', 'idea-build'):
  args.append('-x')
  args.append(ignore)
args.append(from_dir)
 @@ -97,6 +98,13 @@ def run_diff(from_dir, to_dir, skip_whit

return subprocess.Popen(args, shell=False, stdout=subprocess.PIPE)

 +def find_root(path):
 +  relative = []
 +  while not os.path.exists(os.path.join(path, 'lucene', 'CHANGES.txt')):
 +path, base = os.path.split(path)
 +relative.prepend(base)
 +  return path, '' if not relative else os.path.join(relative)
 +
  def parse_config():
parser = ArgumentParser(description=__doc__, 
 formatter_class=RawTextHelpFormatter)
parser.add_argument('--skip-whitespace', action='store_true', 
 default=False,
 @@ -107,20 +115,24 @@ def parse_config():

if not os.path.isdir(c.from_dir):
  parser.error('\'from\' path %s is not a valid directory' % c.from_dir)
 -  if not os.path.exists(os.path.join(c.from_dir, 'lucene', 'CHANGES.txt')):
 -parser.error('\'from\' path %s is not a valid lucene/solr checkout' % 
 c.from_dir)
 +  (c.from_root, from_relative) = find_root(c.from_dir)
 +  if c.from_root is None:
 +parser.error('\'from\' path %s is not relative to a lucene/solr 
 checkout' % c.from_dir)
if not os.path.isdir(c.to_dir):
  parser.error('\'to\' path %s is not a valid directory' % c.to_dir)
 -  if not os.path.exists(os.path.join(c.to_dir, 'lucene', 'CHANGES.txt')):
 -parser.error('\'to\' path %s is not a valid lucene/solr checkout' % 
 c.to_dir)
 -
 +  (c.to_root, to_relative) = find_root(c.to_dir)
 +  if c.to_root is None:
 +parser.error('\'to\' path %s is not relative to a lucene/solr checkout' 
 % c.to_dir)
 +  if from_relative != to_relative:
 +parser.error('\'from\' and \'to\' path are not equivalent relative paths 
 within their'
 + ' checkouts: %s != %s' % (from_relative, to_relative))
return c

  def main():
c = parse_config()

p = run_diff(c.from_dir, c.to_dir, c.skip_whitespace)
 -  should_filter = make_filter_func(c.from_dir)
 +  should_filter = make_filter_func(c.from_root, c.from_dir)
print_filtered_output(p.stdout, should_filter)

  if __name__ == '__main__':






[JENKINS-MAVEN] Lucene-Solr-Maven-trunk #1261: POMs out of sync

2014-11-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Maven-trunk/1261/

2 tests failed.
FAILED:  org.apache.solr.hadoop.MorphlineBasicMiniMRTest.testPathParts

Error Message:
Test abandoned because suite timeout was reached.

Stack Trace:
java.lang.Exception: Test abandoned because suite timeout was reached.
at __randomizedtesting.SeedInfo.seed([D0796000B4B1D28B]:0)


FAILED:  
org.apache.solr.hadoop.MorphlineBasicMiniMRTest.org.apache.solr.hadoop.MorphlineBasicMiniMRTest

Error Message:
Suite timeout exceeded (>= 720 msec).

Stack Trace:
java.lang.Exception: Suite timeout exceeded (>= 720 msec).
at __randomizedtesting.SeedInfo.seed([D0796000B4B1D28B]:0)




Build Log:
[...truncated 53089 lines...]
BUILD FAILED
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:548:
 The following error occurred while executing this line:
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-Maven-trunk/build.xml:200:
 The following error occurred while executing this line:
: Java returned: 1

Total time: 406 minutes 37 seconds
Build step 'Invoke Ant' marked build as failure
Recording test results
Email was triggered for: Failure
Sending email for trigger: Failure




[jira] [Updated] (SOLR-6732) Back-compat break for LIR state in 4.10.2

2014-11-15 Thread Timothy Potter (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6732?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Timothy Potter updated SOLR-6732:
-
Attachment: SOLR-6732.patch

Here's an updated patch that should allow for hot, rolling upgrades - handling 
the String state or JSON map correctly. I've added a unit test that checks for 
back-compat support.

The other concern is a node running old code that expects a String state and 
not the JSON map. I think that will not cause any issues since it will just 
treat the map as a String; a recovering replica will just delete the value once 
it's active. 

However, before I commit this I'll do a rolling upgrade to ensure no issues 
when going from 4.8.x to 4.10.3.
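A minimal sketch of the back-compat read path being described, assuming the znode holds either the legacy plain string or the newer JSON map. The helper name and the naive parsing are illustrative; the real patch would go through Solr's ZooKeeper and JSON utilities:

```java
import java.nio.charset.StandardCharsets;

public class LirStateCompat {
  // Hypothetical helper: read the replica state from LIR znode bytes,
  // accepting either the pre-4.10.2 plain string ("down") or the newer
  // JSON map form. The substring scan stands in for a real JSON parser.
  static String readState(byte[] data) {
    String raw = new String(data, StandardCharsets.UTF_8).trim();
    if (raw.startsWith("{")) {
      int key = raw.indexOf("\"state\"");
      int colon = raw.indexOf(':', key);
      int q1 = raw.indexOf('"', colon + 1);
      int q2 = raw.indexOf('"', q1 + 1);
      return raw.substring(q1 + 1, q2); // value of the "state" key
    }
    return raw; // legacy plain-string form
  }

  public static void main(String[] args) {
    System.out.println(readState("down".getBytes(StandardCharsets.UTF_8)));
    System.out.println(readState(
        "{\"state\":\"down\",\"createdByNodeName\":\"node1\"}"
            .getBytes(StandardCharsets.UTF_8)));
  }
}
```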

 Back-compat break for LIR state in 4.10.2
 -

 Key: SOLR-6732
 URL: https://issues.apache.org/jira/browse/SOLR-6732
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.10.2
Reporter: Shalin Shekhar Mangar
Assignee: Timothy Potter
Priority: Blocker
 Fix For: 4.10.3

 Attachments: SOLR-6732.patch, SOLR-6732.patch


 We changed the LIR state to be kept as a map, but it is not back-compatible. 
 The problem is that we're checking for map or string after parsing JSON, but 
 if the key holds "down" as a plain string then JSON parsing will fail.
 This was introduced in SOLR-6511. This error will prevent anyone from 
 upgrading to 4.10.2.
 http://mail-archives.apache.org/mod_mbox/lucene-solr-user/201411.mbox/%3c54636ed2.8040...@cytainment.de%3E






[JENKINS] Lucene-Solr-SmokeRelease-5.x - Build # 211 - Still Failing

2014-11-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-SmokeRelease-5.x/211/

No tests ran.

Build Log:
[...truncated 51749 lines...]
prepare-release-no-sign:
[mkdir] Created dir: 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist
 [copy] Copying 446 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/lucene
 [copy] Copying 254 files to 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/solr
   [smoker] Java 1.7 JAVA_HOME=/home/jenkins/tools/java/latest1.7
   [smoker] NOTE: output encoding is US-ASCII
   [smoker] 
   [smoker] Load release URL 
file:/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/dist/...
   [smoker] 
   [smoker] Test Lucene...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.02 sec (4.6 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download lucene-5.0.0-src.tgz...
   [smoker] 27.8 MB in 0.04 sec (708.5 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.tgz...
   [smoker] 63.7 MB in 0.09 sec (683.2 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download lucene-5.0.0.zip...
   [smoker] 73.2 MB in 0.19 sec (384.6 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack lucene-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5569 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0.zip...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] test demo with 1.7...
   [smoker]   got 5569 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] check Lucene's javadoc JAR
   [smoker]   unpack lucene-5.0.0-src.tgz...
   [smoker] make sure no JARs/WARs in src dist...
   [smoker] run ant validate
   [smoker] run tests w/ Java 7 and testArgs='-Dtests.jettyConnector=Socket 
-Dtests.multiplier=1 -Dtests.slow=false'...
   [smoker] test demo with 1.7...
   [smoker]   got 206 hits for query lucene
   [smoker] checkindex with 1.7...
   [smoker] generate javadocs w/ Java 7...
   [smoker] 
   [smoker] Crawl/parse...
   [smoker] 
   [smoker] Verify...
   [smoker]   confirm all releases have coverage in TestBackwardsCompatibility
   [smoker] find all past Lucene releases...
   [smoker] run TestBackwardsCompatibility..
   [smoker] success!
   [smoker] 
   [smoker] Test Solr...
   [smoker]   test basics...
   [smoker]   get KEYS
   [smoker] 0.1 MB in 0.00 sec (100.3 MB/sec)
   [smoker]   check changes HTML...
   [smoker]   download solr-5.0.0-src.tgz...
   [smoker] 34.1 MB in 0.11 sec (312.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.tgz...
   [smoker] 146.4 MB in 0.36 sec (409.9 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   download solr-5.0.0.zip...
   [smoker] 152.5 MB in 0.34 sec (452.7 MB/sec)
   [smoker] verify md5/sha1 digests
   [smoker]   unpack solr-5.0.0.tgz...
   [smoker] verify JAR metadata/identity/no javax.* or java.* classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/activation-1.1.1.jar:
 it has javax.* classes
   [smoker]   **WARNING**: skipping check of 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0/contrib/dataimporthandler-extras/lib/javax.mail-1.5.1.jar:
 it has javax.* classes
   [smoker] verify WAR metadata/contained JAR identity/no javax.* or java.* 
classes...
   [smoker] unpack lucene-5.0.0.tgz...
   [smoker] copying unpacked distribution for Java 7 ...
   [smoker] test solr example w/ Java 7...
   [smoker]   start Solr instance 
(log=/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log)...
   [smoker] No process found for Solr node running on port 8983
   [smoker]   starting Solr on port 8983 from 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7
   [smoker] Startup failed; see log 
/usr/home/jenkins/jenkins-slave/workspace/Lucene-Solr-SmokeRelease-5.x/lucene/build/smokeTestRelease/tmp/unpack/solr-5.0.0-java7/solr-example.log
   [smoker] 
   [smoker] Starting Solr on port 8983 from 

[JENKINS] Lucene-Solr-NightlyTests-trunk - Build # 685 - Still Failing

2014-11-15 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-trunk/685/

1 tests failed.
REGRESSION:  
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testDistribSearch

Error Message:
java.lang.NullPointerException 

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: 
java.lang.NullPointerException

at 
org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:569)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:215)
at 
org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:211)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.testErrorHandling(CollectionsAPIDistributedZkTest.java:583)
at 
org.apache.solr.cloud.CollectionsAPIDistributedZkTest.doTest(CollectionsAPIDistributedZkTest.java:205)
at 
org.apache.solr.BaseDistributedSearchTestCase.testDistribSearch(BaseDistributedSearchTestCase.java:869)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1618)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:827)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:863)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:877)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:738)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:772)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:783)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesInvariantRule$1.evaluate(SystemPropertiesInvariantRule.java:55)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at