[jira] [Updated] (SOLR-6736) A collections-like request handler to manage solr configurations on zookeeper

2015-03-30 Thread Varun Rajput (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6736?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Rajput updated SOLR-6736:
---
Attachment: zkconfighandler.zip
SOLR-6736.patch

Hi [~noble.paul], I've written up tests for the most part, but I am facing an 
issue. Attached is the patch and a zip whose contents should go in the 
zkconfighandler folder under test-files. Maybe you can help me out with this:

In the last part of the test, I am able to verify the authenticity of the 
signed zip but cannot verify the upload of the files within it. With some 
debugging, I noticed I couldn't get inside the zip-entry loop while reading 
the content stream (sketched below).
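
For reference, the loop I'm describing looks roughly like this (a minimal 
sketch, not the actual patch code; 'stream' stands for the request's content 
stream):

{code}
// java.util.zip.ZipInputStream / java.util.zip.ZipEntry; with the test
// setup, execution never enters the body of the outer while loop.
ZipInputStream zis = new ZipInputStream(stream);
ZipEntry entry;
byte[] buf = new byte[4096];
while ((entry = zis.getNextEntry()) != null) {
  for (int len; (len = zis.read(buf)) > 0; ) {
    // write buf[0..len) to ZooKeeper under entry.getName()
  }
  zis.closeEntry();
}
{code}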

Also, I had to use the name param for the config, because when I try putting 
it in the path, the Solr dispatch filter throws a 'not found' exception.

Any help will be great, thanks!

 A collections-like request handler to manage solr configurations on zookeeper
 -

 Key: SOLR-6736
 URL: https://issues.apache.org/jira/browse/SOLR-6736
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Varun Rajput
Assignee: Anshum Gupta
Priority: Minor
 Fix For: 5.0, Trunk

 Attachments: SOLR-6736.patch, SOLR-6736.patch, SOLR-6736.patch, 
 SOLR-6736.patch, SOLR-6736.patch, zkconfighandler.zip


 Managing Solr configuration files on ZooKeeper becomes cumbersome while using 
 Solr in cloud mode, especially while trying out changes in the 
 configurations. 
 It would be great to have a request handler that provides an API to manage 
 the configurations, similar to the collections handler, allowing actions like 
 uploading new configurations, linking them to a collection, deleting 
 configurations, etc.
 example: 
 {code}
 # use the following command to upload a new configset called mynewconf. This
 # will fail if there is already a conf called 'mynewconf'. The file could be
 # a jar, zip or tar file which contains all the files for this conf.
 curl -X POST -H 'Content-Type: application/octet-stream' --data-binary 
 @testconf.zip 
 http://localhost:8983/solr/admin/configs/mynewconf?sig=the-signature
 {code}
 A GET to http://localhost:8983/solr/admin/configs will give a list of the 
 configs available.
 A GET to http://localhost:8983/solr/admin/configs/mynewconf would give the 
 list of files in mynewconf.
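 A minimal client for those two GET endpoints could look like this (a sketch 
 only; the endpoints are the ones proposed above):
 {code}
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ListConfigs {
  public static void main(String[] args) throws Exception {
    String[] endpoints = {
      "http://localhost:8983/solr/admin/configs",          // list all configsets
      "http://localhost:8983/solr/admin/configs/mynewconf" // list files in one configset
    };
    for (String endpoint : endpoints) {
      try (BufferedReader in = new BufferedReader(new InputStreamReader(
          new URL(endpoint).openStream(), StandardCharsets.UTF_8))) {
        for (String line; (line = in.readLine()) != null; ) {
          System.out.println(line);
        }
      }
    }
  }
}
 {code}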






[jira] [Updated] (LUCENE-6352) Add global ordinal based query time join

2015-03-30 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-6352:
--
Attachment: LUCENE-6352.patch

Attached a new version:
* added hashCode/equals/extractTerms methods to the query impls (the usual 
shape is sketched below)
* added an optimization for the case where an index has only one segment
* updated to the latest two-phase iterator changes.
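
For the first point, a rough sketch of the usual shape of those methods (a 
sketch only, not the patch itself; the class and field names here are 
illustrative):

{code}
@Override
public boolean equals(Object obj) {
  if (this == obj) return true;
  if (obj == null || getClass() != obj.getClass()) return false;
  GlobalOrdinalsQuery other = (GlobalOrdinalsQuery) obj;
  return fromQuery.equals(other.fromQuery) && joinField.equals(other.joinField);
}

@Override
public int hashCode() {
  int h = getClass().hashCode();
  h = 31 * h + fromQuery.hashCode();
  h = 31 * h + joinField.hashCode();
  return h;
}
{code}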

 Add global ordinal based query time join 
 -

 Key: LUCENE-6352
 URL: https://issues.apache.org/jira/browse/LUCENE-6352
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Martijn van Groningen
 Attachments: LUCENE-6352.patch, LUCENE-6352.patch, LUCENE-6352.patch


 Global ordinal based query time join as an alternative to the current query 
 time join. The implementation is faster for subsequent joins between reopens, 
 but requires an OrdinalMap to be built.
 This join has certain restrictions and requirements:
 * A document can only refer to one other document (but can be referred to by 
 one or more documents).
 * A type field must exist on all documents and each document must be 
 categorized to a type. This is to distinguish between the from and to sides.
 * There must be a single sorted doc values field used by both the from and 
 to documents. By encoding the join into a single doc values field it is 
 trivial to build an ordinal map from it.






[jira] [Commented] (LUCENE-6354) Add minChildren and maxChildren options to ToParentBlockJoinQuery

2015-03-30 Thread Martijn van Groningen (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386236#comment-14386236
 ] 

Martijn van Groningen commented on LUCENE-6354:
---

Oops, I added the last patch to the wrong issue...

 Add minChildren and maxChildren options to ToParentBlockJoinQuery
 -

 Key: LUCENE-6354
 URL: https://issues.apache.org/jira/browse/LUCENE-6354
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Martijn van Groningen
 Attachments: LUCENE-6354.patch, LUCENE-6354.patch, LUCENE-6354.patch, 
 LUCENE-6354.patch


 This effectively allows ignoring parent documents with too few matching 
 child documents via the minChildren option, or too many matching child 
 documents via the maxChildren option. 






RE: java 9 compiler issues with ByteBuffer covariant overrides

2015-03-30 Thread Uwe Schindler
Hi Dawid, hi Robert,

Exactly! We had this issue also on earlier versions of the JDK, and the OpenJDK 
people don't want to deal with those issues. One occurrence was in Java 8 with 
isAnnotationPresent() as a default method, but that was fixed because it also 
affected code compiled and executed with Java 8 (there the compiler, not the 
runtime, faced the same issue). Covariant return types, which are the problem 
here, are a similar issue we also had in the Lucene 2-3 backwards-compatibility 
game: we initially did not change the return types of all clone() methods to be 
covariant, because it would have caused issues like this for users of Lucene 
who do in-place upgrades (this is a long time back). Java is now facing the 
same problem. I really like those covariant returns (they allow nice builder 
method chaining), but it is impossible for the compiler to take care of this 
unless you compile against an older rt.jar.
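
To make the failure mode concrete, here is a minimal example (mine, not from 
the thread): compile it with javac 9 using -source/-target 1.8 and run it on a 
Java 8 JVM:

import java.nio.ByteBuffer;

public class CovariantDemo {
  public static void main(String[] args) {
    ByteBuffer buf = ByteBuffer.allocate(16);
    // javac 9 resolves this call against the new covariant override and
    // emits ByteBuffer.position(I)Ljava/nio/ByteBuffer; on Java 8 only
    // Buffer.position(I)Ljava/nio/Buffer; exists, so the Java 8 JVM throws
    // NoSuchMethodError, exactly as in Robert's stack trace.
    buf.position(4);
  }
}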

This is the reason why, since Java 7, the following message is printed when you 
compile with an older target but a newer compiler:
[javac] warning: [options] bootstrap class path not set in conjunction with 
-source 1.x
(and this warning message is the reason why the OpenJDK people say: it's 
broken, you should compile with the version you want to release for; I am not 
sure why this message is no longer printed in Lucene, but this could be because 
of different warning settings).

This makes one statement from the Lucene Release TODO very important, and I am 
glad that it's tested by the smoker (by looking at the META-INF manifest of our 
JAR files): "Build the code and javadocs, and run the unit tests: ant clean 
javadocs test. Make sure that you are actually using the minimum compiler 
version supported for the release. For example, 4.x releases are on Java 6, so 
make sure that you use Java 6 for the release workflow."

Uwe

-
Uwe Schindler
H.-H.-Meier-Allee 63, D-28213 Bremen
http://www.thetaphi.de
eMail: u...@thetaphi.de


 -Original Message-
 From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf
 Of Dawid Weiss
 Sent: Monday, March 30, 2015 8:47 AM
 To: dev@lucene.apache.org
 Subject: Re: java 9 compiler issues with ByteBuffer covariant overrides
 
 Hi Robert,
 
 This was discussed on core-libs at some point, but I can't find the thread and
 the outcome of that conversation right now (damn it).
 
 I believe the answer was that these covariant changes are forward-
 compatible so that existing code works with 1.9 and if you're compiling with
 backward compatibility in mind you should compile against the target JDK of
 your choice.
 
 Dawid
 
 On Mon, Mar 30, 2015 at 4:19 AM, Robert Muir rcm...@gmail.com wrote:
  Hi,
 
  If I compile lucene with a java 9 compiler (using -source 1.8 -target
  1.8 like our build does), then the resulting jar file cannot actually
  be used with a java 8 JVM.
 
  The reason is, in java 9 ByteBuffer.class got some new covariant overrides:
  e.g. ByteBuffer.java has position(int) that returns ByteBuffer, but
  this does not exist on java 8 (the position method is only in
  Buffer.java returning Buffer).
 
  This leads to an exception like this (I am sure there are other
  problems, it's just the first one you will hit):
 
  Exception in thread "main" java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
  at org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl.<init>(ByteBufferIndexInput.java:414)
  at org.apache.lucene.store.ByteBufferIndexInput.newInstance(ByteBufferIndexInput.java:55)
  at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
  at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
  at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:268)
  at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:359)
  at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:356)
  at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:574)
  at org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:526)
  at org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:361)
  at org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:228)
 
  Is this expected behavior? Do we need to fix our build to also require
  a java 8 JDK on the developers machine, and set bootclasspath, or is
  -source/-target 1.8 supposed to just work without it like it did
  before?
 



[jira] [Updated] (LUCENE-6354) Add minChildren and maxChildren options to ToParentBlockJoinQuery

2015-03-30 Thread Martijn van Groningen (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6354?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Martijn van Groningen updated LUCENE-6354:
--
Attachment: (was: LUCENE-6352.patch)

 Add minChildren and maxChildren options to ToParentBlockJoinQuery
 -

 Key: LUCENE-6354
 URL: https://issues.apache.org/jira/browse/LUCENE-6354
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Martijn van Groningen
 Attachments: LUCENE-6354.patch, LUCENE-6354.patch, LUCENE-6354.patch, 
 LUCENE-6354.patch


 This effectively allows ignoring parent documents with too few matching 
 child documents via the minChildren option, or too many matching child 
 documents via the maxChildren option. 






Re: java 9 compiler issues with ByteBuffer covariant overrides

2015-03-30 Thread Dawid Weiss
Hi Robert,

This was discussed on core-libs at some point, but I can't find the
thread and the outcome of that conversation right now (damn it).

I believe the answer was that these covariant changes are
forward-compatible so that existing code works with 1.9 and if you're
compiling with backward compatibility in mind you should compile
against the target JDK of your choice.

Dawid

On Mon, Mar 30, 2015 at 4:19 AM, Robert Muir rcm...@gmail.com wrote:
 Hi,

 If I compile lucene with a java 9 compiler (using -source 1.8 -target
 1.8 like our build does), then the resulting jar file cannot actually
 be used with a java 8 JVM.

 The reason is, in java 9 ByteBuffer.class got some new covariant overrides:
 e.g. ByteBuffer.java has position(int) that returns ByteBuffer, but
 this does not exist on java 8 (the position method is only in
 Buffer.java returning Buffer).

 This leads to an exception like this (I am sure there are other
 problems, it's just the first one you will hit):

 Exception in thread "main" java.lang.NoSuchMethodError:
 java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
 at 
 org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl.<init>(ByteBufferIndexInput.java:414)
 at 
 org.apache.lucene.store.ByteBufferIndexInput.newInstance(ByteBufferIndexInput.java:55)
 at org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:216)
 at org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
 at org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:268)
 at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:359)
 at org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:356)
 at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:574)
 at 
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfos.java:526)
 at 
 org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.java:361)
 at 
 org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.java:228)

 Is this expected behavior? Do we need to fix our build to also require
 a java 8 JDK on the developers machine, and set bootclasspath, or is
 -source/-target 1.8 supposed to just work without it like it did
 before?




[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans

2015-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386317#comment-14386317
 ] 

Michael McCandless commented on LUCENE-6308:


If there are no objections, I'll commit this in a day or two ... thanks 
[~paul.elsc...@xs4all.nl]!

 Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
 -

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds 
 two-phase doc-id iteration.






[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans

2015-03-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386389#comment-14386389
 ] 

Adrien Grand commented on LUCENE-6308:
--

+1 to commit

 Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
 -

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds 
 two-phase doc-id iteration.






[jira] [Commented] (LUCENE-6303) CachingWrapperFilter -> CachingWrapperQuery

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386390#comment-14386390
 ] 

ASF subversion and git services commented on LUCENE-6303:
-

Commit 1670007 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1670007 ]

LUCENE-6303: Disable auto-caching in IndexSearcher.

 CachingWrapperFilter -> CachingWrapperQuery
 ---

 Key: LUCENE-6303
 URL: https://issues.apache.org/jira/browse/LUCENE-6303
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6303-disable_auto-caching.patch, LUCENE-6303.patch


 As part of the filter -> query migration, we should migrate the caching 
 wrappers (including the filter cache).
 I think the behaviour should be to delegate to the wrapped query when scores 
 are needed and cache otherwise, like CachingWrapperFilter does today.
 Also the cache should ignore query boosts so that field:value^2 and 
 field:value^3 are considered equal if scores are not needed.






[JENKINS] Lucene-Solr-5.x-MacOSX (64bit/jdk1.7.0) - Build # 2069 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-MacOSX/2069/
Java: 64bit/jdk1.7.0 -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC

2 tests failed.
FAILED:  org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test

Error Message:
expected:<0> but was:<1>

Stack Trace:
java.lang.AssertionError: expected:<0> but was:<1>
at 
__randomizedtesting.SeedInfo.seed([B858371628878423:300C08CC867BE9DB]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.failNotEquals(Assert.java:647)
at org.junit.Assert.assertEquals(Assert.java:128)
at org.junit.Assert.assertEquals(Assert.java:472)
at org.junit.Assert.assertEquals(Assert.java:456)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.testDeleteByIdCompositeRouterWithRouterField(FullSolrCloudDistribCmdsTest.java:403)
at 
org.apache.solr.cloud.FullSolrCloudDistribCmdsTest.test(FullSolrCloudDistribCmdsTest.java:146)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 

[jira] [Created] (SOLR-7324) No need to call isIndexStale if full copy is already needed

2015-03-30 Thread Stephan Lagraulet (JIRA)
Stephan Lagraulet created SOLR-7324:
---

 Summary: No need to call isIndexStale if full copy is already 
needed
 Key: SOLR-7324
 URL: https://issues.apache.org/jira/browse/SOLR-7324
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 4.10.4
Reporter: Stephan Lagraulet


During replication, we had a message "File _3ww7_Lucene41_0.tim expected to be 
2027667 while it is 1861076" when in fact there was already a match on 
commit.getGeneration() = latestGeneration.

So this extra operation is not needed (see the sketch below).
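
A hedged sketch of the proposed short-circuit (method and variable names are 
assumed for illustration, not taken from the actual Solr source):

{code}
// Skip the per-file staleness check when a full copy has already been
// decided (e.g. because commit.getGeneration() matched latestGeneration).
if (!isFullCopyNeeded && isIndexStale(indexDir)) {
  // only now pay for the stale-file comparison before an incremental copy
}
{code}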







[jira] [Comment Edited] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-30 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386432#comment-14386432
 ] 

Per Steffensen edited comment on SOLR-6816 at 3/30/15 9:22 AM:
---

bq. I think you mis-understood my point. I wasn't talking about retrying 
documents in the same UpdateRequest. If a Map/Reduce task fails, the HDFS block 
is retried entirely, meaning a Hadoop-based indexing job may send the same docs 
that have already been added, so using overwrite=false is dangerous when doing 
this type of bulk indexing. The solution proposed in SOLR-3382 would be great 
to have as well though.

Well, we might be mis-understanding each other. I am not talking about 
retrying documents in the same UpdateRequest either. What we have:
Our indexing client (something not in Solr - think of it as the Map/Reduce job) 
decides to do 1000 update-doc-commands U1, U2, ..., U1000 (add-doc and 
delete-doc commands), by sending one bulk job containing all of those to 
Solr-node S1. S1 handles some of the Us itself and forwards the other Us to the 
other Solr-nodes, depending on routing. For simplicity let's say that we have 
three Solr-nodes S1, S2 and S3, and that S1 handles U1-U333 itself, forwards 
U334-U666 to S2 and U667-U1000 to S3. Now let's say that U100, U200, U400, 
U500, U700 and U800 fail (two on each Solr-node), and the rest succeed. S1 gets 
that information back from S2 and S3 (including reasons for each U that 
failed), and is able to send a response to our indexing client saying that all 
was a success, except that U100, U200, U400, U500, U700 and U800 failed, and 
why they failed. Some might fail due to DocumentAlreadyExists (if U was about 
creating a new document, assuming that it does not already exist), others might 
fail due to VersionConflict (if U was about updating an existing document and 
includes its last known (to the client) version, but the document at the server 
has a higher version number), and others again might fail due to 
DocumentDoesNotExist (if U was about updating an existing document, but that 
document does not exist (any longer) at the server). Our indexing client takes 
note of that combined response from S1, performs the appropriate actions (e.g. 
version lookups) and sends a new request to the Solr-cluster now only including 
U100', U200', U400', U500', U700' and U800' (sketched below).
We have done it like that for a long time, using our solution to SOLR-3382 (and 
our solution to SOLR-3178). I would expect a Map/Reduce job could do the same, 
playing the role of the indexing client - essentially only resending (maybe by 
issuing a new Map/Reduce job from the reduce phase of the first Map/Reduce job) 
the (modified versions of) update-commands that failed the first time.


was (Author: steff1193):
bq. I think you mis-understood my point. I wasn't talking about retrying 
documents in the same UpdateRequest. If a Map/Reduce task fails, the HDFS block 
is retried entirely, meaning a Hadoop-based indexing job may send the same docs 
that have already been added so using overwite=false is dangerous when doing 
this type of bulk indexing. The solution proposed in SOLR-3382 would be great 
to have as well though.

Well, we might be mis-understanding each other. Im am not talking about 
retrying documents in the same UpdateRequest either. What we have:
Our indexing client (something not in Solr - think of it as the Map/Reduce job) 
decides to do 1000 update-doc-commands U1, U2, ... , U1000 (add-doc and 
delete-doc commands), by sending one bulk-job containing all of those to 
Solr-node S1. S1 handles some of the Us itself and forwards other Us to the 
other Solr-nodes - depending or routing. For simplicity lets say that we have 
three Solr-nodes S1, S2 and S3 and that S1 handles U1-U333 itself, forwards 
U334-U666 to S2 and U667-U1000 to S3. Now lets say that U100, U200, U400, U500, 
U700 and U800 fails (two on each Solr-node), and the rest succeeds. S1 gets 
that information back from S2 and S3 (including reasons for each U that 
failed), and is able to send a response to our indexing client saying that all 
was a success, except that U100, U200, U400, U500, U700 and U800 failed, and 
why they failed. Some might fail due to DocumentDoNotExist (if U was about 
creating a new document, assuming that it does not already exist), others might 
fail due to VersionConflict (if U was about updating an existing document and 
includes its last known (to the client) version, but the document at server has 
a higher version-number), other might fail due to DocumentDoesNotExist (if U 
was about updating an existing document, but that document does not exist 
(anylonger) at server). Our indexing client takes note of that combined 
response from S1, perform the appropriate actions (e.g. version-lookups) and 
sends a new request to the Solr-cluster now only including U100', U200', U400', 
U500', U700' and U800'.

[jira] [Commented] (LUCENE-6352) Add global ordinal based query time join

2015-03-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386461#comment-14386461
 ] 

Adrien Grand commented on LUCENE-6352:
--

Thanks Martijn! I had a look at the patch; it looks very clean, I like it.

{code}
Query rewrittenFromQuery = fromQuery.rewrite(indexReader); (JoinUtil.java)
{code}

I think you should rather call searcher.rewrite(fromQuery) here, which will 
take care of rewriting until rewrite returns 'this'.

{code}
final float[][] blocks = new float[Integer.MAX_VALUE / arraySize][];
{code}

Instead of allocating based on Integer.MAX_VALUE, maybe it should use the 
number of unique values, i.e. '(int) (((long) valueCount + arraySize - 1) / 
arraySize)'? Both suggestions are sketched below.
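
Concretely, the two suggestions above would look something like this (my 
sketch of the suggested changes, with names taken from the excerpts):

{code}
// rewrite via the searcher so it loops until rewrite() returns 'this'
Query rewrittenFromQuery = searcher.rewrite(fromQuery);

// size the outer array from the number of unique join values instead of
// Integer.MAX_VALUE
final int numBlocks = (int) (((long) valueCount + arraySize - 1) / arraySize);
final float[][] blocks = new float[numBlocks][];
{code}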

{code}
return new ComplexExplanation(true, score, "Score based on join value " + 
joinValue.utf8ToString());
{code}

I don't think it is safe to convert to a string, as we have no idea whether the 
value represents a UTF-8 string?

In BaseGlobalOrdinalScorer, you are caching the current doc ID, maybe we should 
not? When I worked on approximations, caching the current doc ID proved to be 
quite error-prone and it was often better to just call approximation.docID() 
when the current doc ID was needed.

Another thing I'm wondering about is the equals/hashCode impl of this global 
ordinal query: since the documents that match depend on what happens in other 
segments, this query cannot be cached per segment. So maybe it should include 
the current IndexReader in its equals/hashCode comparison in order to work 
correctly with query caches? In the read-only case this would still allow the 
query to be cached, since the current reader never changes, while in the 
read/write case the query will be unlikely to be cached, given that the query 
cache will notice that it does not get reused.

 Add global ordinal based query time join 
 -

 Key: LUCENE-6352
 URL: https://issues.apache.org/jira/browse/LUCENE-6352
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Martijn van Groningen
 Attachments: LUCENE-6352.patch, LUCENE-6352.patch, LUCENE-6352.patch


 Global ordinal based query time join as an alternative to the current query 
 time join. The implementation is faster for subsequent joins between reopens, 
 but requires an OrdinalMap to be built.
 This join has certain restrictions and requirements:
 * A document can only refer to one other document (but can be referred to by 
 one or more documents).
 * A type field must exist on all documents and each document must be 
 categorized to a type. This is to distinguish between the from and to sides.
 * There must be a single sorted doc values field used by both the from and 
 to documents. By encoding the join into a single doc values field it is 
 trivial to build an ordinal map from it.






[jira] [Commented] (LUCENE-6303) CachingWrapperFilter -> CachingWrapperQuery

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386378#comment-14386378
 ] 

ASF subversion and git services commented on LUCENE-6303:
-

Commit 1670006 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1670006 ]

LUCENE-6303: Disable auto-caching in IndexSearcher.

 CachingWrapperFilter -> CachingWrapperQuery
 ---

 Key: LUCENE-6303
 URL: https://issues.apache.org/jira/browse/LUCENE-6303
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6303-disable_auto-caching.patch, LUCENE-6303.patch


 As part of the filter -> query migration, we should migrate the caching 
 wrappers (including the filter cache).
 I think the behaviour should be to delegate to the wrapped query when scores 
 are needed and cache otherwise, like CachingWrapperFilter does today.
 Also the cache should ignore query boosts so that field:value^2 and 
 field:value^3 are considered equal if scores are not needed.






[jira] [Commented] (LUCENE-6303) CachingWrapperFilter -> CachingWrapperQuery

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386396#comment-14386396
 ] 

ASF subversion and git services commented on LUCENE-6303:
-

Commit 1670010 from [~jpountz] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1670010 ]

LUCENE-6303: Fix changelog.

 CachingWrapperFilter -> CachingWrapperQuery
 ---

 Key: LUCENE-6303
 URL: https://issues.apache.org/jira/browse/LUCENE-6303
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6303-disable_auto-caching.patch, LUCENE-6303.patch


 As part of the filter -> query migration, we should migrate the caching 
 wrappers (including the filter cache).
 I think the behaviour should be to delegate to the wrapped query when scores 
 are needed and cache otherwise, like CachingWrapperFilter does today.
 Also the cache should ignore query boosts so that field:value^2 and 
 field:value^3 are considered equal if scores are not needed.






[jira] [Commented] (LUCENE-6303) CachingWrapperFilter -> CachingWrapperQuery

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386395#comment-14386395
 ] 

ASF subversion and git services commented on LUCENE-6303:
-

Commit 1670009 from [~jpountz] in branch 'dev/trunk'
[ https://svn.apache.org/r1670009 ]

LUCENE-6303: Fix changelog.

 CachingWrapperFilter -> CachingWrapperQuery
 ---

 Key: LUCENE-6303
 URL: https://issues.apache.org/jira/browse/LUCENE-6303
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6303-disable_auto-caching.patch, LUCENE-6303.patch


 As part of the filter -> query migration, we should migrate the caching 
 wrappers (including the filter cache).
 I think the behaviour should be to delegate to the wrapped query when scores 
 are needed and cache otherwise, like CachingWrapperFilter does today.
 Also the cache should ignore query boosts so that field:value^2 and 
 field:value^3 are considered equal if scores are not needed.






[jira] [Resolved] (LUCENE-6303) CachingWrapperFilter -> CachingWrapperQuery

2015-03-30 Thread Adrien Grand (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adrien Grand resolved LUCENE-6303.
--
Resolution: Fixed

Committed: auto-caching is now disabled until we figure out how we can fix 
queries so that they are good to use as cache keys.

 CachingWrapperFilter -> CachingWrapperQuery
 ---

 Key: LUCENE-6303
 URL: https://issues.apache.org/jira/browse/LUCENE-6303
 Project: Lucene - Core
  Issue Type: Task
Reporter: Adrien Grand
Assignee: Adrien Grand
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6303-disable_auto-caching.patch, LUCENE-6303.patch


 As part of the filter -> query migration, we should migrate the caching 
 wrappers (including the filter cache).
 I think the behaviour should be to delegate to the wrapped query when scores 
 are needed and cache otherwise, like CachingWrapperFilter does today.
 Also the cache should ignore query boosts so that field:value^2 and 
 field:value^3 are considered equal if scores are not needed.






[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-30 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386392#comment-14386392
 ] 

Per Steffensen commented on SOLR-7236:
--

Thanks for a constructive answer!

bq. Because as long as the network and HTTP are handled by software that is 
outside of Solr, Solr has absolutely no ability to control it.

First of all, that is not true. Second, I am pretty sure that Solr does not 
have any ambition to deal with all the hard low-level network stuff itself. We 
will always use a 3rd-party component for that - Netty, Spray or whatever. So 
it will always be a 3rd-party component that connects to the network, receives 
requests, parses them etc. before handing over control to Solr. So why not let 
Jetty (or any other web-container) do that - they do the job pretty well.

bq. Ideally, you should be able to drop a configuration right in a handler 
definition (such as the one for /select) found in solrconfig.xml, listing 
security credentials (username/password, IP address, perhaps even certificate 
information) that you are willing to accept for that handler, along with 
exceptions or credentials that will allow SolrCloud inter-node communication to 
work.

You can do that with a web-container, and I believe the way you would do it 
would not change much whether you do it with Jetty or Netty. The 
HttpServletRequest handed over by the web-container contains everything you 
need (maybe together with the web-container context), just as it probably would 
with any other network component. You can plug things into the web-container 
just as you probably can with any other network component.
If you give me a more concrete example of what you want to achieve, that you 
believe cannot be achieved with a web-container but can be achieved with the 
other approach, I would love to write the code showing that you are wrong. If I 
can't, I will forever keep quiet - and that in itself is quite an achievement.

bq. Bringing the servlet container under our control as we did with 5.0 (with 
initial work in 4.10) allows us to tell people how to configure the servlet 
container for security in a predictable manner

Yes, and if it was a web-container that had control, you could point to the 
web-container documentation for info about how to configure it.
Even though I think it is an unnecessary move, it is an OK idea to say that 
Solr IS running in Jetty, making us able to tell exactly how to configure 
whatever you need. If you want to use any other web-container, you are on your 
own. But leaving it a web-app behind the scenes would be great, so that 
advanced users can still take advantage of that. The problem, though, is that 
you lock in on Jetty, and Jetty becomes an "implementation detail" of Solr. Do 
not do that if it is not necessary, and I still have not seen very good reasons 
to do it. But at least I recognize that there might be good reasons.

I am not sure what you actually did in 5.0 (with initial work in 4.10). As far 
as I can see, Solr still starts out by starting Jetty. Can you point me to 
some of the most important JIRAs for the remove-web-container initiative? 
Thanks!

bq. but it is still not *Solr* and its configuration that's controlling it.

But it can be, without removing the web-container.
The thing I fear is spending a lot of resources and time removing Jetty and 
replacing it with lots of other 3rd-party components (e.g. including Netty), 
and at best just reaching status quo after a year or two. This entire umbrella 
issue (SOLR-7236) is only/mainly necessary because of the 
moving-out-of-the-web-container initiative.
The fact that Solr runs in a web-container makes it very flexible - e.g. my 
projects have significant changes to both web.xml and jetty.xml. Other people 
might have similar setups, just with Tomcat or whatever. Establishing the same 
kind of flexibility without a web-container will take years.
In my organization we started out using ElasticSearch, but for political 
reasons we moved to SolrCloud. The first thing that made me happy about that 
was the fact that Solr(Cloud) is a web-app, because I know exactly how 
web-apps work; they are standardized and flexible - I believe a lot of people 
feel the same way.

At least, if you insist on removing the web-container, make sure not to do it 
before all the flexibility it gives can somehow be achieved in another way. If 
you really wanted to do cool stuff in this area, making Solr(Cloud) based on 
dependency injection (configuration and/or annotation) would be great - e.g. 
using Spring or Guice - both for top-level Solr and for sub-parts of Solr. 
E.g. the fact that solrconfig.xml is a self-invented configuration structure 
that screams to be replaced by (de facto) standardized dependency injection is 
a major problem.

Sorry for partly hijacking the issue, [~janhoy] - I did not manage to keep 
this 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4614 - Still Failing!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4614/
Java: 32bit/jdk1.8.0_40 -client -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

Error Message:
Captured an uncaught exception in thread: Thread[id=232, name=Thread-213, 
state=RUNNABLE, group=TGRP-TestNRTThreads]

Stack Trace:
com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an uncaught 
exception in thread: Thread[id=232, name=Thread-213, state=RUNNABLE, 
group=TGRP-TestNRTThreads]
Caused by: java.lang.RuntimeException: 
java.util.ConcurrentModificationException: Removal from the cache failed! This 
is probably due to a query which has been modified after having been put into  
the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.TermQuery], query: [body:governo]
at __randomizedtesting.SeedInfo.seed([A135F63E1EE9790F]:0)
at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:397)
Caused by: java.util.ConcurrentModificationException: Removal from the cache 
failed! This is probably due to a query which has been modified after having 
been put into  the cache or a badly implemented clone(). Query class: [class 
org.apache.lucene.search.TermQuery], query: [body:governo]
at 
org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:267)
at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:563)
at 
org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:68)
at org.apache.lucene.search.Weight.bulkScorer(Weight.java:127)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:351)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:470)
at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:455)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:382)
at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runQuery(ThreadedIndexingAndSearchingTestCase.java:668)
at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.access$100(ThreadedIndexingAndSearchingTestCase.java:58)
at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:383)




Build Log:
[...truncated 372 lines...]
   [junit4] Suite: org.apache.lucene.index.TestNRTThreads
   [junit4]   1 Thread-216: hit exc
   [junit4]   1 Thread-214: hit exc
   [junit4]   1 Thread-212: hit exc
   [junit4]   1 java.util.ConcurrentModificationException: Removal from the 
cache failed! This is probably due to a query which has been modified after 
having been put into  the cache or a badly implemented clone(). Query class: 
[class org.apache.lucene.search.TermQuery], query: [body:governo]
   [junit4]   1at 
org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
   [junit4]   1at 
org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:267)
   [junit4]   1at 
org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:563)
   [junit4]   1at 
org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:68)
   [junit4]   1at 
org.apache.lucene.search.Weight.bulkScorer(Weight.java:127)
   [junit4]   1at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
   [junit4]   1at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:351)
   [junit4]   1at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:470)
   [junit4]   1at 
org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:455)
   [junit4]   1at 
org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:382)
   [junit4]   1at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runQuery(ThreadedIndexingAndSearchingTestCase.java:670)
   [junit4]   1at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.access$100(ThreadedIndexingAndSearchingTestCase.java:58)
   [junit4]   1at 
org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:383)
   [junit4]   1 Thread-213: hit exc
   [junit4]   1 java.util.ConcurrentModificationException: Removal from the 
cache failed! This is probably due to a query which has been modified after 
having been put into  the cache or a badly implemented clone(). Query class: 
[class org.apache.lucene.search.TermQuery], query: [body:governo]
   [junit4]   1at 
org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
   [junit4]   1at 

[jira] [Commented] (SOLR-6816) Review SolrCloud Indexing Performance.

2015-03-30 Thread Per Steffensen (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6816?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14386432#comment-14386432
 ] 

Per Steffensen commented on SOLR-6816:
--

bq. I think you mis-understood my point. I wasn't talking about retrying 
documents in the same UpdateRequest. If a Map/Reduce task fails, the HDFS block 
is retried entirely, meaning a Hadoop-based indexing job may send the same docs 
that have already been added, so using overwrite=false is dangerous when doing 
this type of bulk indexing. The solution proposed in SOLR-3382 would be great 
to have as well though.

Well, we might be mis-understanding each other. I am not talking about 
retrying documents in the same UpdateRequest either. What we have:
Our indexing client (something not in Solr - think of it as the Map/Reduce job) 
decides to do 1000 update-doc-commands U1, U2, ..., U1000 (add-doc and 
delete-doc commands), by sending one bulk job containing all of those to 
Solr-node S1. S1 handles some of the Us itself and forwards the other Us to the 
other Solr-nodes, depending on routing. For simplicity let's say that we have 
three Solr-nodes S1, S2 and S3, and that S1 handles U1-U333 itself, forwards 
U334-U666 to S2 and U667-U1000 to S3. Now let's say that U100, U200, U400, 
U500, U700 and U800 fail (two on each Solr-node), and the rest succeed. S1 gets 
that information back from S2 and S3 (including reasons for each U that 
failed), and is able to send a response to our indexing client saying that all 
was a success, except that U100, U200, U400, U500, U700 and U800 failed, and 
why they failed. Some might fail due to DocumentAlreadyExists (if U was about 
creating a new document, assuming that it does not already exist), others might 
fail due to VersionConflict (if U was about updating an existing document and 
includes its last known (to the client) version, but the document at the server 
has a higher version number), and others might fail due to DocumentDoesNotExist 
(if U was about updating an existing document, but that document does not exist 
(any longer) at the server). Our indexing client takes note of that combined 
response from S1, performs the appropriate actions (e.g. version lookups) and 
sends a new request to the Solr-cluster now only including U100', U200', U400', 
U500', U700' and U800'.
We have done it like that for a long time, using our solution to SOLR-3382 (and 
our solution to SOLR-3178). I would expect a Map/Reduce job could do the same, 
playing the role of the indexing client - essentially only resending (maybe by 
issuing a new Map/Reduce job from the reduce phase of the first Map/Reduce job) 
the (modified) update-commands that failed the first time.

 Review SolrCloud Indexing Performance.
 --

 Key: SOLR-6816
 URL: https://issues.apache.org/jira/browse/SOLR-6816
 Project: Solr
  Issue Type: Task
  Components: SolrCloud
Reporter: Mark Miller
Priority: Critical
 Attachments: SolrBench.pdf


 We have never really focused on indexing performance, just correctness and 
 low hanging fruit. We need to vet the performance and try to address any 
 holes.
 Note: A common report is that adding any replication is very slow.






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2858 - Still Failing

2015-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2858/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:35653/pddwr/oq/c8n_1x3_commits_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:35653/pddwr/oq/c8n_1x3_commits_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([54CB20103163D96E:DC9F1FCA9F9FB496]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4614 - Still Failing!

2015-03-30 Thread Adrien Grand
I'm looking into it.

On Mon, Mar 30, 2015 at 10:50 AM, Policeman Jenkins Server
jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4614/
 Java: 32bit/jdk1.8.0_40 -client -XX:+UseSerialGC

 1 tests failed.
 FAILED:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

 Error Message:
 Captured an uncaught exception in thread: Thread[id=232, name=Thread-213, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=232, name=Thread-213, state=RUNNABLE, 
 group=TGRP-TestNRTThreads]
 Caused by: java.lang.RuntimeException: 
 java.util.ConcurrentModificationException: Removal from the cache failed! 
 This is probably due to a query which has been modified after having been put 
 into  the cache or a badly implemented clone(). Query class: [class 
 org.apache.lucene.search.TermQuery], query: [body:governo]
 at __randomizedtesting.SeedInfo.seed([A135F63E1EE9790F]:0)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:397)
 Caused by: java.util.ConcurrentModificationException: Removal from the cache 
 failed! This is probably due to a query which has been modified after having 
 been put into  the cache or a badly implemented clone(). Query class: [class 
 org.apache.lucene.search.TermQuery], query: [body:governo]
 at 
 org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
 at 
 org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:267)
 at 
 org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:563)
 at 
 org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:68)
 at org.apache.lucene.search.Weight.bulkScorer(Weight.java:127)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:351)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:470)
 at 
 org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:455)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:382)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runQuery(ThreadedIndexingAndSearchingTestCase.java:668)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.access$100(ThreadedIndexingAndSearchingTestCase.java:58)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:383)




 Build Log:
 [...truncated 372 lines...]
[junit4] Suite: org.apache.lucene.index.TestNRTThreads
[junit4]   1 Thread-216: hit exc
[junit4]   1 Thread-214: hit exc
[junit4]   1 Thread-212: hit exc
[junit4]   1 java.util.ConcurrentModificationException: Removal from the 
 cache failed! This is probably due to a query which has been modified after 
 having been put into  the cache or a badly implemented clone(). Query class: 
 [class org.apache.lucene.search.TermQuery], query: [body:governo]
[junit4]   1at 
 org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
[junit4]   1at 
 org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:267)
[junit4]   1at 
 org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:563)
[junit4]   1at 
 org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:68)
[junit4]   1at 
 org.apache.lucene.search.Weight.bulkScorer(Weight.java:127)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:351)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:470)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:455)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:382)
[junit4]   1at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runQuery(ThreadedIndexingAndSearchingTestCase.java:670)
[junit4]   1at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.access$100(ThreadedIndexingAndSearchingTestCase.java:58)
[junit4]   1at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:383)
[junit4]   1 Thread-213: hit exc
[junit4]   1 java.util.ConcurrentModificationException: Removal from the 
 cache failed! This is probably due to a query which has been modified after 
 having been put into  the cache or a badly implemented clone(). Query class: 
 

Re: [JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4614 - Still Failing!

2015-03-30 Thread Adrien Grand
This is again due to queries being mutated after they entered the
query cache. I fixed the test to copy the bytes before creating term
queries.
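
Roughly what the fix amounts to (a sketch, not the exact test code):

{code}
// A TermsEnum may reuse its BytesRef, so deep-copy the bytes before building
// the TermQuery; otherwise a query already sitting in the cache can be mutated.
BytesRef shared = termsEnum.term();  // transient, reused buffer
Query query = new TermQuery(new Term("body", BytesRef.deepCopyOf(shared)));
{code}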

On Mon, Mar 30, 2015 at 11:31 AM, Adrien Grand jpou...@gmail.com wrote:
 I'm looking into it.

 On Mon, Mar 30, 2015 at 10:50 AM, Policeman Jenkins Server
 jenk...@thetaphi.de wrote:
 Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4614/
 Java: 32bit/jdk1.8.0_40 -client -XX:+UseSerialGC

 1 tests failed.
 FAILED:  org.apache.lucene.index.TestNRTThreads.testNRTThreads

 Error Message:
 Captured an uncaught exception in thread: Thread[id=232, name=Thread-213, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]

 Stack Trace:
 com.carrotsearch.randomizedtesting.UncaughtExceptionError: Captured an 
 uncaught exception in thread: Thread[id=232, name=Thread-213, 
 state=RUNNABLE, group=TGRP-TestNRTThreads]
 Caused by: java.lang.RuntimeException: 
 java.util.ConcurrentModificationException: Removal from the cache failed! 
 This is probably due to a query which has been modified after having been 
 put into  the cache or a badly implemented clone(). Query class: [class 
 org.apache.lucene.search.TermQuery], query: [body:governo]
 at __randomizedtesting.SeedInfo.seed([A135F63E1EE9790F]:0)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:397)
 Caused by: java.util.ConcurrentModificationException: Removal from the cache 
 failed! This is probably due to a query which has been modified after having 
 been put into  the cache or a badly implemented clone(). Query class: [class 
 org.apache.lucene.search.TermQuery], query: [body:governo]
 at 
 org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
 at 
 org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:267)
 at 
 org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:563)
 at 
 org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:68)
 at org.apache.lucene.search.Weight.bulkScorer(Weight.java:127)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:351)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:470)
 at 
 org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:455)
 at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:382)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runQuery(ThreadedIndexingAndSearchingTestCase.java:668)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.access$100(ThreadedIndexingAndSearchingTestCase.java:58)
 at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:383)




 Build Log:
 [...truncated 372 lines...]
[junit4] Suite: org.apache.lucene.index.TestNRTThreads
[junit4]   1 Thread-216: hit exc
[junit4]   1 Thread-214: hit exc
[junit4]   1 Thread-212: hit exc
[junit4]   1 java.util.ConcurrentModificationException: Removal from the 
 cache failed! This is probably due to a query which has been modified after 
 having been put into  the cache or a badly implemented clone(). Query class: 
 [class org.apache.lucene.search.TermQuery], query: [body:governo]
[junit4]   1at 
 org.apache.lucene.search.LRUQueryCache.evictIfNecessary(LRUQueryCache.java:284)
[junit4]   1at 
 org.apache.lucene.search.LRUQueryCache.putIfAbsent(LRUQueryCache.java:267)
[junit4]   1at 
 org.apache.lucene.search.LRUQueryCache$CachingWrapperWeight.scorer(LRUQueryCache.java:563)
[junit4]   1at 
 org.apache.lucene.search.ConstantScoreWeight.scorer(ConstantScoreWeight.java:68)
[junit4]   1at 
 org.apache.lucene.search.Weight.bulkScorer(Weight.java:127)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:544)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:351)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:470)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.searchAfter(IndexSearcher.java:455)
[junit4]   1at 
 org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:382)
[junit4]   1at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.runQuery(ThreadedIndexingAndSearchingTestCase.java:670)
[junit4]   1at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase.access$100(ThreadedIndexingAndSearchingTestCase.java:58)
[junit4]   1at 
 org.apache.lucene.index.ThreadedIndexingAndSearchingTestCase$2.run(ThreadedIndexingAndSearchingTestCase.java:383)
[junit4]   1 Thread-213: hit exc
[junit4]   1 

[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40) - Build # 4497 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4497/
Java: 32bit/jdk1.8.0_40 -client -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test

Error Message:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:65320/pv_ou/rv/collection1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: 
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:65320/pv_ou/rv/collection1
at 
__randomizedtesting.SeedInfo.seed([93937A046DDBCE1:816D087AE821D119]:0)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.directUpdate(CloudSolrClient.java:625)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:948)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:174)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:139)
at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:153)
at 
org.apache.solr.cloud.RecoveryAfterSoftCommitTest.test(RecoveryAfterSoftCommitTest.java:88)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section

2015-03-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386641#comment-14386641
 ] 

Joel Bernstein commented on SOLR-6709:
--

Just reviewed the test case: all the groups are accessed directly by name, 
which simulates access into a map. The order of the documents within the groups 
matters in the test case, but the order of the groups themselves does not.

The only thing I worry about is that someone might infer that order matters if 
we use an ordered map, but we could document this more clearly.

 ClassCastException in QueryResponse after applying XMLResponseParser on a 
 response containing an expanded section
 ---

 Key: SOLR-6709
 URL: https://issues.apache.org/jira/browse/SOLR-6709
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Reporter: Simon Endele
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-6709.patch, SOLR-6709.patch, test-response.xml


 Shouldn't the following code work on the attached input file?
 It matches the structure of a Solr response with wt=xml.
 {code}import java.io.InputStream;
 import org.apache.solr.client.solrj.ResponseParser;
 import org.apache.solr.client.solrj.impl.XMLResponseParser;
 import org.apache.solr.client.solrj.response.QueryResponse;
 import org.apache.solr.common.util.NamedList;
 import org.junit.Test;
 public class ParseXmlExpandedTest {
   @Test
   public void test() {
     ResponseParser responseParser = new XMLResponseParser();
     InputStream inStream = getClass()
         .getResourceAsStream("test-response.xml");
     NamedList<Object> response = responseParser
         .processResponse(inStream, "UTF-8");
     QueryResponse queryResponse = new QueryResponse(response, null);
   }
 }{code}
 Unexpectedly (for me), it throws a
 java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap 
 cannot be cast to java.util.Map
 at 
 org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126)
 Am I missing something, is XMLResponseParser deprecated or something?
 We use a setup like this to mock a QueryResponse for unit tests in our 
 service that post-processes the Solr response.
 Obviously, it works with the javabin format which SolrJ uses internally.
 But that is no appropriate format for unit tests, where the response should 
 be human readable.
 I think there's some conversion missing in QueryResponse or XMLResponseParser.
 Note: The null value supplied as SolrServer argument to the constructor of 
 QueryResponse shouldn't have an effect as the error occurs before the 
 parameter is even used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7325) Change Slice state into enum

2015-03-30 Thread Shai Erera (JIRA)
Shai Erera created SOLR-7325:


 Summary: Change Slice state into enum
 Key: SOLR-7325
 URL: https://issues.apache.org/jira/browse/SOLR-7325
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Shai Erera


Slice state is currently interacted with as a string. It is IMO not trivial to 
understand which values it can be compared to, in part because the Replica and 
Slice states are located in different classes, some repeating the same constant 
names and values.

Also, it's not very clear when a Slice gets into which state and what that 
means.

I think that if it's an enum, documented briefly in the code, it would be 
easier to interact with through code. I don't mind if we include more extensive 
documentation in the reference guide / wiki and refer people there for more 
details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7325) Change Slice state into enum

2015-03-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-7325:
-
Attachment: SOLR-7325.patch

Patch implements a basic change, in order to get some feedback first:

* Slice.State declares 4 values: ACTIVE, INACTIVE, CONSTRUCTION, RECOVERY. Are 
these all the states or did I miss some?

* I documented these very briefly, mostly from what I understood from the code, 
and some chats I had w/ [~anshumg]. I would definitely appreciate a review on 
this!

* Slice.state is held internally as an enum, but still exposed as a String:
** Backwards-compatibility-wise, is it OK if we change Slice.getState() to 
return the enum? It's an API break, but I assume it's a pretty expert API and 
the migration is really easy.
** Note that it's still written/read as a String.

* I didn't yet get rid of the state constants:
** Is it OK to just remove them, or should I deprecate them like I did for 
STATE?

In this issue I would like to handle Slice, and change Replica separately. 
After I get some feedback, and if there are no objections, I'll move the rest 
of the code to use the enum instead of the string.
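
For reference, this is roughly the shape of the enum in the patch (the javadoc 
wording below is mine, not final):

{code}
public enum State {
  ACTIVE,        // the slice is serving queries and updates
  INACTIVE,      // e.g. a parent slice after a successful split
  CONSTRUCTION,  // a sub-slice that is being built during a split
  RECOVERY;      // a slice that is being restored / rebuilt

  @Override
  public String toString() {
    // still written to and read from ZK as a lower-case string
    return super.toString().toLowerCase(Locale.ROOT);
  }
}
{code}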

 Change Slice state into enum
 

 Key: SOLR-7325
 URL: https://issues.apache.org/jira/browse/SOLR-7325
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Shai Erera
 Attachments: SOLR-7325.patch


 Slice state is currently interacted with as a string. It is IMO not trivial 
 to understand which values it can be compared to, in part because the Replica 
 and Slice states are located in different classes, some repeating the same 
 constant names and values.
 Also, it's not very clear when a Slice gets into which state and what that 
 means.
 I think that if it's an enum, documented briefly in the code, it would be 
 easier to interact with through code. I don't mind if we include more 
 extensive documentation in the reference guide / wiki and refer people there 
 for more details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2859 - Still Failing

2015-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2859/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:39463/l_w/m/c8n_1x3_commits_shard1_replica1

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:39463/l_w/m/c8n_1x3_commits_shard1_replica1
at 
__randomizedtesting.SeedInfo.seed([1C620EC4DA0E131B:9436311E74F27EE3]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
  

[GitHub] lucene-solr pull request: Using a constant instead of hardcoded st...

2015-03-30 Thread mirelon
GitHub user mirelon opened a pull request:

https://github.com/apache/lucene-solr/pull/139

Using a constant instead of hardcoded string



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mirelon/lucene-solr patch-1

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/lucene-solr/pull/139.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #139


commit d3e25e6d2b7d40727b9c556e46da1142865d2f76
Author: Michal Kovac m...@github.ksp.sk
Date:   2015-03-30T13:19:46Z

Using a constant instead of hardcoded string




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7324) No need to call isIndexStale if full copy is already needed

2015-03-30 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated SOLR-7324:

Attachment: SOLR-7324.patch

Hi Stephan,

Nice catch! Yes, we don't need to check again if we already know that we need 
to download the entire index.

Attached a patch against trunk which addresses the issue. I still need to run 
the tests.
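
The gist of the change, simplified (variable names follow IndexFetcher loosely):

{code}
boolean isFullCopyNeeded =
    IndexDeletionPolicyWrapper.getCommitTimestamp(commit) >= latestVersion
        || commit.getGeneration() >= latestGeneration
        || forceReplication;
// Only probe for a stale index when we are not downloading everything anyway:
if (!isFullCopyNeeded && isIndexStale(indexDir)) {
  isFullCopyNeeded = true;
}
{code}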

 No need to call isIndexStale if full copy is already needed
 ---

 Key: SOLR-7324
 URL: https://issues.apache.org/jira/browse/SOLR-7324
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 4.10.4
Reporter: Stephan Lagraulet
 Attachments: SOLR-7324.patch


 During replication, we had a message "File _3ww7_Lucene41_0.tim expected to 
 be 2027667 while it is 1861076" when in fact there was already a match on 
 commit.getGeneration() >= latestGeneration.
 So this extra operation is not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section

2015-03-30 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386592#comment-14386592
 ] 

Varun Thacker commented on SOLR-6709:
-

Hi Joel,

I would really appreciate a review on the patch. Not sure what the contract is 
on the ordering of group keys.

 ClassCastException in QueryResponse after applying XMLResponseParser on a 
 response containing an expanded section
 ---

 Key: SOLR-6709
 URL: https://issues.apache.org/jira/browse/SOLR-6709
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Reporter: Simon Endele
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-6709.patch, SOLR-6709.patch, test-response.xml


 Shouldn't the following code work on the attached input file?
 It matches the structure of a Solr response with wt=xml.
 {code}import java.io.InputStream;
 import org.apache.solr.client.solrj.ResponseParser;
 import org.apache.solr.client.solrj.impl.XMLResponseParser;
 import org.apache.solr.client.solrj.response.QueryResponse;
 import org.apache.solr.common.util.NamedList;
 import org.junit.Test;
 public class ParseXmlExpandedTest {
   @Test
   public void test() {
     ResponseParser responseParser = new XMLResponseParser();
     InputStream inStream = getClass()
         .getResourceAsStream("test-response.xml");
     NamedList<Object> response = responseParser
         .processResponse(inStream, "UTF-8");
     QueryResponse queryResponse = new QueryResponse(response, null);
   }
 }{code}
 Unexpectedly (for me), it throws a
 java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap 
 cannot be cast to java.util.Map
 at 
 org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126)
 Am I missing something, is XMLResponseParser deprecated or something?
 We use a setup like this to mock a QueryResponse for unit tests in our 
 service that post-processes the Solr response.
 Obviously, it works with the javabin format which SolrJ uses internally.
 But that is no appropriate format for unit tests, where the response should 
 be human readable.
 I think there's some conversion missing in QueryResponse or XMLResponseParser.
 Note: The null value supplied as SolrServer argument to the constructor of 
 QueryResponse shouldn't have an effect as the error occurs before the 
 parameter is even used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section

2015-03-30 Thread Joel Bernstein (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386615#comment-14386615
 ] 

Joel Bernstein commented on SOLR-6709:
--

Hi Varun,

There was no contract on the ordering of group keys. The groups are meant to be 
read into a Map and then accessed by key.
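
In SolrJ terms, consumption is expected to look something like this (the group 
key is hypothetical):

{code}
// Expanded groups are looked up by key; the relative order of keys is undefined.
Map<String, SolrDocumentList> expanded = queryResponse.getExpandedResults();
SolrDocumentList group = expanded.get("someGroupKey");
{code}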

 ClassCastException in QueryResponse after applying XMLResponseParser on a 
 response containing an expanded section
 ---

 Key: SOLR-6709
 URL: https://issues.apache.org/jira/browse/SOLR-6709
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Reporter: Simon Endele
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-6709.patch, SOLR-6709.patch, test-response.xml


 Shouldn't the following code work on the attached input file?
 It matches the structure of a Solr response with wt=xml.
 {code}import java.io.InputStream;
 import org.apache.solr.client.solrj.ResponseParser;
 import org.apache.solr.client.solrj.impl.XMLResponseParser;
 import org.apache.solr.client.solrj.response.QueryResponse;
 import org.apache.solr.common.util.NamedList;
 import org.junit.Test;
 public class ParseXmlExpandedTest {
   @Test
   public void test() {
     ResponseParser responseParser = new XMLResponseParser();
     InputStream inStream = getClass()
         .getResourceAsStream("test-response.xml");
     NamedList<Object> response = responseParser
         .processResponse(inStream, "UTF-8");
     QueryResponse queryResponse = new QueryResponse(response, null);
   }
 }{code}
 Unexpectedly (for me), it throws a
 java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap 
 cannot be cast to java.util.Map
 at 
 org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126)
 Am I missing something, is XMLResponseParser deprecated or something?
 We use a setup like this to mock a QueryResponse for unit tests in our 
 service that post-processes the Solr response.
 Obviously, it works with the javabin format which SolrJ uses internally.
 But that is no appropriate format for unit tests, where the response should 
 be human readable.
 I think there's some conversion missing in QueryResponse or XMLResponseParser.
 Note: The null value supplied as SolrServer argument to the constructor of 
 QueryResponse shouldn't have an effect as the error occurs before the 
 parameter is even used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6709) ClassCastException in QueryResponse after applying XMLResponseParser on a response containing an expanded section

2015-03-30 Thread Varun Thacker (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386640#comment-14386640
 ] 

Varun Thacker commented on SOLR-6709:
-

Thanks Joel for clarifying that. 

I'll run the tests and commit it shortly then.

 ClassCastException in QueryResponse after applying XMLResponseParser on a 
 response containing an expanded section
 ---

 Key: SOLR-6709
 URL: https://issues.apache.org/jira/browse/SOLR-6709
 Project: Solr
  Issue Type: Bug
  Components: SolrJ
Reporter: Simon Endele
Assignee: Varun Thacker
 Fix For: Trunk, 5.1

 Attachments: SOLR-6709.patch, SOLR-6709.patch, test-response.xml


 Shouldn't the following code work on the attached input file?
 It matches the structure of a Solr response with wt=xml.
 {code}import java.io.InputStream;
 import org.apache.solr.client.solrj.ResponseParser;
 import org.apache.solr.client.solrj.impl.XMLResponseParser;
 import org.apache.solr.client.solrj.response.QueryResponse;
 import org.apache.solr.common.util.NamedList;
 import org.junit.Test;
 public class ParseXmlExpandedTest {
   @Test
   public void test() {
     ResponseParser responseParser = new XMLResponseParser();
     InputStream inStream = getClass()
         .getResourceAsStream("test-response.xml");
     NamedList<Object> response = responseParser
         .processResponse(inStream, "UTF-8");
     QueryResponse queryResponse = new QueryResponse(response, null);
   }
 }{code}
 Unexpectedly (for me), it throws a
 java.lang.ClassCastException: org.apache.solr.common.util.SimpleOrderedMap 
 cannot be cast to java.util.Map
 at 
 org.apache.solr.client.solrj.response.QueryResponse.setResponse(QueryResponse.java:126)
 Am I missing something, is XMLResponseParser deprecated or something?
 We use a setup like this to mock a QueryResponse for unit tests in our 
 service that post-processes the Solr response.
 Obviously, it works with the javabin format which SolrJ uses internally.
 But that is no appropriate format for unit tests, where the response should 
 be human readable.
 I think there's some conversion missing in QueryResponse or XMLResponseParser.
 Note: The null value supplied as SolrServer argument to the constructor of 
 QueryResponse shouldn't have an effect as the error occurs before the 
 parameter is even used.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: java 9 compiler issues with ByteBuffer covariant overrides

2015-03-30 Thread Robert Muir
Why don't they just fail on the -source and -target options then? If
they don't work at all... this is pretty ridiculous.

On Mon, Mar 30, 2015 at 3:20 AM, Uwe Schindler u...@thetaphi.de wrote:
 Hi Dawid, hi Robert,

 Exactly! We had this issue on earlier versions of the JDK too, and the OpenJDK 
 people don't want to deal with those issues. One occurrence was in Java 8 with 
 isAnnotationPresent() as a default method, but that one was fixed because it 
 also affected code compiled and executed with Java 8 (there the compiler, not 
 the runtime, faced the same issue). But covariant return types, which are the 
 problem here, are a similar issue we also had in the Lucene 2-3 backwards 
 compatibility game: we initially did not change the return types of all 
 clone() methods to be covariant, because it would have caused issues like this 
 for users of Lucene who do in-place upgrades (this is a long time back). Java 
 is now facing the same problem. I really like those covariant returns (they 
 allow nice builder method chaining), but it is impossible for the compiler 
 people to take care of this unless you compile against an older rt.jar.

 This is the reason why, since Java 7, the following message is printed when 
 you compile with an older target but a newer compiler:
 [javac] warning: [options] bootstrap class path not set in conjunction with 
 -source 1.x
 (and this warning message is the reason why the OpenJDK people say it's 
 broken and you should compile with the version you want to release for; I am 
 not sure why this message is no longer printed in Lucene, but this could be 
 because of different warning settings).
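
 For illustration, a safe cross-compile needs the old class library on the 
 bootclasspath, something like (paths illustrative):

{code}
javac -source 1.8 -target 1.8 \
  -bootclasspath /path/to/jdk1.8.0/jre/lib/rt.jar \
  MyClass.java
{code}

 Without -bootclasspath, javac resolves ByteBuffer against the newer JDK's 
 class library and bakes the covariant overrides, which do not exist on Java 8, 
 into the emitted bytecode.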

 This just makes one statement from the Lucene Release TODO very important, and 
 I am glad that it's tested by the smoker (looking at the META-INF/manifest of 
 our JAR files): "Build the code and javadocs, and run the unit tests: ant 
 clean javadocs test. Make sure that you are actually using the minimum 
 compiler version supported for the release. For example, 4.x releases are on 
 Java6 so make sure that you use Java6 for the release workflow."

 Uwe

 -
 Uwe Schindler
 H.-H.-Meier-Allee 63, D-28213 Bremen
 http://www.thetaphi.de
 eMail: u...@thetaphi.de


 -Original Message-
 From: dawid.we...@gmail.com [mailto:dawid.we...@gmail.com] On Behalf
 Of Dawid Weiss
 Sent: Monday, March 30, 2015 8:47 AM
 To: dev@lucene.apache.org
 Subject: Re: java 9 compiler issues with ByteBuffer covariant overrides

 Hi Robert,

  This was discussed on core-libs at some point, but I can't find the thread 
  and the outcome of that conversation right now (damn it).
 
  I believe the answer was that these covariant changes are forward-compatible, 
  so existing code works with 1.9; if you're compiling with backward 
  compatibility in mind, you should compile against the target JDK of your 
  choice.

 Dawid

 On Mon, Mar 30, 2015 at 4:19 AM, Robert Muir rcm...@gmail.com wrote:
  Hi,
 
  If I compile lucene with a java 9 compiler (using -source 1.8 -target
  1.8 like our build does), then the resulting jar file cannot actually
  be used with a java 8 JVM.
 
  The reason is, in java 9 ByteBuffer.class got some new covariant overrides:
  e.g. ByteBuffer.java has position(int) that returns ByteBuffer, but
  this does not exist on java 8 (the position method is only in
  Buffer.java returning Buffer).
 
  This leads to an exception like this (I am sure there are other
  problems, its just the first one you will hit):
 
  Exception in thread main java.lang.NoSuchMethodError:
  java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer;
  at
 org.apache.lucene.store.ByteBufferIndexInput$SingleBufferImpl.init(Byte
 BufferIndexInput.java:414)
  at
 org.apache.lucene.store.ByteBufferIndexInput.newInstance(ByteBufferInd
 exInput.java:55)
  at
 org.apache.lucene.store.MMapDirectory.openInput(MMapDirectory.java:21
 6)
  at
 org.apache.lucene.store.Directory.openChecksumInput(Directory.java:110)
  at
 org.apache.lucene.index.SegmentInfos.readCommit(SegmentInfos.java:268
 )
  at
 org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:359)
  at
 org.apache.lucene.index.SegmentInfos$1.doBody(SegmentInfos.java:356)
  at
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfo
 s.java:574)
  at
 org.apache.lucene.index.SegmentInfos$FindSegmentsFile.run(SegmentInfo
 s.java:526)
  at
 org.apache.lucene.index.SegmentInfos.readLatestCommit(SegmentInfos.ja
 va:361)
  at
  org.apache.lucene.index.DirectoryReader.listCommits(DirectoryReader.ja
  va:228)
 
  Is this expected behavior? Do we need to fix our build to also require
  a java 8 JDK on the developers machine, and set bootclasspath, or is
  -source/-target 1.8 supposed to just work without it like it did
  before?
 
  -
  To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org For
  additional commands, e-mail: dev-h...@lucene.apache.org
 

 

[jira] [Created] (LUCENE-6376) Spatial PointVectorStrategy should use DocValues

2015-03-30 Thread David Smiley (JIRA)
David Smiley created LUCENE-6376:


 Summary: Spatial PointVectorStrategy should use DocValues 
 Key: LUCENE-6376
 URL: https://issues.apache.org/jira/browse/LUCENE-6376
 Project: Lucene - Core
  Issue Type: Improvement
  Components: modules/spatial
Reporter: David Smiley


PointVectorStrategy.createIndexableFields should be using DocValues, like 
BBoxStrategy does.  Without this, UninvertingReader is required.
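
A sketch of the intent (field handles illustrative), mirroring what 
BBoxStrategy already does:

{code}
// Emit doc values alongside the indexed numerics so that sorting and distance
// scoring no longer require wrapping the reader in UninvertingReader:
Field[] fields = new Field[] {
    new DoubleField(fieldNameX, point.getX(), Field.Store.NO),
    new DoubleField(fieldNameY, point.getY(), Field.Store.NO),
    new DoubleDocValuesField(fieldNameX, point.getX()),
    new DoubleDocValuesField(fieldNameY, point.getY())
};
{code}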



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7326) Reduce hl.maxAnalyzedChars budget for multi-valued fields in the default highlighter

2015-03-30 Thread David Smiley (JIRA)
David Smiley created SOLR-7326:
--

 Summary: Reduce hl.maxAnalyzedChars budget for multi-valued fields 
in the default highlighter
 Key: SOLR-7326
 URL: https://issues.apache.org/jira/browse/SOLR-7326
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley


In DefaultSolrHighlighter, the hl.maxAnalyzedChars figure is used to constrain 
how much text is analyzed before the highlighter stops, in the interest of 
performance.  For a multi-valued field, it effectively treats each value anew, 
no matter how much text was previously analyzed for other values of the same 
field in the current document. The PostingsHighlighter doesn't work this 
way -- hl.maxAnalyzedChars is effectively the total budget for a field for a 
document, no matter how many values there might be; it's not reset for each 
value.  I think this makes more sense.  When we loop over the values, we should 
subtract the length of the value just checked from hl.maxAnalyzedChars.  The 
motivation here is consistency with PostingsHighlighter, and to allow 
hl.maxAnalyzedChars to be pushed down to term vector uninversion, which 
wouldn't be possible for multi-valued fields with the way this parameter is 
used today.
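
The proposed accounting, roughly (the highlight call is a hypothetical 
stand-in):

{code}
int remaining = maxAnalyzedChars;    // hl.maxAnalyzedChars: one budget per field per doc
for (String value : fieldValues) {
  if (remaining <= 0) break;         // budget exhausted for this field
  highlightValue(value, remaining);  // analyze at most 'remaining' chars of this value
  remaining -= value.length();       // charge the whole value against the budget
}
{code}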

Interestingly, I noticed Solr's use of FastVectorHighlighter doesn't honor 
hl.maxAnalyzedChars as the FVH doesn't have a knob for that.  It does have 
hl.phraseLimit which is a limit that could be used for a similar purpose, 
albeit applied differently.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (SOLR-7327) DefaultSolrHighlighter should lazily create a FVH FieldQuery.

2015-03-30 Thread David Smiley (JIRA)
David Smiley created SOLR-7327:
--

 Summary: DefaultSolrHighlighter should lazily create a FVH 
FieldQuery.
 Key: SOLR-7327
 URL: https://issues.apache.org/jira/browse/SOLR-7327
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor


DefaultSolrHighlighter switches between the standard/default/classic 
Highlighter and the FastVectorHighlighter, depending on parameters and field 
options.  In doHighlighting(), it loops over the docs, then loops over the 
highlighted fields, then decides to use one highlighter or the other.  Outside 
of the doc loop it creates a FastVectorHighlighter instance (cheap) and a 
related FieldQuery object that may or may not be cheap.  fvh.getFieldQuery 
takes an IndexReader instance that will be used for certain queries like 
MultiTermQuery (e.g. wildcards).  We shouldn't be doing this unless we know 
we'll actually need it -- it should be lazily constructed.
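
Something along these lines (simplified):

{code}
FieldQuery fvhFieldQuery = null;  // currently built eagerly, outside the doc loop
for (int docId : docIds) {
  for (String fieldName : fieldNames) {
    if (useFastVectorHighlighter(params, schemaField)) {
      if (fvhFieldQuery == null) {
        // pay for getFieldQuery (which may rewrite MultiTermQueries against
        // the reader) only when the FVH branch is actually taken
        fvhFieldQuery = fvh.getFieldQuery(query, reader);
      }
      // ... highlight this field with fvhFieldQuery ...
    }
  }
}
{code}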



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-30 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386754#comment-14386754
 ] 

Ferenczi Jim commented on SOLR-7319:


"If there is a lot of other MMAP write activity, which is precisely how Lucene 
accomplishes indexing and merging" => Are you sure about this statement? 
MMapDirectory uses mmap for reads and a simple RandomAccessFile for writes. I 
don't know how RandomAccessFile is implemented, but I doubt it's using mmap at 
all.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A twitter engineer found a bug in the JVM that contributes to GC pause 
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7327) DefaultSolrHighlighter should lazily create a FVH FieldQuery.

2015-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7327?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386742#comment-14386742
 ] 

David Smiley commented on SOLR-7327:


Note that there is even some argument that it shouldn't be using the top-level 
IndexReader when we can supply a TermVectorLeafReader thin wrapper around a 
term vector Terms instance.  I'm unsure what the net perf trade-off would end 
up being.  I do know that the MultiTermQuery wouldn't be limited to the top 
1024 terms (more accurate highlights), and that's good.

side-note: unfortunately the Terms implementation provided by 
CompressingTermVectorsReader is O(N) for many interactions.  If it had an 
FST-based terms dictionary, this would likely not be so; which isn't to say an 
FST is required, just that efficient lookup would have been simple.
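
The wiring I have in mind is something like (a sketch; assumes the field was 
indexed with term vectors):

{code}
Terms tvTerms = reader.getTermVector(docId, fieldName);
LeafReader tvReader = new TermVectorLeafReader(fieldName, tvTerms);
// MultiTermQuery rewriting now sees only this document's terms:
FieldQuery fieldQuery = fvh.getFieldQuery(query, tvReader);
{code}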

 DefaultSolrHighlighter should lazily create a FVH FieldQuery.
 -

 Key: SOLR-7327
 URL: https://issues.apache.org/jira/browse/SOLR-7327
 Project: Solr
  Issue Type: Improvement
  Components: highlighter
Reporter: David Smiley
Assignee: David Smiley
Priority: Minor

 DefaultSolrHighlighter switches between both the standard/default/classic 
 Highlighter and the FastVectorHighlighter, depending on parameters and field 
 options.  In doHighlighting(), it loops over the docs, then loops over the 
 highlighted fields, then decides to use one highlighter or the other.  
 Outside of the doc loop it creates a FastVectorHighlighter instance (cheap) 
 and a related FieldQuery object that may or may not be cheap.  
 fvh.getFieldQuery takes an IndexReader instance and it will be used for 
 certain queries like MultiTermQuery (e.g. wildcards).  We shouldn't be doing 
 this unless we know we'll actually need it -- it should be lazily constructed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Linux (32bit/ibm-j9-jdk7) - Build # 11989 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11989/
Java: 32bit/ibm-j9-jdk7 
-Xjit:exclude={org/apache/lucene/util/fst/FST.pack(IIF)Lorg/apache/lucene/util/fst/FST;}

No tests ran.

Build Log:
[...truncated 301 lines...]
ERROR: Publisher hudson.tasks.junit.JUnitResultArchiver aborted due to exception
hudson.AbortException: No test report files were found. Configuration error?
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:116)
at 
hudson.tasks.junit.JUnitParser$ParseResultCallable.invoke(JUnitParser.java:93)
at hudson.FilePath.act(FilePath.java:989)
at hudson.FilePath.act(FilePath.java:967)
at hudson.tasks.junit.JUnitParser.parseResult(JUnitParser.java:90)
at 
hudson.tasks.junit.JUnitResultArchiver.parse(JUnitResultArchiver.java:120)
at 
hudson.tasks.junit.JUnitResultArchiver.perform(JUnitResultArchiver.java:137)
at 
hudson.tasks.BuildStepCompatibilityLayer.perform(BuildStepCompatibilityLayer.java:74)
at hudson.tasks.BuildStepMonitor$1.perform(BuildStepMonitor.java:20)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.perform(AbstractBuild.java:761)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.performAllBuildSteps(AbstractBuild.java:721)
at hudson.model.Build$BuildExecution.post2(Build.java:183)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.post(AbstractBuild.java:670)
at hudson.model.Run.execute(Run.java:1776)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:89)
at hudson.model.Executor.run(Executor.java:240)
Email was triggered for: Failure - Any
Sending email for trigger: Failure - Any



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org

[jira] [Commented] (SOLR-7309) bin/solr will not run when the solr home path contains a space

2015-03-30 Thread Martijn Koster (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386626#comment-14386626
 ] 

Martijn Koster commented on SOLR-7309:
--

Looks good -- thanks!

 bin/solr will not run when the solr home path contains a space
 --

 Key: SOLR-7309
 URL: https://issues.apache.org/jira/browse/SOLR-7309
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools, Server
Affects Versions: 5.0
Reporter: Martijn Koster
Assignee: Ramkumar Aiyengar
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: SOLR-7309.patch, SOLR-7309.patch, SOLR-7309.patch


 I thought I spotted some unquoted {{$SOLR_TIP}} references in {{bin/solr}} 
 with 5.0.0, prompting me to test:
 {noformat}
 $ mv solr-5.0.0 solr-5.0.0-with' space'
 $ cd solr-5.0.0-with' space'
 $ ./bin/solr -f
 ./bin/solr: line 1161: [: too many arguments
 ./bin/solr: line 1187: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1194: [: /Users/mak/Downloads/solr-5.0.0-with: binary 
 operator expected
 ./bin/solr: line 1327: cd: /Users/mak/Downloads/solr-5.0.0-with: No such file 
 or directory
 Starting Solr on port 8983 from /Users/mak/Downloads/solr-5.0.0-with 
 space/server
 Error: Could not find or load main class space.server.logs.solr_gc.log
 {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7324) No need to call isIndexStale if full copy is already needed

2015-03-30 Thread Stephan Lagraulet (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386632#comment-14386632
 ] 

Stephan Lagraulet commented on SOLR-7324:
-

My pull request for this: 
https://github.com/stephlag/lucene-solr/commit/1409f4ed7827e155677a2933801e1d491f2d72fa


 No need to call isIndexStale if full copy is already needed
 ---

 Key: SOLR-7324
 URL: https://issues.apache.org/jira/browse/SOLR-7324
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 4.10.4
Reporter: Stephan Lagraulet
Assignee: Varun Thacker
 Attachments: SOLR-7324.patch


 During replication, we had a message "File _3ww7_Lucene41_0.tim expected to 
 be 2027667 while it is 1861076" when in fact there was already a match on 
 commit.getGeneration() >= latestGeneration.
 So this extra operation is not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (LUCENE-6308) Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans

2015-03-30 Thread David Smiley (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386629#comment-14386629
 ] 

David Smiley commented on LUCENE-6308:
--

+1 to commit, thanks Paul!

[~jpountz] would you mind commenting on my example above -- right below where I 
mentioned you to get your input?

 Spans to extend DocIdSetIterator; was: SpansEnum, deprecate Spans
 -

 Key: LUCENE-6308
 URL: https://issues.apache.org/jira/browse/LUCENE-6308
 Project: Lucene - Core
  Issue Type: Bug
  Components: core/search
Affects Versions: Trunk
Reporter: Paul Elschot
Priority: Minor
 Attachments: LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308-changeapi.patch, 
 LUCENE-6308-changeapi.patch, LUCENE-6308.patch, LUCENE-6308.patch, 
 LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch, LUCENE-6308.patch


 An alternative for Spans that looks more like PositionsEnum and adds two 
 phase doc id iteration



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (SOLR-7324) No need to call isIndexStale if full copy is already needed

2015-03-30 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7324?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker reassigned SOLR-7324:
---

Assignee: Varun Thacker

 No need to call isIndexStale if full copy is already needed
 ---

 Key: SOLR-7324
 URL: https://issues.apache.org/jira/browse/SOLR-7324
 Project: Solr
  Issue Type: Improvement
  Components: replication (java)
Affects Versions: 4.10.4
Reporter: Stephan Lagraulet
Assignee: Varun Thacker
 Attachments: SOLR-7324.patch


 During replication, we had a message "File _3ww7_Lucene41_0.tim expected to 
 be 2027667 while it is 1861076" when in fact there was already a match on 
 commit.getGeneration() >= latestGeneration.
 So this extra operation is not needed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Created] (LUCENE-6377) Pass previous reader to SearcherFactory

2015-03-30 Thread Simon Willnauer (JIRA)
Simon Willnauer created LUCENE-6377:
---

 Summary: Pass previous reader to SearcherFactory
 Key: LUCENE-6377
 URL: https://issues.apache.org/jira/browse/LUCENE-6377
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.0
Reporter: Simon Willnauer
Priority: Minor
 Fix For: Trunk, 5.1


SearcherFactory is often used, as advertised, for warming newly flushed 
segments or searchers that are opened for the first time (generally where 
merge warmers don't apply). To make this simpler we should pass the previous 
reader to the factory as well, so it knows what needs to be warmed.
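
On the consumer side the change would look roughly like this (the warming 
helper is hypothetical):

{code}
SearcherFactory factory = new SearcherFactory() {
  @Override
  public IndexSearcher newSearcher(IndexReader reader, IndexReader previousReader)
      throws IOException {
    IndexSearcher searcher = new IndexSearcher(reader);
    // previousReader is null on first open; otherwise warm only what changed
    warmChangedSegments(searcher, previousReader);
    return searcher;
  }
};
{code}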



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Assigned] (LUCENE-6377) Pass previous reader to SearcherFactory

2015-03-30 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer reassigned LUCENE-6377:
---

Assignee: Simon Willnauer

 Pass previous reader to SearcherFactory
 ---

 Key: LUCENE-6377
 URL: https://issues.apache.org/jira/browse/LUCENE-6377
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6377.patch


 SearcherFactory is often used, as advertised, for warming newly flushed 
 segments or searchers that are opened for the first time (generally where 
 merge warmers don't apply). To make this simpler we should pass the previous 
 reader to the factory as well, so it knows what needs to be warmed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-6377) Pass previous reader to SearcherFactory

2015-03-30 Thread Simon Willnauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Simon Willnauer updated LUCENE-6377:

Attachment: LUCENE-6377.patch

Here is a patch just for discussion. I want to add tests etc. if folks are OK
with the API change.

 Pass previous reader to SearcherFactory
 ---

 Key: LUCENE-6377
 URL: https://issues.apache.org/jira/browse/LUCENE-6377
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.0
Reporter: Simon Willnauer
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6377.patch


 SearcherFactory is often used as advertised for warming newly flushed
 segments or searchers that are opened for the first time (generally where
 merge warmers don't apply). To make this simpler we should pass the previous
 reader to the factory as well to know what needs to be warmed.






[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386834#comment-14386834
 ] 

Shawn Heisey commented on SOLR-7319:


Good questions, [~jim.ferenczi].  The option does appear to have helped GC
pause times for me, although it's hard to quantify.  I know that the *average*
GC pause time dropped from .10 sec to .06 sec.  This isn't a lot, but when
there are thousands of collections, even a small difference like that adds up.
I wish I had a way to gather median, 75th, 95th, and 99th percentile info on GC
pauses.

If you know something about how Lucene writes to disk that says the writes
aren't mmap when the directory is mmap-based, then you know more than I do.  I
wonder whether heavy mmap *reads* might interfere with writing to the stats
file.


 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A Twitter engineer found a bug in the JVM that contributes to GC pause
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Comment Edited] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386834#comment-14386834
 ] 

Shawn Heisey edited comment on SOLR-7319 at 3/30/15 3:14 PM:
-

Good questions, [~jim.ferenczi].  The option does appear to have helped GC
pause times for me, although it's hard to quantify.  I know that the *average*
GC pause time dropped from .10 sec to .06 sec with Java 7 and G1GC.  This isn't
a lot, but when there are thousands of collections, even a small difference
like that adds up.  I wish I had a way to gather median, 75th, 95th, and 99th
percentile info on GC pauses.  I have some indexes running on Java 8, but they
are not yet big enough or active enough to give me useful GC info.  They are
growing, and will soon be pushed into production.

I do not have any info on this problem with CMS, which is what the bin/solr
script in 5.0 uses.

If you know something about how Lucene writes to disk that says the writes
aren't mmap when the directory is mmap-based, then you know more than I do.  I
wonder whether heavy mmap *reads* might interfere with writing to the stats
file.



was (Author: elyograg):
Good questions, [~jim.ferenczi].  The option does appear to have helped GC
pause times for me, although it's hard to quantify.  I know that the *average*
GC pause time dropped from .10 sec to .06 sec.  This isn't a lot, but when
there are thousands of collections, even a small difference like that adds up.
I wish I had a way to gather median, 75th, 95th, and 99th percentile info on GC
pauses.

If you know something about how Lucene writes to disk that says the writes
aren't mmap when the directory is mmap-based, then you know more than I do.  I
wonder whether heavy mmap *reads* might interfere with writing to the stats
file.


 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A Twitter engineer found a bug in the JVM that contributes to GC pause
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Commented] (LUCENE-6377) Pass previous reader to SearcherFactory

2015-03-30 Thread Robert Muir (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386856#comment-14386856
 ] 

Robert Muir commented on LUCENE-6377:
-

I think it's fine. Docs already link to mergedSegmentWarmer (which is the ideal
way to do this warming in most situations), but at the very least this solves
the first reader problem (an annoyance with mergedSegmentWarmer IMO) because
you will get passed null the first time, so you can warm everything.
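
To illustrate, a minimal sketch of a warming factory; the two-reader signature
is assumed from this discussion, and the MatchAllDocsQuery is just a stand-in
for real warming queries:

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.SearcherFactory;

public class WarmingSearcherFactory extends SearcherFactory {
  @Override
  public IndexSearcher newSearcher(IndexReader reader, IndexReader previousReader)
      throws IOException {
    IndexSearcher searcher = new IndexSearcher(reader);
    if (previousReader == null) {
      // First opening: nothing is warm yet, so warm the whole reader.
      searcher.search(new MatchAllDocsQuery(), 1);
    }
    // Otherwise one could diff reader.leaves() against previousReader.leaves()
    // and warm only the newly flushed or merged segments.
    return searcher;
  }
}
{code}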

 Pass previous reader to SearcherFactory
 ---

 Key: LUCENE-6377
 URL: https://issues.apache.org/jira/browse/LUCENE-6377
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6377.patch


 SearcherFactory is often used as advertised for warming newly flushed
 segments or searchers that are opened for the first time (generally where
 merge warmers don't apply). To make this simpler we should pass the previous
 reader to the factory as well to know what needs to be warmed.






[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-30 Thread Ferenczi Jim (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386786#comment-14386786
 ] 

Ferenczi Jim commented on SOLR-7319:


I am saying this because if we are not sure that Lucene is impacted we should
not add this to the default options. Not being able to do a jstat on a running
node is problematic and will break a lot of monitoring tools built on top of
Solr.

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A Twitter engineer found a bug in the JVM that contributes to GC pause
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.





[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-30 Thread Timothy Potter (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386877#comment-14386877
 ] 

Timothy Potter commented on SOLR-7319:
--

That's correct - MMap is only used for reading the index, so maybe instead of
enabling this by default, we document it in bin/solr.in.(sh|cmd) and users can
turn it on if they so choose. I've already been dinged a few times on adding
Java flags as defaults in those scripts because they helped my prod env but
weren't deemed generally applicable for all Solr users. So I vote for leaving
it out by default, but documenting it as something for operators to enable if
they experience this issue.
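
For reference, the workaround in question is the -XX:+PerfDisableSharedMem JVM
flag from the blog post; a sketch of what an operator opt-in could look like in
bin/solr.in.sh (the GC_TUNE variable is assumed from the 5.x start scripts):

{code}
# Disable the mmapped hsperfdata stats file that can stall the JVM while it
# competes with Lucene's mmap write traffic. Trade-off: jstat/jps can no
# longer attach to the running process.
GC_TUNE="$GC_TUNE -XX:+PerfDisableSharedMem"
{code}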

 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A Twitter engineer found a bug in the JVM that contributes to GC pause
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386883#comment-14386883
 ] 

Shawn Heisey commented on SOLR-7236:


I admit that I do not know what is possible when the servlet container is 
external to Solr, but I've heard that there are many things that we cannot do.  
One of the big ones is that we don't even know the port the container is 
listening on for our webapp, until we actually receive a request.  SolrCloud 
needs this information before requests are received, so we have overrides we 
can use if the port is not 8983, Java doesn't detect the correct IP address, 
etc ... but they are separate config items from what actually configures the 
servlet container, so it's possible to get the config wrong.

Just taking the step of embedding Jetty into the application would give us far 
more capability and consistency than we currently have, but again I am ignorant 
of what kind of limitations we would face, and how that would compare to using 
Netty instead.


 Securing Solr (umbrella issue)
 --

 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl
  Labels: Security

 This is an umbrella issue for adding security to Solr. The discussion here
 should cover real user needs and high-level strategy before deciding on
 implementation details. All work will be done in sub-tasks and linked issues.
 Solr has not traditionally concerned itself with security, and it has been a
 general view among the committers that it may be better to stay out of it to
 avoid "blood on our hands" in this minefield. Still, Solr has lately seen
 SSL support, securing of ZK, and signing of jars, and discussions have begun
 about securing operations in Solr.
 Some of the topics to address are
 * User management (flat file, AD/LDAP etc)
 * Authentication (Admin UI, Admin and data/query operations. Tons of auth 
 protocols: basic, digest, oauth, pki..)
 * Authorization (who can do what with what API, collection, doc)
 * Pluggability (no user's needs are equal)
 * And we could go on and on but this is what we've seen the most demand for






[jira] [Commented] (SOLR-7319) Workaround the Four Month Bug causing GC pause problems

2015-03-30 Thread Shawn Heisey (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7319?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386961#comment-14386961
 ] 

Shawn Heisey commented on SOLR-7319:


[~thelabdude], I am not opposed to a solution based purely on documentation.
Let's get a few more opinions, and if that's the general feeling, I can revert
my patch.


 Workaround the Four Month Bug causing GC pause problems
 -

 Key: SOLR-7319
 URL: https://issues.apache.org/jira/browse/SOLR-7319
 Project: Solr
  Issue Type: Bug
  Components: scripts and tools
Affects Versions: 5.0
Reporter: Shawn Heisey
Assignee: Shawn Heisey
 Fix For: 5.1

 Attachments: SOLR-7319.patch, SOLR-7319.patch, SOLR-7319.patch


 A Twitter engineer found a bug in the JVM that contributes to GC pause
 problems:
 http://www.evanjones.ca/jvm-mmap-pause.html
 Problem summary (in case the blog post disappears):  The JVM calculates 
 statistics on things like garbage collection and writes them to a file in the 
 temp directory using MMAP.  If there is a lot of other MMAP write activity, 
 which is precisely how Lucene accomplishes indexing and merging, it can 
 result in a GC pause because the mmap write to the temp file is delayed.
 We should implement the workaround in the solr start scripts (disable 
 creation of the mmap statistics tempfile) and document the impact in 
 CHANGES.txt.






[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.9.0-ea-b54) - Build # 12157 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12157/
Java: 32bit/jdk1.9.0-ea-b54 -client -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.test

Error Message:
Could not get expected value 'CY val' for path 'params/c' full output:
{"responseHeader":{"status":0,"QTime":1},"params":{"wt":"json","useParams":""},"context":{"webapp":"","path":"/dump","httpMethod":"GET"}}

Stack Trace:
java.lang.AssertionError: Could not get expected value 'CY val' for path
'params/c' full output: {
  "responseHeader":{
    "status":0,
    "QTime":1},
  "params":{
    "wt":"json",
    "useParams":""},
  "context":{
    "webapp":"",
    "path":"/dump",
    "httpMethod":"GET"}}
at 
__randomizedtesting.SeedInfo.seed([AB414CB80A31671D:23157362A4CD0AE5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:406)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:201)
at 
org.apache.solr.handler.TestSolrConfigHandlerCloud.test(TestSolrConfigHandlerCloud.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

Question concerning Arabic analyzer

2015-03-30 Thread Michal Diamantstein
Hi,
I'm a software developer at Genesys and we use Lucene in our product.
Lately we added support for Arabic, which includes indexing (writing and
reading) data in this language.
Using ArabicLetterTokenizer from
http://lucenenet.apache.org/docs/3.0.3/dc/d1c/_arabic_letter_tokenizer_8cs_source.html
I bumped into an issue:
the function IsTokenChar(char c) does not allow numbers while parsing.

/**
 * Allows for Letter category or NonspacingMark category
 * @see org.apache.lucene.analysis.LetterTokenizer#isTokenChar(char)
 */
protected internal override bool IsTokenChar(char c)
{
  return base.IsTokenChar(c) || char.GetUnicodeCategory(c) == 
System.Globalization.UnicodeCategory.NonSpacingMark;
}


What is the reason for not allowing numbers?

The process includes using the analyzer to get all the tokens,
and then building a TermQuery, PhraseQuery, or nothing based on the term count.
While going over the tokens, numbers are dropped.

Thanks in advance.


Michal Diamantstein
Software Engineer
T:  +972 72 220 1866
M: +972 50 424 5533
michal.diamantst...@genesys.com
[JENKINS] Lucene-Solr-NightlyTests-5.x - Build # 802 - Still Failing

2015-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-NightlyTests-5.x/802/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:49688/c8n_1x3_commits_shard1_replica3

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:49688/c8n_1x3_commits_shard1_replica3
at 
__randomizedtesting.SeedInfo.seed([F39736AAE6DD2E4F:7BC30970482143B7]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Created] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause

2015-03-30 Thread Varun Thacker (JIRA)
Varun Thacker created LUCENE-6378:
-

 Summary: Fix RuntimeExceptions that are thrown without the root 
cause
 Key: LUCENE-6378
 URL: https://issues.apache.org/jira/browse/LUCENE-6378
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Varun Thacker


In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown 
without wrapping the root cause.

We should fix them to wrap the root cause before throwing it.
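
For example, a generic illustration of the pattern (not one of the actual call
sites; the resource-loading helper is hypothetical):

{code}
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

class WrapRootCause {
  static InputStream open(String path) {
    try {
      return Files.newInputStream(Paths.get(path));
    } catch (IOException e) {
      // Bad: throw new RuntimeException("could not open " + path);
      //      (the original stack trace and message are lost)
      // Good: wrap the root cause so it appears in the stack trace.
      throw new RuntimeException("could not open " + path, e);
    }
  }
}
{code}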









[jira] [Commented] (LUCENE-6377) Pass previous reader to SearcherFactory

2015-03-30 Thread Adrien Grand (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387097#comment-14387097
 ] 

Adrien Grand commented on LUCENE-6377:
--

+1

 Pass previous reader to SearcherFactory
 ---

 Key: LUCENE-6377
 URL: https://issues.apache.org/jira/browse/LUCENE-6377
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6377.patch


 SearcherFactory is often used as advertised for warming newly flushed
 segments or searchers that are opened for the first time (generally where
 merge warmers don't apply). To make this simpler we should pass the previous
 reader to the factory as well to know what needs to be warmed.






RE: Make Solr's core admin API internal-only/implementation detail?

2015-03-30 Thread Reitzel, Charles
As a Solr user, my feeling is that the proposal would be a significant
improvement in the product design.  A significant piece of related work, imo,
would be to update all of the examples and getting-started docs.  Likewise,
isn't the plan for migration from 4.x to 5.x (wrt maintenance, monitoring,
ETL, etc.) affected by this issue? I feel clarity and stability in these
areas would help adoption of 5.x.

Re: SOLR-6278, my feeling is that the term "core" could just go away with the
public API.  There should be a single, clear-cut way to address individual
replicas, of any given shard, within any collection.  Even if there is only one
replica (the leader) and only one shard, the addressing scheme and terminology
remain the same.

If there are functional gaps in the Collections API, fill them as needed -
perhaps by delegating to the internal Admin service at the node level.  But
please keep the public parameters and terminology consistent.  There should be
only one way to do it ...

On Saturday, March 28, 2015 9:17 PM, Yonik Seeley [mailto:ysee...@gmail.com] 
wrote:

 On Sat, Mar 28, 2015 at 2:27 PM, Erick Erickson erickerick...@gmail.com 
 wrote:
  Fold any functionality we still want
  to support at a user level into the collections API. I mean a core on
  a machine is really just a single-node collection sans Zookeeper,
  right?

 +1, this is the position I've advocated in the past as well.

 -Yonik

On Sunday, March 29, 2015 7:53 PM, Ramkumar R. Aiyengar 
[mailto:andyetitmo...@gmail.com] wrote:

 Sounds good to me, except that we have to reconcile some of the objections in 
 the past 
 to collection API additions, like with 
 https://issues.apache.org/jira/SOLR-6278.  In short, 
 collection API provides you a way to operate on collections. Operationally 
 you would often
 also want functionality based off physical location (e.g. I need to 
 decommission this machine, 
 so boot and delete everything on it), core admin appeared to be the place for 
 it.




[jira] [Commented] (LUCENE-6377) Pass previous reader to SearcherFactory

2015-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387026#comment-14387026
 ] 

Michael McCandless commented on LUCENE-6377:


+1

 Pass previous reader to SearcherFactory
 ---

 Key: LUCENE-6377
 URL: https://issues.apache.org/jira/browse/LUCENE-6377
 Project: Lucene - Core
  Issue Type: Improvement
  Components: core/search
Affects Versions: 5.0
Reporter: Simon Willnauer
Assignee: Simon Willnauer
Priority: Minor
 Fix For: Trunk, 5.1

 Attachments: LUCENE-6377.patch


 SearcherFactory is often used as advertised for warming newly flushed
 segments or searchers that are opened for the first time (generally where
 merge warmers don't apply). To make this simpler we should pass the previous
 reader to the factory as well to know what needs to be warmed.






Re: Question concerning Arabic analyzer

2015-03-30 Thread Robert Muir
On Mon, Mar 30, 2015 at 12:14 PM, Michal Diamantstein 
michal.diamantst...@genesys.com wrote:

  *What is the reason for not allowing numbers?*

No reason, it was just a simple tokenizer that worked for Arabic.

Since Lucene 3.1, StandardTokenizer can tokenize Arabic (and work with
numbers and other stuff), and ArabicAnalyzer uses that instead. This
tokenizer was then deprecated. See
https://issues.apache.org/jira/browse/LUCENE-2747
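
For example, a minimal Java Lucene 5.x sketch (the Lucene.NET API mirrors it)
showing that ArabicAnalyzer, which now builds on StandardTokenizer, keeps
digits in the token stream; the sample text is a placeholder:

{code}
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.ar.ArabicAnalyzer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

public class ArabicTokens {
  public static void main(String[] args) throws Exception {
    Analyzer analyzer = new ArabicAnalyzer();
    // Mixed Arabic text and digits; the digits survive tokenization.
    try (TokenStream ts = analyzer.tokenStream("f", "غرفة 123")) {
      CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
      ts.reset();
      while (ts.incrementToken()) {
        System.out.println(term.toString());
      }
      ts.end();
    }
  }
}
{code}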


Re: Make Solr's core admin API internal-only/implementation detail?

2015-03-30 Thread Erick Erickson
Charles:

And it's especially trappy that the admin UI has this "cores" page.
Another very significant bit of work would be revamping this too.
There's work being done to move the Solr admin UI to Angular JS; maybe
that'll be the avenue for this switch too.

Thanks for your comments!

On Mon, Mar 30, 2015 at 10:19 AM, Reitzel, Charles
charles.reit...@tiaa-cref.org wrote:
 As a Solr user, my feeling is that the proposal would be a significant
 improvement in the product design.  A significant piece of related work, imo,
 would be to update all of the examples and getting-started docs.  Likewise,
 isn't the plan for migration from 4.x to 5.x (wrt maintenance, monitoring,
 ETL, etc.) affected by this issue? I feel clarity and stability in these
 areas would help adoption of 5.x.

 Re: SOLR-6278, my feeling is that the term "core" could just go away with the
 public API.  There should be a single, clear-cut way to address individual
 replicas, of any given shard, within any collection.  Even if there is only
 one replica (the leader) and only one shard, the addressing scheme and
 terminology remain the same.

 If there are functional gaps in the Collections API, fill them as needed -
 perhaps by delegating to the internal Admin service at the node level.  But
 please keep the public parameters and terminology consistent.  There should
 be only one way to do it ...

 On Saturday, March 28, 2015 9:17 PM, Yonik Seeley [mailto:ysee...@gmail.com] 
 wrote:

 On Sat, Mar 28, 2015 at 2:27 PM, Erick Erickson erickerick...@gmail.com 
 wrote:
  Fold any functionality we still want
  to support at a user level into the collections API. I mean a core on
  a machine is really just a single-node collection sans Zookeeper,
  right?

 +1, this is the position I've advocated in the past as well.

 -Yonik

 On Sunday, March 29, 2015 7:53 PM, Ramkumar R. Aiyengar 
 [mailto:andyetitmo...@gmail.com] wrote:

 Sounds good to me, except that we have to reconcile some of the objections 
 in the past
 to collection API additions, like with 
 https://issues.apache.org/jira/SOLR-6278.  In short,
 collection API provides you a way to operate on collections. Operationally 
 you would often
 also want functionality based off physical location (e.g. I need to 
 decommission this machine,
 so boot and delete everything on it), core admin appeared to be the place 
 for it.






[jira] [Created] (LUCENE-6379) IndexWriter's delete-by-query should optimize/specialize MatchAllDocsQuery

2015-03-30 Thread Michael McCandless (JIRA)
Michael McCandless created LUCENE-6379:
--

 Summary: IndexWriter's delete-by-query should optimize/specialize 
MatchAllDocsQuery
 Key: LUCENE-6379
 URL: https://issues.apache.org/jira/browse/LUCENE-6379
 Project: Lucene - Core
  Issue Type: Improvement
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: Trunk, 5.1


We can short-circuit this to just IW.deleteAll (Solr already does so I think).  
This also has the nice side effect of clearing Lucene's low-schema (FieldInfos).
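
A hypothetical sketch of the short-circuit, written as an external helper since
the real change would live inside IndexWriter#deleteDocuments(Query...):

{code}
import java.io.IOException;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.Query;

class DeleteByQueryShortCircuit {
  static void delete(IndexWriter writer, Query... queries) throws IOException {
    for (Query query : queries) {
      if (query.getClass() == MatchAllDocsQuery.class) {
        // Degenerates to dropping everything, including buffered docs
        // and the FieldInfos "low schema".
        writer.deleteAll();
        return;
      }
    }
    writer.deleteDocuments(queries);  // the regular delete-by-query path
  }
}
{code}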






[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2860 - Still Failing

2015-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2860/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:51335/c8n_1x3_commits_shard1_replica3

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: http://127.0.0.1:51335/c8n_1x3_commits_shard1_replica3
at 
__randomizedtesting.SeedInfo.seed([FC97E93C8C0589AB:74C3D6E622F9E453]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Updated] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause

2015-03-30 Thread Varun Thacker (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Varun Thacker updated LUCENE-6378:
--
Fix Version/s: 5.1
   Trunk

 Fix RuntimeExceptions that are thrown without the root cause
 

 Key: LUCENE-6378
 URL: https://issues.apache.org/jira/browse/LUCENE-6378
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Varun Thacker
 Fix For: Trunk, 5.1


 In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown 
 without wrapping the root cause.
 We should fix them to wrap the root cause before throwing it.






[jira] [Commented] (SOLR-6845) Add buildOnStartup option for suggesters

2015-03-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14386943#comment-14386943
 ] 

Tomás Fernández Löbbe commented on SOLR-6845:
-

bq. We should remove this comment from the solrconfig.xml file right?
I was sure I had removed the comment! Thanks for pointing that out. I'll
remove it.
bq. I have made the required change in the ref guide as well.
Thanks, I didn't do this initially because by the time I made this change the
docs were still about 5.0, but now was a good time to fix that.
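
For context, the option this issue adds looks like the following in
solrconfig.xml (a sketch; the suggester and field names are placeholders):

{code}
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">title</str>
    <!-- avoid the blocking multi-minute rebuild described below -->
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
{code}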

 Add buildOnStartup option for suggesters
 

 Key: SOLR-6845
 URL: https://issues.apache.org/jira/browse/SOLR-6845
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Erick Erickson
 Fix For: Trunk, 5.1

 Attachments: SOLR-6845.patch, SOLR-6845.patch, SOLR-6845.patch, 
 SOLR-6845_solrconfig.patch, tests-failures.txt


 SOLR-6679 was filed to track the investigation into the following problem...
 {panel}
 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.
 ...
 This is what I did:
 1) indexed 10M very small docs (only takes a few minutes).
 2) shut down Solr
 3) start up Solr and watch it be unresponsive for over 4 minutes!
 I didn't even use any of the fields specified in the suggester config and I 
 never called the suggest request handler.
 {panel}
 ..but ultimately focused on removing/disabling the suggester from the sample 
 configs.
 Opening this new issue to focus on actually trying to identify the root 
 problem  fix it.






[jira] [Commented] (LUCENE-6378) Fix RuntimeExceptions that are thrown without the root cause

2015-03-30 Thread Michael McCandless (JIRA)

[ 
https://issues.apache.org/jira/browse/LUCENE-6378?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387012#comment-14387012
 ] 

Michael McCandless commented on LUCENE-6378:


+1

 Fix RuntimeExceptions that are thrown without the root cause
 

 Key: LUCENE-6378
 URL: https://issues.apache.org/jira/browse/LUCENE-6378
 Project: Lucene - Core
  Issue Type: Bug
Reporter: Varun Thacker
 Fix For: Trunk, 5.1


 In the lucene/solr codebase I can see 15 RuntimeExceptions that are thrown 
 without wrapping the root cause.
 We should fix them to wrap the root cause before throwing it.






[jira] [Commented] (SOLR-7325) Change Slice state into enum

2015-03-30 Thread Shalin Shekhar Mangar (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387354#comment-14387354
 ] 

Shalin Shekhar Mangar commented on SOLR-7325:
-

Thanks Shai!

bq. Slice.State declares 4 values: ACTIVE, INACTIVE, CONSTRUCTION, RECOVERY. 
Are these all the states or did I miss some?

No, those are the only ones we have right now.

bq. I documented these very briefly, mostly from what I understood from the 
code, and some chats I had w/ Anshum Gupta. I would definitely appreciate a 
review on this!

Looks good. We can expand on this a bit, e.g. a shard in construction or
recovery state receives indexing requests from the parent shard leader but does
not participate in distributed search.

bq. Backwards-compatibility wise, is it OK if we change Slice.getState() to 
return the enum? It's an API-break, but I assume it's pretty expert and the 
migration is really easy.

We can change it to an enum everywhere. These are internal/expert APIs so we 
have leeway here.

bq. Is it OK to just remove them, or should I deprecate them like I did for 
STATE?

+1 to just remove them.
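
A sketch of the enum under discussion, with the brief per-state docs suggested
above (the wording is assumed, not taken from the patch):

{code}
public enum State {
  /** The slice serves queries and accepts indexing requests. */
  ACTIVE,
  /** The slice is retired, e.g. a parent slice after a completed split. */
  INACTIVE,
  /** A sub-slice being built by a split; it receives indexing requests from
   *  the parent shard leader but does not participate in distributed search. */
  CONSTRUCTION,
  /** A slice catching up with its parent after construction; like
   *  CONSTRUCTION, it indexes but is not yet searched. */
  RECOVERY
}
{code}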

 Change Slice state into enum
 

 Key: SOLR-7325
 URL: https://issues.apache.org/jira/browse/SOLR-7325
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Shai Erera
 Attachments: SOLR-7325.patch


 Slice state is currently interacted with as a string. It is IMO not trivial
 to understand which values it can be compared to, in part because the Replica
 and Slice states are located in different classes, some repeating the same
 constant names and values.
 Also, it's not very clear when a Slice gets into which state and what
 that means.
 I think if it's an enum, and documented briefly in the code, it would help
 interacting with it through code. I don't mind if we include more extensive
 documentation in the reference guide / wiki and refer people there for more
 details.






[jira] [Commented] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387207#comment-14387207
 ] 

ASF subversion and git services commented on SOLR-7082:
---

Commit 1670176 from [~joel.bernstein] in branch 'dev/trunk'
[ https://svn.apache.org/r1670176 ]

SOLR-7082: Syntactic sugar for metric gathering

 Streaming Aggregation for SolrCloud
 ---

 Key: SOLR-7082
 URL: https://issues.apache.org/jira/browse/SOLR-7082
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Joel Bernstein
 Fix For: Trunk, 5.1

 Attachments: SOLR-7082.patch, SOLR-7082.patch, SOLR-7082.patch, 
 SOLR-7082.patch, SOLR-7082.patch


 This issue provides a general purpose streaming aggregation framework for 
 SolrCloud. An overview of how it works can be found at this link:
 http://heliosearch.org/streaming-aggregation-for-solrcloud/
 This functionality allows SolrCloud users to perform operations that were
 typically done using map/reduce or a parallel computing platform.
 Here is a brief explanation of how the framework works:
 There is a new Solrj *io* package found in: *org.apache.solr.client.solrj.io*
 Key classes:
 *Tuple*: Abstracts a document in a search result as a Map of key/value pairs.
 *TupleStream*: is the base class for all of the streams. Abstracts search 
 results as a stream of Tuples.
 *SolrStream*: connects to a single Solr instance. You call the read() method 
 to iterate over the Tuples.
 *CloudSolrStream*: connects to a SolrCloud collection and merges the results 
 based on the sort param. The merge takes place in CloudSolrStream itself.
 *Decorator Streams*: wrap other streams to perform operations on the streams.
 Some examples are the UniqueStream, MergeStream and ReducerStream.
 *Going parallel with the ParallelStream and  Worker Collections*
 The io package also contains the *ParallelStream*, which wraps a TupleStream 
 and sends it to N worker nodes. The workers are chosen from a SolrCloud 
 collection. These Worker Collections don't have to hold any data, they can 
 just be used to execute TupleStreams.
 *The StreamHandler*
 The Worker nodes have a new RequestHandler called the *StreamHandler*. The 
 ParallelStream serializes a TupleStream, before it is opened, and sends it to 
 the StreamHandler on the Worker Nodes.
 The StreamHandler on each Worker node deserializes the TupleStream, opens the 
 stream, iterates the tuples and streams them back to the ParallelStream. The 
 ParallelStream performs the final merge of Metrics and can be wrapped by 
 other Streams to handle the final merged TupleStream.
 *Sorting and Partitioning search results (Shuffling)*
 Each Worker node receives 1/N of the document results via shuffling. There is a
 partitionKeys parameter that can be included with each TupleStream to 
 ensure that Tuples with the same partitionKeys are shuffled to the same 
 Worker. The actual partitioning is done with a filter query using the 
 HashQParserPlugin. The DocSets from the HashQParserPlugin can be cached in 
 the filter cache which provides extremely high performance hash partitioning. 
 Many of the stream transformations rely on the sort order of the TupleStreams 
 (GroupByStream, MergeJoinStream, UniqueStream, FilterStream etc..). To 
 accommodate this the search results can be sorted by specific keys. The 
 /export handler can be used to sort entire result sets efficiently.
 By specifying the sort order of the results and the partition keys, documents 
 will be sorted and partitioned inside of the search engine. So when the 
 tuples hit the network they are already sorted, partitioned and headed 
 directly to the correct worker node.
 *Extending The Framework*
 To extend the framework you create new TupleStream Decorators, that gather 
 custom metrics or perform custom stream transformations.
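
As a usage illustration, a minimal sketch of opening a CloudSolrStream and
iterating its Tuples (the parameter-map constructor is assumed from this patch;
zkHost, collection and field names are placeholders):

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.stream.CloudSolrStream;
import org.apache.solr.client.solrj.io.stream.TupleStream;

public class StreamRead {
  public static void main(String[] args) throws Exception {
    Map<String, String> params = new HashMap<>();
    params.put("q", "*:*");
    params.put("fl", "id,a_s");
    params.put("sort", "a_s asc");  // the merge order used by CloudSolrStream

    TupleStream stream = new CloudSolrStream("localhost:9983", "collection1", params);
    try {
      stream.open();
      for (Tuple tuple = stream.read(); !tuple.EOF; tuple = stream.read()) {
        System.out.println(tuple.getString("id"));
      }
    } finally {
      stream.close();
    }
  }
}
{code}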






[jira] [Resolved] (SOLR-7272) Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER consistently

2015-03-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-7272.
--
   Resolution: Fixed
Fix Version/s: 5.1
   Trunk

 Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER 
 consistently
 

 Key: SOLR-7272
 URL: https://issues.apache.org/jira/browse/SOLR-7272
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: Trunk, 5.1

 Attachments: SOLR-7272.patch


 This is something I noticed while working on SOLR-7271.  There are two 
 different router constants:
 #1. OverseerCollectionProcessor.ROUTER: represents the argument(s) to the 
 OverseerCollectionProcessor
 #2 DocCollection.DOC_ROUTER: represents the router information as stored in 
 the clusterstate in ZK
 But these are sometimes used in other contexts, which can cause issues if
 the constant values are not the same (as in SOLR-7271).






[JENKINS] Lucene-Solr-5.x-Linux (32bit/jdk1.9.0-ea-b54) - Build # 11992 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Linux/11992/
Java: 32bit/jdk1.9.0-ea-b54 -client -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.core.TestDynamicLoading.testDynamicLoading

Error Message:
Could not get expected value 'X val changed' for path 'x' full output:
{"responseHeader":{"status":0,"QTime":0},"params":{"wt":"json"},"context":{"webapp":"/mqgug/m","path":"/test1","httpMethod":"GET"},"class":"org.apache.solr.core.BlobStoreTestRequestHandler","x":"X val"}

Stack Trace:
java.lang.AssertionError: Could not get expected value 'X val changed' for
path 'x' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "params":{"wt":"json"},
  "context":{
    "webapp":"/mqgug/m",
    "path":"/test1",
    "httpMethod":"GET"},
  "class":"org.apache.solr.core.BlobStoreTestRequestHandler",
  "x":"X val"}
at 
__randomizedtesting.SeedInfo.seed([2748C046B27E2E14:FF05ED1145A38BB4]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at 
org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:406)
at 
org.apache.solr.core.TestDynamicLoading.testDynamicLoading(TestDynamicLoading.java:260)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:502)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4616 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4616/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseParallelGC

1 tests failed.
FAILED:  org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test

Error Message:
There were too many update fails (21 > 20) - we expect it can happen, but 
shouldn't easily

Stack Trace:
java.lang.AssertionError: There were too many update fails (21 > 20) - we 
expect it can happen, but shouldn't easily
at 
__randomizedtesting.SeedInfo.seed([11F92DC63323C16C:99AD121C9DDFAC94]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.junit.Assert.assertFalse(Assert.java:68)
at 
org.apache.solr.cloud.ChaosMonkeyNothingIsSafeTest.test(ChaosMonkeyNothingIsSafeTest.java:230)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:54)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:55)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

[jira] [Commented] (SOLR-7272) Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER consistently

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387231#comment-14387231
 ] 

ASF subversion and git services commented on SOLR-7272:
---

Commit 1670178 from gcha...@apache.org in branch 'dev/trunk'
[ https://svn.apache.org/r1670178 ]

SOLR-7272: Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER 
consistently

 Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER 
 consistently
 

 Key: SOLR-7272
 URL: https://issues.apache.org/jira/browse/SOLR-7272
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7272.patch


 This is something I noticed while working on SOLR-7271.  There are two 
 different router constants:
 #1. OverseerCollectionProcessor.ROUTER: represents the argument(s) to the 
 OverseerCollectionProcessor
 #2. DocCollection.DOC_ROUTER: represents the router information as stored in 
 the clusterstate in ZK
 But these are sometimes used in other contexts, which can cause issues if 
 the constant values are not the same (as in SOLR-7271).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7272) Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER consistently

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7272?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387233#comment-14387233
 ] 

ASF subversion and git services commented on SOLR-7272:
---

Commit 1670179 from gcha...@apache.org in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1670179 ]

SOLR-7272: Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER 
consistently

 Use OverseerCollectionProcessor.ROUTER and DocCollection.DOC_ROUTER 
 consistently
 

 Key: SOLR-7272
 URL: https://issues.apache.org/jira/browse/SOLR-7272
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Attachments: SOLR-7272.patch


 This is something I noticed while working on SOLR-7271.  There are two 
 different router constants:
 #1. OverseerCollectionProcessor.ROUTER: represents the argument(s) to the 
 OverseerCollectionProcessor
 #2. DocCollection.DOC_ROUTER: represents the router information as stored in 
 the clusterstate in ZK
 But these are sometimes used in other contexts, which can cause issues if 
 the constant values are not the same (as in SOLR-7271).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7271) 4.4 client to 4.5+ server compatibility Issue due to DocRouter format

2015-03-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan updated SOLR-7271:
-
Attachment: SOLR-7271ClusterState.patch

 4.4 client to 4.5+ server compatibility Issue due to DocRouter format
 -

 Key: SOLR-7271
 URL: https://issues.apache.org/jira/browse/SOLR-7271
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 4.10.5

 Attachments: SOLR-7271.patch, SOLR-7271ClusterState.patch


 SOLR-4221 changed the router format from e.g.:
 {code}
 ...
 "router":"compositeId",
 ...
 {code}
 to:
 {code}
 ...
 "router":{"name":"compositeId"},
 ...
 {code}
 This later commit: 
 https://github.com/apache/lucene-solr/commit/54a94eedfd5651bb088e8cbd132393b771f5f5c2
  added backwards compatibility in the sense that the server can read the old 
 router format.   But the old 4.4 client can't read the new format, e.g. you 
 get:
 {code}
 org.apache.solr.common.SolrException: Unknown document router 
 '{name=compositeId}'
   at 
 org.apache.solr.common.cloud.DocRouter.getDocRouter(DocRouter.java:46)
   at 
 org.apache.solr.common.cloud.ClusterState.collectionFromObjects(ClusterState.java:289)
   at org.apache.solr.common.cloud.ClusterState.load(ClusterState.java:257)
   at org.apache.solr.common.cloud.ClusterState.load(ClusterState.java:233)
   at 
 org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:357)
   at com.cloudera.itest.search.util.ZkExecutor.init(ZkExecutor.java:39)
   at 
 com.cloudera.itest.search.util.SearchTestBase.getZkExecutor(SearchTestBase.java:648)
   at 
 com.cloudera.itest.search.util.SearchTestBase.setupSolrURL(SearchTestBase.java:584)
   at 
 com.cloudera.itest.search.util.SearchTestBase.setupEnvironment(SearchTestBase.java:371)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 {code}
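
For clients stuck on 4.4, a minimal defensive-parsing sketch (a hypothetical helper, 
not the actual DocRouter code) that tolerates both the old string form and the new 
map form of the router entry:

{code}
import java.util.Map;

public class RouterNameUtil {
  // Hypothetical helper: extract the router name whether clusterstate.json
  // stores "router":"compositeId" (pre-4.5) or "router":{"name":"compositeId"} (4.5+).
  public static String routerName(Object routerSpec) {
    if (routerSpec instanceof String) {
      return (String) routerSpec;                         // old clusterstate format
    }
    if (routerSpec instanceof Map) {
      Object name = ((Map<?, ?>) routerSpec).get("name"); // new clusterstate format
      if (name != null) {
        return name.toString();
      }
    }
    throw new IllegalArgumentException("Unknown document router '" + routerSpec + "'");
  }
}
{code}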



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Resolved] (SOLR-7271) 4.4 client to 4.5+ server compatibility Issue due to DocRouter format

2015-03-30 Thread Gregory Chanan (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gregory Chanan resolved SOLR-7271.
--
Resolution: Won't Fix

 4.4 client to 4.5+ server compatibility Issue due to DocRouter format
 -

 Key: SOLR-7271
 URL: https://issues.apache.org/jira/browse/SOLR-7271
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 4.10.5

 Attachments: SOLR-7271.patch, SOLR-7271ClusterState.patch


 SOLR-4221 changed the router format from e.g.:
 {code}
 ...
 "router":"compositeId",
 ...
 {code}
 to:
 {code}
 ...
 "router":{"name":"compositeId"},
 ...
 {code}
 This later commit: 
 https://github.com/apache/lucene-solr/commit/54a94eedfd5651bb088e8cbd132393b771f5f5c2
  added backwards compatibility in the sense that the server can read the old 
 router format.   But the old 4.4 client can't read the new format, e.g. you 
 get:
 {code}
 org.apache.solr.common.SolrException: Unknown document router 
 '{name=compositeId}'
   at 
 org.apache.solr.common.cloud.DocRouter.getDocRouter(DocRouter.java:46)
   at 
 org.apache.solr.common.cloud.ClusterState.collectionFromObjects(ClusterState.java:289)
   at org.apache.solr.common.cloud.ClusterState.load(ClusterState.java:257)
   at org.apache.solr.common.cloud.ClusterState.load(ClusterState.java:233)
   at 
 org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:357)
   at com.cloudera.itest.search.util.ZkExecutor.init(ZkExecutor.java:39)
   at 
 com.cloudera.itest.search.util.SearchTestBase.getZkExecutor(SearchTestBase.java:648)
   at 
 com.cloudera.itest.search.util.SearchTestBase.setupSolrURL(SearchTestBase.java:584)
   at 
 com.cloudera.itest.search.util.SearchTestBase.setupEnvironment(SearchTestBase.java:371)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6845) Add buildOnStartup option for suggesters

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387266#comment-14387266
 ] 

ASF subversion and git services commented on SOLR-6845:
---

Commit 1670183 from [~tomasflobbe] in branch 'dev/trunk'
[ https://svn.apache.org/r1670183 ]

SOLR-6845: Updated SuggestComponent comments in techproducts example configset

 Add buildOnStartup option for suggesters
 

 Key: SOLR-6845
 URL: https://issues.apache.org/jira/browse/SOLR-6845
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Erick Erickson
 Fix For: Trunk, 5.1

 Attachments: SOLR-6845.patch, SOLR-6845.patch, SOLR-6845.patch, 
 SOLR-6845_solrconfig.patch, tests-failures.txt


 SOLR-6679 was filed to track the investigation into the following problem...
 {panel}
 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.
 ...
 This is what I did:
 1) indexed 10M very small docs (only takes a few minutes).
 2) shut down Solr
 3) start up Solr and watch it be unresponsive for over 4 minutes!
 I didn't even use any of the fields specified in the suggester config and I 
 never called the suggest request handler.
 {panel}
 ...but ultimately focused on removing/disabling the suggester from the sample 
 configs.
 Opening this new issue to focus on actually trying to identify the root 
 problem & fix it.
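
A hedged sketch of the direction the fix takes (the component layout and field 
names here are illustrative, not the shipped example config): the suggester 
definition in solrconfig.xml gains a buildOnStartup flag, so a large index no 
longer triggers a blocking suggester build at core load; the dictionary is built 
on demand with suggest.build=true instead.

{code}
<searchComponent name="suggest" class="solr.SuggestComponent">
  <lst name="suggester">
    <str name="name">mySuggester</str>
    <str name="lookupImpl">FuzzyLookupFactory</str>
    <str name="dictionaryImpl">DocumentDictionaryFactory</str>
    <str name="field">cat</str>
    <str name="suggestAnalyzerFieldType">string</str>
    <!-- skip the expensive build at startup; build on demand instead -->
    <str name="buildOnStartup">false</str>
  </lst>
</searchComponent>
{code}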



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-6845) Add buildOnStartup option for suggesters

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-6845?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387328#comment-14387328
 ] 

ASF subversion and git services commented on SOLR-6845:
---

Commit 1670186 from [~tomasflobbe] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1670186 ]

SOLR-6845: Updated SuggestComponent comments in techproducts example configset

 Add buildOnStartup option for suggesters
 

 Key: SOLR-6845
 URL: https://issues.apache.org/jira/browse/SOLR-6845
 Project: Solr
  Issue Type: Bug
Reporter: Hoss Man
Assignee: Erick Erickson
 Fix For: Trunk, 5.1

 Attachments: SOLR-6845.patch, SOLR-6845.patch, SOLR-6845.patch, 
 SOLR-6845_solrconfig.patch, tests-failures.txt


 SOLR-6679 was filed to track the investigation into the following problem...
 {panel}
 The stock solrconfig provides a bad experience with a large index... start up 
 Solr and it will spin at 100% CPU for minutes, unresponsive, while it 
 apparently builds a suggester index.
 ...
 This is what I did:
 1) indexed 10M very small docs (only takes a few minutes).
 2) shut down Solr
 3) start up Solr and watch it be unresponsive for over 4 minutes!
 I didn't even use any of the fields specified in the suggester config and I 
 never called the suggest request handler.
 {panel}
 ...but ultimately focused on removing/disabling the suggester from the sample 
 configs.
 Opening this new issue to focus on actually trying to identify the root 
 problem & fix it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-2305) DataImportScheduler

2015-03-30 Thread Marko Bonaci (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387224#comment-14387224
 ] 

Marko Bonaci commented on SOLR-2305:


Just to add some additional info (in case someone stumbles here while searching 
for a solution).  
Since _Google Code_ is on its way out, I transferred the repo to GitHub (source, 
drop-in jar and usage docs).  
Here: https://github.com/mbonaci/solr-data-import-scheduler

 DataImportScheduler
 ---

 Key: SOLR-2305
 URL: https://issues.apache.org/jira/browse/SOLR-2305
 Project: Solr
  Issue Type: New Feature
Affects Versions: 4.0-ALPHA
Reporter: Bill Bell
 Fix For: 4.9, Trunk

 Attachments: SOLR-2305-1.diff, patch.txt


 Marko Bonaci has updated the WIKI page to add the DataImportScheduler, but I 
 cannot find a JIRA ticket for it?
 http://wiki.apache.org/solr/DataImportHandler
 Do we have a ticket so the code can be tracked?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (32bit/jdk1.8.0_60-ea-b06) - Build # 12158 - Still Failing!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12158/
Java: 32bit/jdk1.8.0_60-ea-b06 -server -XX:+UseConcMarkSweepGC

1 tests failed.
FAILED:  org.apache.solr.cloud.TestRebalanceLeaders.test

Error Message:
No live SolrServers available to handle this request:[https://127.0.0.1:42167, 
https://127.0.0.1:34623, https://127.0.0.1:50611, https://127.0.0.1:52250, 
https://127.0.0.1:58971]

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: No live SolrServers available 
to handle this request:[https://127.0.0.1:42167, https://127.0.0.1:34623, 
https://127.0.0.1:50611, https://127.0.0.1:52250, https://127.0.0.1:58971]
at 
__randomizedtesting.SeedInfo.seed([93FCD366967984BD:1BA8ECBC3885E945]:0)
at 
org.apache.solr.client.solrj.impl.LBHttpSolrClient.request(LBHttpSolrClient.java:349)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.sendRequest(CloudSolrClient.java:1067)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.requestWithRetryOnStaleState(CloudSolrClient.java:839)
at 
org.apache.solr.client.solrj.impl.CloudSolrClient.request(CloudSolrClient.java:782)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1220)
at 
org.apache.solr.cloud.TestRebalanceLeaders.issueCommands(TestRebalanceLeaders.java:280)
at 
org.apache.solr.cloud.TestRebalanceLeaders.rebalanceLeaderTest(TestRebalanceLeaders.java:107)
at 
org.apache.solr.cloud.TestRebalanceLeaders.test(TestRebalanceLeaders.java:73)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 

[jira] [Commented] (SOLR-7214) JSON Facet API

2015-03-30 Thread Crawdaddy (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387176#comment-14387176
 ] 

Crawdaddy commented on SOLR-7214:
-

I encountered a bug/incompatibility with JSON faceting on (multi-valued) fields 
w/DocValues.  Multi-valued is in parentheses since I don't know whether the bug 
is exclusive to that or not.  The issue seems similar to SOLR-6024.

My use case requires the Facet API/Analytics capabilities, and I both require 
and desire DocValues due to the high cardinality of the values I store, and the 
associated performance increase I get when faceting on them.  Without 
DocValues, I get the dreaded "Too many values for UnInvertedField faceting on 
field" error.

Possible there's a quick fix you could propose, [~yo...@apache.org], that I 
could back-port into Heliosearch until this stuff is available in Solr?

Example schema field:
<field name="keywords" type="string" indexed="true" stored="true" 
multiValued="true" docValues="true"/>

Traditional Solr faceting on this field works:
[...]/select?rows=0&q=toyota&facet=true&facet.field=keywords

JSON faceting returns "Type mismatch: keywords was indexed as SORTED_SET":
[...]/select?rows=0&q=toyota&json.facet={keywords:{terms:{field:keywords}}}

ERROR - 2015-03-30 10:52:05.806; org.apache.solr.common.SolrException; 
org.apache.solr.common.SolrException: Type mismatch: keywords was indexed as 
SORTED_SET
at org.apache.solr.search.facet.UnInvertedField.<init>(UnInvertedField.java:201)
at 
org.apache.solr.search.facet.UnInvertedField.getUnInvertedField(UnInvertedField.java:964)
at 
org.apache.solr.search.facet.FacetFieldProcessorUIF.findStartAndEndOrds(FacetField.java:463)
at 
org.apache.solr.search.facet.FacetFieldProcessorFCBase.getFieldCacheCounts(FacetField.java:203)
at 
org.apache.solr.search.facet.FacetFieldProcessorFCBase.process(FacetField.java:186)
at 
org.apache.solr.search.facet.FacetProcessor.fillBucketSubs(FacetRequest.java:176)
at 
org.apache.solr.search.facet.FacetProcessor.processSubs(FacetRequest.java:288)
at org.apache.solr.search.facet.FacetProcessor.fillBucket(FacetRequest.java:266)
at org.apache.solr.search.facet.FacetQueryProcessor.process(FacetQuery.java:56)
at org.apache.solr.search.facet.FacetModule.process(FacetModule.java:87)
at 
org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:218)
at 
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1966)
at 
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:777)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:418)
at 
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:207)
at 
org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1419)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:455)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:557)
at 
org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at 
org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1075)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:384)
at 
org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at 
org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1009)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at 
org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at 
org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at 
org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:368)
at 
org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:489)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at 
org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:953)
at 
org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:1014)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:861)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:240)
at 
org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at 
org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at 
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:745)
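
For context, a minimal Lucene-level sketch (reader and field name are assumed; 
API as of the Lucene 5.x era) of how a multi-valued docValues field is exposed as 
SORTED_SET ordinals. The UnInvertedField path in the trace above expects to 
un-invert postings rather than read these ordinals, hence the type mismatch:

{code}
import java.io.IOException;

import org.apache.lucene.index.DocValues;
import org.apache.lucene.index.LeafReader;
import org.apache.lucene.index.SortedSetDocValues;
import org.apache.lucene.util.BytesRef;

public class DumpSortedSet {
  // Sketch only: 'reader' is an already-open LeafReader over the index.
  public static void dump(LeafReader reader, String field) throws IOException {
    SortedSetDocValues dv = DocValues.getSortedSet(reader, field);
    for (int doc = 0; doc < reader.maxDoc(); doc++) {
      dv.setDocument(doc);                        // position on this document
      long ord;
      while ((ord = dv.nextOrd()) != SortedSetDocValues.NO_MORE_ORDS) {
        BytesRef term = dv.lookupOrd(ord);        // ordinal -> term bytes
        System.out.println(doc + " -> " + term.utf8ToString());
      }
    }
  }
}
{code}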

 JSON Facet API
 --

 Key: SOLR-7214

[jira] [Commented] (SOLR-7082) Streaming Aggregation for SolrCloud

2015-03-30 Thread ASF subversion and git services (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387237#comment-14387237
 ] 

ASF subversion and git services commented on SOLR-7082:
---

Commit 1670181 from [~joel.bernstein] in branch 'dev/branches/branch_5x'
[ https://svn.apache.org/r1670181 ]

SOLR-7082: Syntactic sugar for metric gathering

 Streaming Aggregation for SolrCloud
 ---

 Key: SOLR-7082
 URL: https://issues.apache.org/jira/browse/SOLR-7082
 Project: Solr
  Issue Type: New Feature
  Components: SolrCloud
Reporter: Joel Bernstein
 Fix For: Trunk, 5.1

 Attachments: SOLR-7082.patch, SOLR-7082.patch, SOLR-7082.patch, 
 SOLR-7082.patch, SOLR-7082.patch


 This issue provides a general purpose streaming aggregation framework for 
 SolrCloud. An overview of how it works can be found at this link:
 http://heliosearch.org/streaming-aggregation-for-solrcloud/
 This functionality allows SolrCloud users to perform operations that were 
 typically done using map/reduce or a parallel computing platform.
 Here is a brief explanation of how the framework works:
 There is a new Solrj *io* package found in: *org.apache.solr.client.solrj.io*
 Key classes:
 *Tuple*: Abstracts a document in a search result as a Map of key/value pairs.
 *TupleStream*: is the base class for all of the streams. Abstracts search 
 results as a stream of Tuples.
 *SolrStream*: connects to a single Solr instance. You call the read() method 
 to iterate over the Tuples.
 *CloudSolrStream*: connects to a SolrCloud collection and merges the results 
 based on the sort param. The merge takes place in CloudSolrStream itself.
 *Decorator Streams*: wrap other streams to perform operations on the streams. 
 Some examples are the UniqueStream, MergeStream and ReducerStream.
 *Going parallel with the ParallelStream and  Worker Collections*
 The io package also contains the *ParallelStream*, which wraps a TupleStream 
 and sends it to N worker nodes. The workers are chosen from a SolrCloud 
 collection. These Worker Collections don't have to hold any data, they can 
 just be used to execute TupleStreams.
 *The StreamHandler*
 The Worker nodes have a new RequestHandler called the *StreamHandler*. The 
 ParallelStream serializes a TupleStream, before it is opened, and sends it to 
 the StreamHandler on the Worker Nodes.
 The StreamHandler on each Worker node deserializes the TupleStream, opens the 
 stream, iterates the tuples and streams them back to the ParallelStream. The 
 ParallelStream performs the final merge of Metrics and can be wrapped by 
 other Streams to handle the final merged TupleStream.
 *Sorting and Partitioning search results (Shuffling)*
 Each Worker node receives 1/N of the document results via shuffling. There is a 
 partitionKeys parameter that can be included with each TupleStream to 
 ensure that Tuples with the same partitionKeys are shuffled to the same 
 Worker. The actual partitioning is done with a filter query using the 
 HashQParserPlugin. The DocSets from the HashQParserPlugin can be cached in 
 the filter cache which provides extremely high performance hash partitioning. 
 Many of the stream transformations rely on the sort order of the TupleStreams 
 (GroupByStream, MergeJoinStream, UniqueStream, FilterStream etc..). To 
 accommodate this the search results can be sorted by specific keys. The 
 /export handler can be used to sort entire result sets efficiently.
 By specifying the sort order of the results and the partition keys, documents 
 will be sorted and partitioned inside of the search engine. So when the 
 tuples hit the network they are already sorted, partitioned and headed 
 directly to the correct worker node.
 *Extending The Framework*
 To extend the framework you create new TupleStream Decorators, that gather 
 custom metrics or perform custom stream transformations.
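
A minimal usage sketch of the API described above (zkHost, collection and field 
names are assumed; the Map-based constructor and the Tuple EOF sentinel follow the 
description in this issue, so treat this as illustrative rather than definitive):

{code}
import java.util.HashMap;
import java.util.Map;

import org.apache.solr.client.solrj.io.CloudSolrStream;
import org.apache.solr.client.solrj.io.Tuple;
import org.apache.solr.client.solrj.io.TupleStream;

public class StreamExample {
  public static void main(String[] args) throws Exception {
    Map<String, String> params = new HashMap<>();
    params.put("q", "*:*");
    params.put("fl", "id,a_s");
    params.put("sort", "a_s asc");           // defines the merge order across shards

    // Opens a stream against every shard of the collection and merges by sort order.
    TupleStream stream = new CloudSolrStream("zkhost:2181", "collection1", params);
    try {
      stream.open();
      while (true) {
        Tuple tuple = stream.read();
        if (tuple.EOF) {                     // sentinel tuple marks end of stream
          break;
        }
        System.out.println(tuple.getString("id"));
      }
    } finally {
      stream.close();
    }
  }
}
{code}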



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7271) 4.4 client to 4.5+ server compatibility Issue due to DocRouter format

2015-03-30 Thread Gregory Chanan (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387325#comment-14387325
 ] 

Gregory Chanan commented on SOLR-7271:
--

bq.  My own experience and view is that SolrCloud has been changing so fast 
that it's surprising you can get ANY different client/server versions to work 
properly together, and if the client is older than the server, it might be 
impossible.

Well, looks like I found another issue :).  Going to upload another patch.

bq. At this point I don't think we should worry about committing anything like 
this to an old 4.x branch, but for users who may be stuck in a situation where 
they cannot upgrade, it's awesome to have this issue to describe the problem 
and a patch that will allow them to stay functional until they can upgrade.

Sounds reasonable.  I'll mark this as Won't Fix.  Anyone can use the patch here 
or reopen the jira if they think otherwise.

 4.4 client to 4.5+ server compatibility Issue due to DocRouter format
 -

 Key: SOLR-7271
 URL: https://issues.apache.org/jira/browse/SOLR-7271
 Project: Solr
  Issue Type: Bug
  Components: SolrCloud
Affects Versions: 4.5
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 4.10.5

 Attachments: SOLR-7271.patch


 SOLR-4221 changed the router format from e.g.:
 {code}
 ...
 "router":"compositeId",
 ...
 {code}
 to:
 {code}
 ...
 "router":{"name":"compositeId"},
 ...
 {code}
 This later commit: 
 https://github.com/apache/lucene-solr/commit/54a94eedfd5651bb088e8cbd132393b771f5f5c2
  added backwards compatibility in the sense that the server can read the old 
 router format.   But the old 4.4 client can't read the new format, e.g. you 
 get:
 {code}
 org.apache.solr.common.SolrException: Unknown document router 
 '{name=compositeId}'
   at 
 org.apache.solr.common.cloud.DocRouter.getDocRouter(DocRouter.java:46)
   at 
 org.apache.solr.common.cloud.ClusterState.collectionFromObjects(ClusterState.java:289)
   at org.apache.solr.common.cloud.ClusterState.load(ClusterState.java:257)
   at org.apache.solr.common.cloud.ClusterState.load(ClusterState.java:233)
   at 
 org.apache.solr.common.cloud.ZkStateReader.createClusterStateWatchersAndUpdate(ZkStateReader.java:357)
   at com.cloudera.itest.search.util.ZkExecutor.init(ZkExecutor.java:39)
   at 
 com.cloudera.itest.search.util.SearchTestBase.getZkExecutor(SearchTestBase.java:648)
   at 
 com.cloudera.itest.search.util.SearchTestBase.setupSolrURL(SearchTestBase.java:584)
   at 
 com.cloudera.itest.search.util.SearchTestBase.setupEnvironment(SearchTestBase.java:371)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (SOLR-7325) Change Slice state into enum

2015-03-30 Thread Shai Erera (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-7325?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shai Erera updated SOLR-7325:
-
Attachment: SOLR-7325.patch

Thanks Shalin! I was just about to upload a patch when I noticed your comment, 
so the patch includes:

* A new State enum that replaces all the string constants
* All the code moved to use the new enum
* Expanded RECOVERY and CONSTRUCTION states' jdoc per Shalin's suggestions

I also added a CHANGES entry under the Migrating from 5.0 section noting the 
API change. If you think it's overkill, I can move the comment under 
Optimizations.

I will run all the tests tomorrow. A few smoke tests that I picked seem happy 
with these changes, so I would appreciate a review of the changes in the 
meantime.
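
A rough sketch of the shape such an enum takes (member names follow the existing 
string constants; the exact members and javadoc wording are per the patch, so this 
is illustrative only):

{code}
import java.util.Locale;

public enum State {
  ACTIVE,        // the slice serves queries and updates
  CONSTRUCTION,  // the slice is being built, e.g. as the target of a shard split
  RECOVERY,      // the slice is recovering, e.g. replaying from the leader
  INACTIVE;      // the slice was split away and is no longer addressed

  @Override
  public String toString() {
    return name().toLowerCase(Locale.ROOT);   // matches the stored string form
  }

  public static State getState(String stateStr) {
    return State.valueOf(stateStr.toUpperCase(Locale.ROOT));
  }
}
{code}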

 Change Slice state into enum
 

 Key: SOLR-7325
 URL: https://issues.apache.org/jira/browse/SOLR-7325
 Project: Solr
  Issue Type: Improvement
  Components: SolrJ
Reporter: Shai Erera
 Attachments: SOLR-7325.patch, SOLR-7325.patch


 Slice state is currently interacted with as a string. It is IMO not trivial 
 to understand which values it can be compared to, in part because the Replica 
 and Slice states are located in different classes, some repeating the same 
 constant names and values.
 Also, it's not very clear when a Slice gets into which state and what that 
 means.
 I think if it's an enum, and documented briefly in the code, it would help 
 interacting with it through code. I don't mind if we include more extensive 
 documentation in the reference guide / wiki and refer people there for more 
 details.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-Tests-5.x-Java7 - Build # 2861 - Still Failing

2015-03-30 Thread Apache Jenkins Server
Build: https://builds.apache.org/job/Lucene-Solr-Tests-5.x-Java7/2861/

3 tests failed.
FAILED:  org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test

Error Message:
IOException occured when talking to server at: 
http://127.0.0.1:20392/t/qu/c8n_1x3_commits_shard1_replica2

Stack Trace:
org.apache.solr.client.solrj.SolrServerException: IOException occured when 
talking to server at: 
http://127.0.0.1:20392/t/qu/c8n_1x3_commits_shard1_replica2
at 
__randomizedtesting.SeedInfo.seed([FC65B5FF1C8C2C10:74318A25B27041E8]:0)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:570)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:233)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:225)
at 
org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:135)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:483)
at org.apache.solr.client.solrj.SolrClient.commit(SolrClient.java:464)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.oneShardTest(LeaderInitiatedRecoveryOnCommitTest.java:130)
at 
org.apache.solr.cloud.LeaderInitiatedRecoveryOnCommitTest.test(LeaderInitiatedRecoveryOnCommitTest.java:62)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at 
org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at 
org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at 
org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at 
com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at 
com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at 
org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at 
com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)

Re: Creating 5.1 branch soon ...

2015-03-30 Thread Timothy Potter
I'd like to move ahead and create the 5.1 branch later today so that we can
start locking down what's included in the release. I know this adds an
extra merge step for you Adrien for LUCENE-6303, but I hope that's not too
much trouble for you?

Cheers,
Tim

On Fri, Mar 27, 2015 at 5:24 PM, Adrien Grand jpou...@gmail.com wrote:

 Hi Timothy,

 We have an issue with auto caching in Lucene that uncovered some
 issues with using queries as cache keys since some of them are mutable
 (including major one like BooleanQuery and PhraseQuery). I reopened
 https://issues.apache.org/jira/browse/LUCENE-6303 and provided a patch
 to disable this feature so that we can release. I can hopefully commit
 it early next week.

 On Wed, Mar 25, 2015 at 6:17 PM, Timothy Potter thelabd...@gmail.com
 wrote:
  Hi,
 
  I'd like to create the 5.1 branch soon'ish, thinking maybe late tomorrow
 or
  early Friday.
 
  If I understand correctly, that implies that new features should not be
  added after that point without some agreement among the committers about
  whether it should be included?
 
  Let me know if this is too soon and when a more ideal date/time would be.
 
  Sincerely,
 
  Your friendly 5.1 release manager (aka thelabdude)



 --
 Adrien

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




[jira] [Commented] (SOLR-7312) REST API is not REST

2015-03-30 Thread Noble Paul (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387595#comment-14387595
 ] 

Noble Paul commented on SOLR-7312:
--

bq. I think we should aim for true REST for all Admin APIs in the future.

I guess the discussion here was not about admin APIs. This probably was about 
the schema and config APIs. I have given my reasons why they are done this way.

I believe true REST, or true adherence to any standard, is a huge ball and 
chain. The practitioners of the standard will come to hunt you down the moment 
we make a small deviation.

bq. Perhaps we should also disallow GET calls to the {{/update}} handler by 
default?

 I'm +1 to disallow GET for write APIs. But that is not pure REST. I don't 
have the energy or time to modify all our existing APIs to satisfy any 
standard. But I can spend time on making them easier or more secure.




 REST API is not REST
 --

 Key: SOLR-7312
 URL: https://issues.apache.org/jira/browse/SOLR-7312
 Project: Solr
  Issue Type: Improvement
  Components: Server
Affects Versions: 5.0
Reporter: Mark Haase
Assignee: Noble Paul

 The documentation refers to a REST API over and over, and yet I don't see a 
 REST API. I see an HTTP API but not a REST API. Here are a few things the 
 HTTP API does that are not RESTful:
 * Offers RPC verbs instead of resources/nouns. (E.g. schema API has commands 
 like add-field, add-copy-field, etc.)
 * Tunnels non-idempotent requests (like creating a core) through idempotent 
 HTTP verb (GET).
 * Tunnels deletes through HTTP GET.
 * PUT/POST confusion, POST used to update a named resource, such as the Blob 
 API.
 * Returns `200 OK` HTTP code even when the command fails. (Try adding a field 
 to your schema that already exists. You get `200 OK` and an error message 
 hidden in the payload. Try calling a collections API when you're using 
 non-cloud mode: `200 OK` and an error message in the payload. Gah.)
 * Does not provide link relations.
 * HTTP status line contains a JSON payload (!) and no 'Content-Type' header 
 for some failed commands, like `curl -X DELETE 
 http://solr:8983/solr/admin/cores/foo`
 * Content negotiation is done via query parameter (`wt=json`), instead of 
 `Accept` header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7312) REST API is not REST

2015-03-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387542#comment-14387542
 ] 

Jan Høydahl commented on SOLR-7312:
---

I think we should aim for true REST for all Admin APIs in the future. Users may 
perhaps need to send more requests to do the same thing, but that's a minor 
problem. Overloading a single POST request with internal command verbs does not 
feel right. It's not really hard to design the APIs so that there is a way to 
avoid a reload after every request.

Perhaps we should also disallow GET calls to the {{/update}} handler by default?

 REST API is not REST
 --

 Key: SOLR-7312
 URL: https://issues.apache.org/jira/browse/SOLR-7312
 Project: Solr
  Issue Type: Improvement
  Components: Server
Affects Versions: 5.0
Reporter: Mark Haase
Assignee: Noble Paul

 The documentation refers to a REST API over and over, and yet I don't see a 
 REST API. I see an HTTP API but not a REST API. Here are a few things the 
 HTTP API does that are not RESTful:
 * Offers RPC verbs instead of resources/nouns. (E.g. schema API has commands 
 like add-field, add-copy-field, etc.)
 * Tunnels non-idempotent requests (like creating a core) through idempotent 
 HTTP verb (GET).
 * Tunnels deletes through HTTP GET.
 * PUT/POST confusion, POST used to update a named resource, such as the Blob 
 API.
 * Returns `200 OK` HTTP code even when the command fails. (Try adding a field 
 to your schema that already exists. You get `200 OK` and an error message 
 hidden in the payload. Try calling a collections API when you're using 
 non-cloud mode: `200 OK` and an error message in the payload. Gah.)
 * Does not provide link relations.
 * HTTP status line contains a JSON payload (!) and no 'Content-Type' header 
 for some failed commands, like `curl -X DELETE 
 http://solr:8983/solr/admin/cores/foo`
 * Content negotiation is done via query parameter (`wt=json`), instead of 
 `Accept` header.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Commented] (SOLR-7236) Securing Solr (umbrella issue)

2015-03-30 Thread JIRA

[ 
https://issues.apache.org/jira/browse/SOLR-7236?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14387493#comment-14387493
 ] 

Jan Høydahl commented on SOLR-7236:
---

Let the topic of this JIRA be securing Solr. For the near future we can assume 
the container will continue to be Jetty, but the security APIs we choose should 
not be tightly tied to Jetty or servlet containers. Feel free to find or create 
another JIRA to discuss container-switch topics :-)

 Securing Solr (umbrella issue)
 --

 Key: SOLR-7236
 URL: https://issues.apache.org/jira/browse/SOLR-7236
 Project: Solr
  Issue Type: New Feature
Reporter: Jan Høydahl
  Labels: Security

 This is an umbrella issue for adding security to Solr. The discussion here 
 should discuss real user needs and high-level strategy, before deciding on 
 implementation details. All work will be done in sub tasks and linked issues.
 Solr has not traditionally concerned itself with security, and it has been a 
 general view among the committers that it may be better to stay out of it to 
 avoid blood on our hands in this minefield. Still, Solr has lately seen 
 SSL support, securing of ZK, and signing of jars, and discussions have begun 
 about securing operations in Solr.
 Some of the topics to address are
 * User management (flat file, AD/LDAP etc)
 * Authentication (Admin UI, Admin and data/query operations. Tons of auth 
 protocols: basic, digest, oauth, pki..)
 * Authorization (who can do what with what API, collection, doc)
 * Pluggability (no user's needs are equal)
 * And we could go on and on but this is what we've seen the most demand for



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[jira] [Updated] (LUCENE-5879) Add auto-prefix terms to block tree terms dict

2015-03-30 Thread Michael McCandless (JIRA)

 [ 
https://issues.apache.org/jira/browse/LUCENE-5879?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Michael McCandless updated LUCENE-5879:
---
Attachment: LUCENE-5879.patch

New patch, fixing the nocommits.  I think it's ready ... I'll beast tests for a 
while on it.

I don't think we should rush this into 5.1.

 Add auto-prefix terms to block tree terms dict
 --

 Key: LUCENE-5879
 URL: https://issues.apache.org/jira/browse/LUCENE-5879
 Project: Lucene - Core
  Issue Type: New Feature
  Components: core/codecs
Reporter: Michael McCandless
Assignee: Michael McCandless
 Fix For: 5.0, Trunk

 Attachments: LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, LUCENE-5879.patch, 
 LUCENE-5879.patch, LUCENE-5879.patch


 This cool idea to generalize numeric/trie fields came from Adrien:
 Today, when we index a numeric field (LongField, etc.) we pre-compute
 (via NumericTokenStream) outside of indexer/codec which prefix terms
 should be indexed.
 But this can be inefficient: you set a static precisionStep, and
 always add those prefix terms regardless of how the terms in the field
 are actually distributed.  Yet typically in real world applications
 the terms have a non-random distribution.
 So, it should be better if instead the terms dict decides where it
 makes sense to insert prefix terms, based on how dense the terms are
 in each region of term space.
 This way we can speed up query time for both term (e.g. infix
 suggester) and numeric ranges, and it should let us use less index
 space and get faster range queries.
  
 This would also mean that min/maxTerm for a numeric field would now be
 correct, vs today where the externally computed prefix terms are
 placed after the full-precision terms, causing hairy code like
 NumericUtils.getMaxInt/Long.  So optimizations like LUCENE-5860 become
 feasible.
 The terms dict can also do tricks not possible if you must live on top
 of its APIs, e.g. to handle the adversary/over-constrained case when a
 given prefix has too many terms following it but finer prefixes
 have too few (what block tree calls "floor" term blocks).
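
A toy, self-contained sketch (no Lucene APIs; terms and doc ids are invented purely 
for illustration) of the core idea: where a region of term space is dense, insert 
one synthetic auto-prefix term whose postings are the union of the terms below it, 
so a prefix or range query visits one term instead of many:

{code}
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.TreeMap;

public class AutoPrefixToy {
  public static void main(String[] args) {
    // Toy "terms dict": sorted terms -> postings (doc ids).
    TreeMap<String, List<Integer>> terms = new TreeMap<>();
    terms.put("1700", Arrays.asList(1));
    terms.put("1701", Arrays.asList(2, 3));
    terms.put("1702", Arrays.asList(4));
    terms.put("1799", Arrays.asList(5));
    terms.put("42",   Arrays.asList(6));

    // The region under "17" is dense, so index one auto-prefix term "17*"
    // whose postings union the region; the decision is driven by observed
    // term density, not by a fixed precisionStep.
    List<Integer> union = new ArrayList<>();
    for (List<Integer> postings : terms.subMap("17", "18").values()) {
      union.addAll(postings);
    }

    // A query matching prefix "17" now reads one synthetic term...
    System.out.println("auto-prefix 17* -> " + union);
    // ...instead of walking every real term in the range [17, 18):
    System.out.println("raw terms it would otherwise visit: "
        + terms.subMap("17", "18").size());
  }
}
{code}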



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Soft commits and segments

2015-03-30 Thread Erick Erickson
Is it the expected case that when a soft commit happens, a new segment
is opened? 'Cause that's what I'm seeing. Thinking about it, I don't
see how Lucene could successfully MMap the underlying disk files if
new segments weren't opened, and if they were all held in Java's
memory: BOOM (that is, a Big OOM).

So I'm guessing at this point that I need to revise my model from
"soft commits do not write new segments" to "soft commits do not write
new segments _durably_".

Thanks,
Erick
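
For reference, a minimal SolrJ sketch of the distinction (URL and collection are 
assumed; the three-argument form is commit(waitFlush, waitSearcher, softCommit)):

{code}
import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class CommitExample {
  public static void main(String[] args) throws Exception {
    HttpSolrClient client = new HttpSolrClient("http://localhost:8983/solr/collection1");
    try {
      // Soft commit: opens a new searcher over the new segments without
      // fsyncing them -- visibility without durability.
      client.commit(true, true, true);

      // Hard commit: flushes and fsyncs the segments durably (and, with
      // waitSearcher=true, also opens a new searcher).
      client.commit(true, true, false);
    } finally {
      client.close();
    }
  }
}
{code}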

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



5.1 branch created

2015-03-30 Thread Timothy Potter
The 5.1 branch has been created -
https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_1/

Here's a friendly reminder (from the wiki) on the agreed process for a
minor release:

* No new features may be committed to the branch.

* Documentation patches, build patches and serious bug fixes may be
committed to the branch. However, you should submit all patches you
want to commit to Jira first to give others the chance to review and
possibly vote against the patch. Keep in mind that it is our main
intention to keep the branch as stable as possible.

* All patches that are intended for the branch should first be
committed to trunk, merged into the minor release branch, and then
into the current release branch.

* Normal trunk and minor release branch development may continue as
usual. However, if you plan to commit a big change to the trunk while
the branch feature freeze is in effect, think twice: can't the
addition wait a couple more days? Merges of bug fixes into the branch
may become more difficult.

* Only Jira issues with Fix version 5.1 and priority Blocker will
delay a release candidate build.

FYI - We've already agreed that LUCENE-6303 should get committed to
this branch when it is ready.

On Mon, Mar 30, 2015 at 2:08 PM, Timothy Potter thelabd...@gmail.com wrote:
 I'd like to move ahead and create the 5.1 branch later today so that we can
 start locking down what's included in the release. I know this adds an extra
 merge step for you Adrien for LUCENE-6303, but I hope that's not too much
 trouble for you?

 Cheers,
 Tim

 On Fri, Mar 27, 2015 at 5:24 PM, Adrien Grand jpou...@gmail.com wrote:

 Hi Timothy,

 We have an issue with auto caching in Lucene that uncovered some
 issues with using queries as cache keys since some of them are mutable
 (including major one like BooleanQuery and PhraseQuery). I reopened
 https://issues.apache.org/jira/browse/LUCENE-6303 and provided a patch
 to disable this feature so that we can release. I can hopefully commit
 it early next week.

 On Wed, Mar 25, 2015 at 6:17 PM, Timothy Potter thelabd...@gmail.com
 wrote:
  Hi,
 
  I'd like to create the 5.1 branch soon'ish, thinking maybe late tomorrow
  or
  early Friday.
 
  If I understand correctly, that implies that new features should not be
  added after that point without some agreement among the committers about
  whether they should be included?
 
  Let me know if this is too soon and when a more ideal date/time would
  be.
 
  Sincerely,
 
  Your friendly 5.1 release manager (aka thelabdude)



 --
 Adrien

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Linux (64bit/jdk1.8.0_60-ea-b06) - Build # 12162 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Linux/12162/
Java: 64bit/jdk1.8.0_60-ea-b06 -XX:-UseCompressedOops -XX:+UseSerialGC

1 tests failed.
FAILED:  org.apache.solr.TestDistributedSearch.test

Error Message:
Error from server at http://127.0.0.1:42180//collection1: java.lang.NullPointerException
  at org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:102)
  at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:744)
  at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:727)
  at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:355)
  at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
  at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
  at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
  at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:103)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
  at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
  at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
  at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
  at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
  at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
  at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
  at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
  at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1061)
  at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
  at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:97)
  at org.eclipse.jetty.server.Server.handle(Server.java:497)
  at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:310)
  at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:257)
  at org.eclipse.jetty.io.AbstractConnection$2.run(AbstractConnection.java:540)
  at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:635)
  at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:555)
  at java.lang.Thread.run(Thread.java:745)

Stack Trace:
org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException: Error from server at http://127.0.0.1:42180//collection1: java.lang.NullPointerException
at org.apache.solr.search.grouping.distributed.responseprocessor.TopGroupsShardResponseProcessor.process(TopGroupsShardResponseProcessor.java:102)
at org.apache.solr.handler.component.QueryComponent.handleGroupedResponses(QueryComponent.java:744)
at org.apache.solr.handler.component.QueryComponent.handleResponses(QueryComponent.java:727)
at org.apache.solr.handler.component.SearchHandler.handleRequestBody(SearchHandler.java:355)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:143)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1984)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:829)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:446)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:220)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.apache.solr.client.solrj.embedded.JettySolrRunner$DebugFilter.doFilter(JettySolrRunner.java:103)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlets.UserAgentFilter.doFilter(UserAgentFilter.java:83)
at org.eclipse.jetty.servlets.GzipFilter.doFilter(GzipFilter.java:364)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1652)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:585)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:221)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1127)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:515)
at

[jira] [Updated] (SOLR-6551) ConcurrentModificationException in UpdateLog

2015-03-30 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-6551?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-6551:

Affects Version/s: 4.10.4
Fix Version/s: 5.0 (was: 4.10)

Actually this fix is only in 5.0 because it was committed to branch_5x only. 
Since it wasn't mentioned in the change log, nobody back-ported it to branch_4x.

 ConcurrentModificationException in UpdateLog
 

 Key: SOLR-6551
 URL: https://issues.apache.org/jira/browse/SOLR-6551
 Project: Solr
  Issue Type: Bug
Affects Versions: 4.7, 4.8, 4.9, 4.10.4
Reporter: Noble Paul
Assignee: Noble Paul
 Fix For: 5.0

 Attachments: SOLR-6551.patch


 {code}
 null:java.util.ConcurrentModificationException
 at java.util.LinkedList$ListItr.checkForComodification(Unknown Source)
 at java.util.LinkedList$ListItr.next(Unknown Source)
 at org.apache.solr.update.UpdateLog.getTotalLogsSize(UpdateLog.java:199)
 at org.apache.solr.update.DirectUpdateHandler2.getStatistics(DirectUpdateHandler2.java:871)
 at org.apache.solr.handler.admin.SolrInfoMBeanHandler.getMBeanInfo(SolrInfoMBeanHandler.java:159)
 {code}
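
 For anyone hitting this on an older branch, the usual fix pattern is to stop
 iterating the shared log list unguarded; a minimal sketch (hypothetical field
 and method names, not necessarily the committed patch):

{code}
import java.util.ArrayDeque;
import java.util.Deque;

class TLogStats {
  // Illustrative stand-ins for UpdateLog's internal state.
  private final Deque<Long> logSizes = new ArrayDeque<>();

  synchronized void addLog(long size) {
    logSizes.add(size);
  }

  // Iterating under the same lock that guards mutation (or over a
  // snapshot) avoids the ConcurrentModificationException seen in
  // getTotalLogsSize().
  synchronized long getTotalLogsSize() {
    long total = 0;
    for (long size : logSizes) {
      total += size;
    }
    return total;
  }
}
{code}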



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-trunk-Windows (32bit/jdk1.8.0_40) - Build # 4617 - Still Failing!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-trunk-Windows/4617/
Java: 32bit/jdk1.8.0_40 -client -XX:+UseG1GC

1 tests failed.
FAILED:  junit.framework.TestSuite.org.apache.solr.core.TestSolrConfigHandler

Error Message:
Could not remove the following files (in the order of attempts):
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf\params.json: java.nio.file.FileSystemException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf\params.json: The process cannot access the file because it is being used by another process.
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001

Stack Trace:
java.io.IOException: Could not remove the following files (in the order of attempts):
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf\params.json: java.nio.file.FileSystemException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf\params.json: The process cannot access the file because it is being used by another process.
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1\conf
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010\collection1
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010: java.nio.file.DirectoryNotEmptyException: C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001\tempDir-010
   C:\Users\JenkinsSlave\workspace\Lucene-Solr-trunk-Windows\solr\build\solr-core\test\J1\temp\solr.core.TestSolrConfigHandler 8D36FCAB69BE0F2-001: java.nio.file.DirectoryNotEmptyException: 

Re: 5.1 branch created

2015-03-30 Thread Timothy Potter
Trying to follow the directions here -
http://wiki.apache.org/lucene-java/ReleaseTodo

But there's no bumpVersion.py script in dev-tools/scripts? Tried
addVersion.py but it's not cooperating???

[~/dev/lw/projects/lucene_solr_5_1]$ python3 -u dev-tools/scripts/addVersion.py
Traceback (most recent call last):
  File "dev-tools/scripts/addVersion.py", line 214, in <module>
    main()
  File "dev-tools/scripts/addVersion.py", line 185, in main
    c = read_config()
  File "dev-tools/scripts/addVersion.py", line 163, in read_config
    parser.add_argument('version', type=Version.parse)
NameError: global name 'Version' is not defined
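
That NameError just means no Version is in scope when read_config() builds the
parser; a minimal sketch of the shape the script expects (this Version class
is a hypothetical stand-in, not the real helper in dev-tools):

{code}
import argparse
import re

class Version:
    # Hypothetical stand-in for the helper addVersion.py expects in scope.
    def __init__(self, major, minor, bugfix):
        self.major, self.minor, self.bugfix = major, minor, bugfix

    @classmethod
    def parse(cls, value):
        # argparse calls this for the 'version' positional argument.
        match = re.match(r'^(\d+)\.(\d+)\.(\d+)$', value)
        if match is None:
            raise argparse.ArgumentTypeError('expected X.Y.Z, got %r' % value)
        return cls(*map(int, match.groups()))

    def __str__(self):
        return '%d.%d.%d' % (self.major, self.minor, self.bugfix)

parser = argparse.ArgumentParser()
parser.add_argument('version', type=Version.parse)  # the line that failed
print(parser.parse_args(['5.1.0']).version)
{code}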

On Mon, Mar 30, 2015 at 7:57 PM, Timothy Potter thelabd...@gmail.com wrote:
 The 5.1 branch has been created -
 https://svn.apache.org/repos/asf/lucene/dev/branches/lucene_solr_5_1/

 Here's a friendly reminder (from the wiki) on the agreed process for a
 minor release:

 * No new features may be committed to the branch.

 * Documentation patches, build patches and serious bug fixes may be
 committed to the branch. However, you should submit all patches you
 want to commit to Jira first to give others the chance to review and
 possibly vote against the patch. Keep in mind that it is our main
 intention to keep the branch as stable as possible.

 * All patches that are intended for the branch should first be
 committed to trunk, merged into the minor release branch, and then
 into the current release branch.

 * Normal trunk and minor release branch development may continue as
 usual. However, if you plan to commit a big change to the trunk while
 the branch feature freeze is in effect, think twice: can't the
 addition wait a couple more days? Merges of bug fixes into the branch
 may become more difficult.

 * Only Jira issues with Fix version 5.1 and priority Blocker will
 delay a release candidate build.

 FYI - We've already agreed that LUCENE-6303 should get committed to
 this branch when it is ready.

 On Mon, Mar 30, 2015 at 2:08 PM, Timothy Potter thelabd...@gmail.com wrote:
 I'd like to move ahead and create the 5.1 branch later today so that we can
 start locking down what's included in the release. I know this adds an extra
 merge step for you, Adrien, for LUCENE-6303, but I hope that's not too much
 trouble for you?

 Cheers,
 Tim

 On Fri, Mar 27, 2015 at 5:24 PM, Adrien Grand jpou...@gmail.com wrote:

 Hi Timothy,

 We have an issue with auto caching in Lucene that uncovered some
 issues with using queries as cache keys, since some of them are mutable
 (including major ones like BooleanQuery and PhraseQuery). I reopened
 https://issues.apache.org/jira/browse/LUCENE-6303 and provided a patch
 to disable this feature so that we can release. I can hopefully commit
 it early next week.

 On Wed, Mar 25, 2015 at 6:17 PM, Timothy Potter thelabd...@gmail.com
 wrote:
  Hi,
 
  I'd like to create the 5.1 branch soon'ish, thinking maybe late tomorrow
  or
  early Friday.
 
  If I understand correctly, that implies that new features should not be
  added after that point without some agreement among the committers about
  whether they should be included?
 
  Let me know if this is too soon and when a more ideal date/time would
  be.
 
  Sincerely,
 
  Your friendly 5.1 release manager (aka thelabdude)



 --
 Adrien

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org



-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



[JENKINS] Lucene-Solr-5.x-Windows (32bit/jdk1.8.0_40) - Build # 4500 - Failure!

2015-03-30 Thread Policeman Jenkins Server
Build: http://jenkins.thetaphi.de/job/Lucene-Solr-5.x-Windows/4500/
Java: 32bit/jdk1.8.0_40 -server -XX:+UseG1GC

1 tests failed.
FAILED:  org.apache.solr.handler.TestSolrConfigHandlerCloud.test

Error Message:
Could not get expected value 'P val' for path 'response/params/y/p' full 
output: {"responseHeader":{"status":0,"QTime":0},"response":{"znodeVersion":1,"params":{"x":{"a":"A val","b":"B val","":{"v":0}},"y":{"c":"CY val","b":"BY val","i":20,"d":["val 1","val 2"],"":{"v":0}

Stack Trace:
java.lang.AssertionError: Could not get expected value 'P val' for path 
'response/params/y/p' full output: {
  "responseHeader":{
    "status":0,
    "QTime":0},
  "response":{
    "znodeVersion":1,
    "params":{
      "x":{
        "a":"A val",
        "b":"B val",
        "":{"v":0}},
      "y":{
        "c":"CY val",
        "b":"BY val",
        "i":20,
        "d":[
          "val 1",
          "val 2"],
        "":{"v":0}
at __randomizedtesting.SeedInfo.seed([3B5AEE6BE070782D:B30ED1B14E8C15D5]:0)
at org.junit.Assert.fail(Assert.java:93)
at org.junit.Assert.assertTrue(Assert.java:43)
at org.apache.solr.core.TestSolrConfigHandler.testForResponseElement(TestSolrConfigHandler.java:406)
at org.apache.solr.handler.TestSolrConfigHandlerCloud.testReqParams(TestSolrConfigHandlerCloud.java:245)
at org.apache.solr.handler.TestSolrConfigHandlerCloud.test(TestSolrConfigHandlerCloud.java:78)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1627)
at com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:836)
at com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:872)
at com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:886)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsFixedStatement.callStatement(BaseDistributedSearchTestCase.java:960)
at org.apache.solr.BaseDistributedSearchTestCase$ShardsRepeatRule$ShardsStatement.evaluate(BaseDistributedSearchTestCase.java:935)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:50)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:49)
at org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:65)
at org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:48)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:365)
at com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:798)
at com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:458)
at com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:845)
at com.carrotsearch.randomizedtesting.RandomizedRunner$3.evaluate(RandomizedRunner.java:747)
at com.carrotsearch.randomizedtesting.RandomizedRunner$4.evaluate(RandomizedRunner.java:781)
at com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:792)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:53)
at org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:46)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:42)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:39)
at com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
at

Re: Soft commits and segments

2015-03-30 Thread Robert Muir
Yes, pending docs buffered in IndexWriter are flushed when you reopen.

One way to avoid revising your model is to use NRTCachingDirectory.
When properly configured, this can defer the writes to the filesystem
until they truly need to be there, e.g. at indexwriter.commit.
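
A rough sketch of that setup (assuming the Lucene 5.x NRTCachingDirectory
and NRT-reader APIs; the path and size limits here are arbitrary):

{code}
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field.Store;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.NRTCachingDirectory;

public class NrtCachingDemo {
  public static void main(String[] args) throws Exception {
    // Keep newly flushed segments and small merges (<= 5MB) in RAM,
    // up to 60MB total; larger files go straight to the delegate.
    NRTCachingDirectory dir = new NRTCachingDirectory(
        FSDirectory.open(Paths.get("/tmp/nrt-demo")), 5.0, 60.0);

    IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(new StandardAnalyzer()));
    Document doc = new Document();
    doc.add(new TextField("body", "soft committed doc", Store.NO));
    writer.addDocument(doc);

    // NRT reopen: pending docs are flushed to new (possibly RAM-cached)
    // segments -- the "new segments, but not durable yet" state.
    DirectoryReader reader = DirectoryReader.open(writer, true);
    System.out.println("numDocs=" + reader.numDocs());

    reader.close();
    writer.commit();  // now the segments truly reach the filesystem
    writer.close();
    dir.close();
  }
}
{code}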

On Mon, Mar 30, 2015 at 7:55 PM, Erick Erickson erickerick...@gmail.com wrote:
 Is it the expected case that when a soft commit happens, a new segment
 is opened? 'Cause that's what I'm seeing. Thinking about it, I don't
 see how Lucene could successfully MMap the underlying disk files if
 new segments weren't opened, and if they were all held in Java's
 memory... BOOM (that is, a Big OOM).

 So I'm guessing at this point that I need to revise my model from
 "soft commits do not write new segments" to "soft commits do not write
 new segments _durably_".

 Thanks,
 Erick

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org


